My edit was really minor, but seeing this still makes me happy:
From the redis manifesto:
"We optimize for joy. We believe writing code is a lot of hard work, and the only way it can be worth is by enjoying it. When there is no longer joy in writing code, the best thing to do is stop. To prevent this, we'll avoid taking paths that will make Redis less of a joy to develop."
Amen.
As a developer I've found new appreciation for the adage "a man* is only as good as his word." In business and in life, you can't depend on people who don't follow through on promises, especially when stakes are high.
Amongst programmers, our code is our word. If collaborators can't trust that our code is of high quality, we are effectively a time and energy sink for those around us, forcing on others the task of testing and validating our work. The analogue in the non-tech world would be someone whose promises require second guessing, backup plans, and a non-trivial dose of anxiety.
A developer who pushes high-quality commits, like someone who follows through on his or her word, is a refreshing person to interact with, because the cost of interaction is negligible, making the net value of all contributions significantly higher.
Push good code.
*note: excuse the gender bias in the quote - fortunately women are better with promises and code commits
A case study of the physical world bleeding into the technical . . . and a case study in idiotic StackExchange questions.
I always hated "programming problems" when I was interviewing for jobs. I now also hate them as someone who is trying to hire programmers.
As a candidate, I felt like these kinds of problems had little relationship to real work. How often do I really need to find the number of ways one could walk down a set of stairs (a classic recursion interview question), or figure out what floor an egg breaks on? (Nice one, Google.) And what am I supposed to do if I already know the answer? (Which was often.) There are multiple levels of weirdness at play -- not exactly what I'd call enjoyable.
As an employer, I hate them for an entirely different reason. My objective during an interview is to find out two things: a) is this person smart, and b) can we work well together? By giving them a contrived problem that I know the answer to (and they know I know the answer), I've immediately forced an artificial interrogation-style power dynamic between us and likely amplified their anxiety by a couple orders of magnitude. My goal is to get as far away from that dynamic as possible.
To the best of my ability, I want to simulate an hour or two of work and see whether we can collaborate effectively. Here's the rough approach I've worked out so far; comments are appreciated, as I'm pretty new at this myself:
In both cases of reviewing your respective work, you are admitting that you don't know the answer to the problem. In the case of your own problem, you're also admitting that it's something you could use help with. You have now mitigated the power dynamic somewhat, which means the candidate can feel comfortable in throwing out ideas. You've moved from waiting for the "right" solution to asking for ideas on a real problem.
You come away with a good indication of their ability, as well as a decent indication for how well you would work together. And a possible side benefit may be new insight into the problems that you're struggling with. It takes a little extra courage to fly without a net, but that's the whole point - leveling the field so you can actually talk.

If this sounds sane to you and you're looking for dev work in San Francisco, please get in touch.
I run into a fairly common problem in rails views when I'm pulling data from multiple tables. To optimize on db access, I use :select to pre-load the data from the joined tables. For example, let's say I want to list all users with their respective cities:
Then I don't have to load the Address model to display a user's city:
The problem is that my user partial is now coupled with my named scope: if I render the partial without using the scope, I'll throw an exception. To avoid the problem, I add an accessor method to the User model that will pull the pre-loaded data if available, and revert to using the Address model:
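The accessor might look something like this -- a minimal sketch using plain-Ruby stand-ins for the models (the `city` column and `Address` association are taken from the example; the attribute-hash mechanics are simplified):

```ruby
# Stand-ins for the models, to show the fallback pattern: use the
# city column pre-loaded by the named scope's :select when present,
# otherwise load it through the Address association.
Address = Struct.new(:city)

class User
  def initialize(attributes, address = nil)
    @attributes = attributes  # simulates ActiveRecord's attribute hash
    @address = address
  end

  def city
    # pre-loaded value wins; fall back to the association
    @attributes["city"] || @address.city
  end
end
```

In the real model, the fallback branch triggers the extra Address query the scope was avoiding, which is exactly the inefficiency described below.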
The partial will be inefficient, but I'd prefer to find the error when profiling view times rather than through exceptions.
. . . so much for Javascript fallback.
This functionality is included in some ruby gems (Utility Belt seems to be the most popular), but I'm more of a "roll-your-own" kinda guy. Not to mention that the gem is a little overkill for printing out command history.
Here's what I came up with:
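Something along these lines (a sketch of the idea; the method name is mine). Inside irb, Readline::HISTORY holds the session's commands:

```ruby
require 'readline'

# Print (and return) the last n commands from the irb session history.
def history(n = 25)
  entries = Readline::HISTORY.to_a.last(n)
  entries.each_with_index { |cmd, i| puts "#{i + 1}: #{cmd}" }
  entries
end
```

Drop it in your ~/.irbrc and `history` is available in every session.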
When authenticating users via facebook, you have a laundry list of possible permissions to ask for. If you ask for no permissions, the user gets the following login window:
Asking for all permissions at once would be a little intimidating for users . . .
Confirming the email alert I recently received, Google appears to be strategerizing in social with a renewed push for Orkut: it's listed on page 1 of their new products page (despite launching in 2004).
My browsing speed was noticeably & absurdly fast, so I ran a quick check:
My old friend Orkut sent me an email yesterday, the first in a long time . . .
I actually googled this and found a workable but ugly solution:
## view code
<%= date_select('range', 'start_date', :order => [:month, :day, :year])%>
## controller code
@start_date = Date.civil(params[:range][:"start_date(1i)"].to_i, params[:range][:"start_date(2i)"].to_i, params[:range][:"start_date(3i)"].to_i)
I needed to include the hour and minute as well, and didn't want to cram more arguments into the Date.civil call (actually Time.zone.local), so I cleaned up the code a bit.
Hoping to leave the internet a little better for the next guy, I thought I'd post the code.
## view code
<%= datetime_select('range_start', 'date', :order => [:month, :day, :year, :hour,:minute]) %>
## controller code
@start_date = Time.zone.local(*params[:range_start].sort.map(&:last).map(&:to_i))
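The one-liner works because Rails submits datetime_select fields as "date(1i)" through "date(5i)" (year, month, day, hour, minute), and those keys sort back into exactly the order Time.zone.local expects. Here's the trick in plain Ruby, with sample values of my own:

```ruby
# Simulated params[:range_start] hash, as posted by datetime_select.
range_start = {
  "date(1i)" => "2010", "date(2i)" => "6", "date(3i)" => "15",
  "date(4i)" => "14",   "date(5i)" => "30"
}

# Sorting the string keys restores year-month-day-hour-minute order,
# so the values can be splatted straight into Time.zone.local.
args = range_start.sort.map(&:last).map(&:to_i)
# args is [2010, 6, 15, 14, 30]
```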
credit: http://bigeyedeer.wordpress.com/2008/07/15/this-cartoon-wrote-a-sweary-word-o...
I thought Happy Town had a decent chance of being entertaining:
"Executive producer Josh Appelbaum and others on the show are huge fans of the Twin Peaks"
"Executive producer Scott Rosenberg says he's more a Stephen King fan."
"So if you think it's too much like Twin Peaks, blame them. If you think it's not enough like Twin Peaks, blame me."
Facebook has decided to do away with notifications - the little messages notifying you of things friends did on platform applications, or giving you updates on applications you have installed:
For those that are unfamiliar, Groupon is a collective buying site, similar to Woot and BuyWithMe. Borrowing from the Wikipedia entry:
The Groupon works as an assurance contract using ThePoint's platform: if a certain number of people sign up for the offer, then the deal becomes available to all; if the predetermined minimum is not met, no one gets the deal that day. This reduces risk for retailers, who can treat the coupons as quantity discounts as well as sales tools. Groupon makes money by getting a cut of the deal from the retailers.
Sounds great, right? Retailers build sales by offering a one-time discount, contingent on quantity, and consumers get a good deal on some product or service, with Groupon taking a little cut.
Well, as with anything with even trivial complexity, the devil is in the details.
I was overdue for a dental exam, and my insurance doesn't exactly have stellar dental coverage. I happened to be checking out Groupon when the deal of the day was a $60 exam/cleaning/x-rays package at a San Francisco dental office.
Great! Right?
I bought into the deal, and actually waited about a month before booking my appointment. Well, before trying to book my appointment. When I called the office, I was greeted not by the receptionist, but by this pre-recorded message:
The following information applies only to our Groupon.com patients that have not yet scheduled with us but who have already called or emailed our office. We are currently responding to Groupon.com user calls and emails in the order in which they were received. If you have already called or emailed to schedule your Groupon.com appointment, we ask that you be patient and please not call or email us again.
That's when I decided to run the numbers, something I probably should have done beforehand:
Now I feel like an idiot, and simultaneously realize the brilliance of the Groupon model - they can make interest on the float of non-cashed groupons, which is huge for services that can't be consumed instantaneously. [edit - I'm told that Groupon pays out vendors immediately on the deal, which in theory should mean the vendors can sit on the cash until redeemed]
I have to mention that after a simple email to Groupon my money was quickly refunded, so I have no beef with Groupon. The experience made me wonder, though: how many vendors would be prepared for the massive Groupon demand spike?
Here are some of the recent vendors/deals and quantities issued for Groupon:
Here are some of the discussions centered around the deals:
I’ve been trying to book an appointment online and it just says “No available times were found” for every date in January and February… Is that just because they are overwhelmed by the number of Groupons purchased?
They made me feel like it was my fault that they had too many people buy this (1700 people) so they just can’t handle the capacity, so they haven’t been able to get back to everyone.
I just called and found their phone number was disconnected, too.
Not all the comments were negative -- for goods/services that are typically provided in high volume, people seemed extremely happy with their Groupon experience. For the time-intensive services, the user response was a bit more spotty.
Again, my experience wasn't great, but getting a refund was no problem whatsoever. It may take a little time for consumers and vendors to flesh out the best way to manage their respective Groupon experiences, but there seems to be a real benefit from getting consumers excited about buying together on the day's deal. So I expect Groupon to be around for a while, continuing to incite mania-induced group purchasing of local goods and services.
As a side note, am I the only person who wonders why a "Collective Action Engine" is necessary to drive the Groupon site? It sounds a lot like business-speak for one line of code saying "don't issue deal until N customers have bought." Maybe I'm missing something.
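For what it's worth, that one line might look like this (a joke-level sketch; the method name is mine):

```ruby
# The assurance contract boiled down: the deal is only issued once
# the tipping point of committed buyers is reached.
def deal_issued?(purchases, tipping_point)
  purchases >= tipping_point
end
```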
That horse has been beaten to death, and there is ample evidence that it does scale with a little elbow grease where necessary.
It's also commonly known that many of the rails convenience methods aren't terribly good at making complex queries.
What I've just realized is that some of the seemingly simple operations are implemented with horrible inefficiency (the current tech-lingo would be "non-performant," which makes me puke a little into my mouth when I write it).
Today I found two big performance problems:
Take the following:
You would expect that code to do a single insert for all the join entries. In reality you get something like this, repeated for each bar:

  bars_foos Columns (1.1ms)  SHOW FIELDS FROM `bars_foos`
  SQL (0.6ms)  INSERT INTO `bars_foos` (`bar_id`, `foo_id`) VALUES (100, 117200)
I hand coded sql to do the same thing, and found a 7x speedup:
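A sketch of the hand-coded version (table and column names are taken from the log above; in the app the resulting string would be passed to ActiveRecord::Base.connection.execute):

```ruby
# Build a single multi-row INSERT for all the join entries at once,
# instead of one INSERT per bar.
def habtm_insert_sql(foo_id, bar_ids)
  values = bar_ids.map { |bar_id| "(#{bar_id}, #{foo_id})" }.join(", ")
  "INSERT INTO bars_foos (bar_id, foo_id) VALUES #{values}"
end
```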
This one should be a layup:
@foo.bars = []
but I was shocked to see this in my sql log:
I hand coded the trivial sql and got a 10x speedup:
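Sketched the same way (table name from the log above; the string goes to connection.execute in the app):

```ruby
# Clear all join rows for a foo in one statement, replacing the
# row-by-row deletes that @foo.bars = [] generates.
def habtm_clear_sql(foo_id)
  "DELETE FROM bars_foos WHERE foo_id = #{foo_id}"
end
```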
These aren't difficult or complex operations, and they're extremely common.
I looked into ar-extensions, but there doesn't seem to be any support for HABTM relation creation.
I guess for now these queries should be hand-coded once tables get to a certain size, but I'm still in minor disbelief. Anyone with a better "Rails Way" to do this, please clue me in.
I just had an unbelievable debugging experience with our facebook connect site:
One of our pages includes three main elements:
I had the page working fine in all sane browsers, and went to verify that everything was kosher on internet explorer. You're really not going to believe this.
I tested the page in every different way, and those steps had to be performed in exactly that order to reproduce the effect, but it consistently disabled all text areas on the page. Debugging javascript in internet explorer isn't the most enjoyable experience - the IE javascript debugger is a major downgrade from firebug. I tried using firebug lite, but the above sequence actually DISABLED THE FIREBUG CONSOLE!?!?
The Solution:

<div style="overflow: hidden; width: 1px; height: 1px;">
  <form>
    <textarea id="ajax_fix"></textarea>
  </form>
</div>
Then, I appended the following callback to FB.Connect.streamPublish (which gets called after the dialog is closed):
if ($("ajax_fix")) { $("ajax_fix").focus(); }

. . . and problem solved. I still don't believe it. It's clearly a ridiculous hack. I just don't know whether to feel dirty for putting it in or triumphant for getting it to work.
I use database records for file names when generating reports, but some of those have invalid characters. The attachment_fu plugin has the sanitize_filename method, but I wasn't so happy with the output, e.g.,
>> sanitize_filename("foo & bar")
=> "foo___bar"
>> sanitize_filename("14th @bar")
=> "14th__bar"
Here's what my version produces instead:
>> sanitize_for_filename(" foo & bar ")
=> "foo_and_bar"
>> sanitize_for_filename("14th @bar")
=> "14th_at_bar"
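A sketch that matches the outputs above (the original implementation isn't shown, so the exact substitution rules are my guess):

```ruby
# Swap & and @ for their words, then squash everything that isn't a
# word character or a dot into single underscores.
def sanitize_for_filename(name)
  name.strip
      .gsub("&", " and ")
      .gsub("@", " at ")
      .gsub(/[^\w.]+/, "_")
end
```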
Michael Wellman (my Ph.D. advisor) started a blog last week, and he wrote a short piece about economic efficiencies arising from high-frequency trading, which I found via Dan Reeves' post on messymatters.com. Mike says:
Some have suggested that rapidity of response capability per se could open up manipulation possibilities or is otherwise destabilizing. We have also seen questions about whether diverting trade surplus toward whomever builds the biggest fastest network is an efficient use of resources, and the implications for perceptions of fairness across the trading public.
He boils the problem down to the support of continuous trading:
The root of the problem, in my view, is the system's support for continuous-time trading. In a continuous market, trades are executed instantaneously whenever there are matching orders, and introduction of an unmatched order likewise causes an instantaneous update to the information available to traders.
The solution he proposes is the use of periodic clearing in equity markets, i.e., call markets:
An alternative would be a discrete-time market mechanism (technically, a call market), where orders are received continuously but clear only at periodic intervals. The interval could be quite short--say, one second--or aggregate over longer times--five or ten seconds, or a minute. Orders accumulate over the interval, with no information about the order book available to any trading party. At the end of the period, the market clears at a uniform price, traders are notified, and the clearing price becomes public knowledge. Unmatched orders may expire or be retained at the discretion of the submitting traders.
Since my dissertation topic was "Multiattribute Call Markets" (pdf), I feel a little satisfaction in possibly having steered his focus back to this alternative market design (he had done a good amount of research on call markets prior to my becoming his student). Having spent my grad school years studying call markets, I'd like to add to the discussion with a short example of how inefficiencies may arise from continuous-time trading.
Assume we have three traders, Joe Sixpack, Jenny Soccermom, and J.P. Goldman. Joe has 1 share of thinly-traded InterWeb stock that he's looking to sell, and Jenny is looking to buy one share of InterWeb. Joe is willing to sell for anything over $1, while Jenny is willing to pay as much as $20. Joe and Jenny aren't savvy traders, so they just submit their reserve prices as bids. J.P. will ultimately make an arbitrage trade, submitting a buy offer for $2 and a sell offer for $19.
With continuous-time trading, J.P. monitors the stock quotes on a microsecond time scale, sees Joe's sell offer, and snatches it up at $2 with confidence that he can re-sell it in the near future. When Jenny comes along, J.P. flips the stock to her at a price of $19. In the end, Joe and Jenny have both successfully executed trades, but had to pay close to their reserve prices to execute. J.P. has effectively sucked $17 out of the system, in return for providing "trade liquidity".
With a call market, buy and sell offers are collected over a short duration of time (e.g., one second). In our example, all four offers will be considered together when the market is cleared (buy from Jenny, sell from Joe, and buy/sell from J.P.). When trades are ultimately executed, Jenny trades directly with Joe, since their offers represent the strongest buy and sell offers, respectively. J.P. doesn't trade, since his offers don't add value to the ultimate allocation.
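To make the clearing step concrete, here's a toy uniform-price clearing of the example's four orders in plain Ruby (my own illustration, not Mike's formulation; the midpoint price rule is one arbitrary choice):

```ruby
# Sort bids high-to-low and asks low-to-high, then match while the
# marginal bid still meets the marginal ask. Prices from the example:
# Jenny bids 20, J.P. bids 2; Joe asks 1, J.P. asks 19.
bids = [20, 2].sort.reverse
asks = [1, 19].sort

matched = 0
matched += 1 while matched < bids.size && matched < asks.size &&
                   bids[matched] >= asks[matched]

# One trade clears: Jenny's 20 against Joe's 1. J.P.'s bid of 2 is
# below the remaining ask of 19, so his orders don't execute. Set the
# uniform price at the midpoint of the last matched bid and ask.
price = (bids[matched - 1] + asks[matched - 1]) / 2.0
```

Jenny and Joe split nearly the whole $19 of surplus between them, instead of handing $17 of it to the intermediary.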
In the latter scenario, the call market has effectively replaced the "liquidity value" provided by J.P., providing inherent liquidity through a time-based aggregation of trades. In a later post, Mike related this liquidity provision to a "short-lived dark pool", an idea put forth by Felix Salmon.
I would expect a lot of pushback from those trading shops that have already invested heavily in continuous trading platforms, so don't expect to see call markets deployed anytime soon without a strong grass-roots effort from the academic community.
@codinghorror posted "trying to formulate an algorithm for collapsing a long URL legibly." This sounded like a fun distraction from a semi-painful Monday, and the first thing that occurred to me was using letter frequencies to prune urls down. I couldn't resist coding it up to see how well it would work.
First, here's the code:
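A reconstruction of the idea (not @codinghorror's code or my original; the letter ordering and length cutoff are assumptions): delete the most frequent English letters from the URL's path until the whole thing fits, on the theory that common letters carry the least information.

```ruby
require 'uri'

# English letters, roughly most-frequent first.
FREQUENT_LETTERS = %w[e t a o i n s h r d l u]

def collapse_url(url, max_len = 40)
  uri  = URI.parse(url)
  base = "#{uri.scheme}://#{uri.host}"  # leave the domain readable
  path = uri.path                        # (query/fragment dropped)
  FREQUENT_LETTERS.each do |letter|
    break if (base + path).length <= max_len
    path = path.delete(letter)
  end
  base + path
end
```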
So, for example:
not bad for a first cut
If you're sending non-transactional email, you should be handling unsubscribe requests. So let's assume you have a database table of unsubscribed emails, and want to check this table before sending anyone an email. In RoR, the quick and dirty solution is to check for unsubscribe status before calling any deliver_action method in your mailer classes:
A better solution is to modify the ActionMailer::Base class with a new recipients method, so that your mailer classes can just call your modified method. Here's what I did:
So now in my mailer class, I just call:
instead of
and anytime I attempt a delivery, unsubscribed addresses will be automatically removed from the recipient list.
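Stripped of the Rails plumbing, the filtering step amounts to something like this (a plain-Ruby sketch; names are mine -- in the real version the reject test queries the unsubscribes table and the filtered list is handed to ActionMailer's own recipients method):

```ruby
# Stand-in for the database table of opted-out email addresses.
UNSUBSCRIBED = ["optout@example.com"]

# Remove unsubscribed addresses before delivery; accepts a single
# address or an array, like ActionMailer's recipients.
def deliverable_recipients(addresses)
  Array(addresses).reject { |email| UNSUBSCRIBED.include?(email.downcase) }
end
```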
In building a sitemap for my rails app, I went off of this blog post:
http://tonycode.com/wiki/index.php?title=Ruby_on_Rails_Sitemap_Generator
but I have public-facing pages for which no ActiveRecord model exists. I added the following to my sitemap controller to add entries for a given controller:
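Roughly this shape (all names here are my stand-ins; the post's actual code isn't shown): build sitemap entries for a controller's static actions and hand them to the view.

```ruby
# Build sitemap entries for actions that have no ActiveRecord model
# behind them. Host and priority are placeholder values.
def entries_for_controller(controller_name, actions)
  actions.map do |action|
    { :loc      => "http://example.com/#{controller_name}/#{action}",
      :priority => 0.5 }
  end
end
```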
Then, for example, in my controller I just call:
and finally, in my view:
oh, and in my sitemap_helper:
I use mongrel_cluster to manage my rails app settings, and handle the actual start/stop tasks via monit. It would be asking for trouble to hard-code the rails environment in my monitrc file, so I modified listen.rb in the workling plugin to take a config file argument.
With that done, I just needed to modify my monitrc file to start my workling process with the path to my mongrel_cluster config file.
It seems like there should be a better way to handle this, but the documentation on workling is almost non-existent.
workling.rb edits:
and then my monit start command for workling:
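Something along these lines (paths, app name, and the flag syntax are placeholders; the config argument is whatever the listen.rb edit above accepts):

```
check process workling with pidfile /var/www/myapp/shared/pids/workling.pid
  start program = "/var/www/myapp/current/script/workling_client start -- --config=/var/www/myapp/current/config/mongrel_cluster.yml"
  stop program  = "/var/www/myapp/current/script/workling_client stop"
```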