My Brief Stint as a Groupon User

learning again that you get what you pay for

Groupon

For those who are unfamiliar, Groupon is a collective buying site, similar to Woot and BuyWithMe.  Borrowing from the Wikipedia entry:

The Groupon works as an assurance contract using ThePoint's platform: if a certain number of people sign up for the offer, then the deal becomes available to all; if the predetermined minimum is not met, no one gets the deal that day. This reduces risk for retailers, who can treat the coupons as quantity discounts as well as sales tools. Groupon makes money by getting a cut of the deal from the retailers.

Sounds great, right?  Retailers build sales by offering a one-time discount, contingent on quantity, and consumers get a good deal on some product or service, with Groupon taking a little cut.

Well, as with anything of even trivial complexity, the devil is in the details. 

My Experience

I was overdue for a dental exam, my insurance doesn't exactly have stellar dental coverage, and I happened to be checking out Groupon when the deal of the day was a $60 exam/cleaning/x-rays package at a San Francisco dental office.

Great! Right?

I bought into the deal, and actually waited about a month before booking my appointment.  Well, before trying to book my appointment.  When I called the office, I was greeted not by the receptionist, but by this pre-recorded message:

The following information applies only to our Groupon.com patients that have not yet scheduled with us but who have already called or emailed our office.  We are currently responding to Groupon.com user calls and emails in the order in which they were received.  If you have already called or emailed to schedule your Groupon.com appointment, we ask that you be patient and please not call or email us again.

That's when I decided to run the numbers, something I probably should have done beforehand:

  • 585 Groupons issued 
  • 10 new patients/week (estimate)
  • 58 weeks before I get an appointment

Now I feel like an idiot, and simultaneously realize the brilliance of the Groupon model - they can earn interest on the float of unredeemed Groupons, which is huge for services that can't be consumed instantaneously. [edit - I'm told that Groupon pays out vendors immediately on the deal, which in theory should mean the vendors can sit on the cash until redemption]

I have to mention that after a simple email to Groupon my money was quickly refunded, so I have no beef with Groupon.  The experience made me wonder, though: how many vendors would be prepared for the massive Groupon demand spike?

A Quick Look at Some Recent Deals

Here are some recent Groupon deals, with the quantities issued:

  • Indian Restaurant Discount - 2549
  • Admission to a Party - 1650
  • Food Festival Entrance - 216
  • Spa Session - 1734
  • Asian Restaurant Discount - 2111
  • Salon Haircut - 833
  • Massage - 998

Here are some of the discussions centered on these deals:

I’ve been trying to book an appointment online and it just says “No available times were found” for every date in January and February… Is that just because they are overwhelmed by the number of Groupons purchased?

They made me feel like it was my fault that they had too many people buy this (1700 people) so they just can’t handle the capacity, so they haven’t been able to get back to everyone.

I just called and found their phone number was disconnected, too.

Not all the comments were negative -- for goods/services that are typically provided in high volume, people seemed extremely happy with their Groupon experience.  For the time-intensive services, the user response was a bit more spotty. 

Concluding Notes

Again, my experience wasn't great, but getting a refund was no problem whatsoever.  It may take a little time for consumers and vendors to work out the best way to manage their respective Groupon experiences, but there seems to be a real benefit in getting consumers excited about buying together on the day's deal.  So I expect Groupon to be around for a while, continuing to incite mania-induced group purchasing of local goods and services. 

As a side note, am I the only person who wonders why a "Collective Action Engine" is necessary to drive the Groupon site?  It sounds a lot like business-speak for one line of code saying "don't issue the deal until N customers have bought."  Maybe I'm missing something.
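Something like the hypothetical one-liner below (names invented):

  # the entire "Collective Action Engine", give or take
  deal.issue! if deal.buyers.count >= deal.minimum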

Rails: Growing Pains With ActiveRecord

Rails, does it scale? 

That horse has been beaten to death, and there is ample evidence that it does scale with a little elbow grease where necessary.  

It's also commonly known that many of the Rails convenience methods aren't terribly good at generating complex queries.

What I've just realized is that some of the seemingly simple operations are implemented with horrible efficiency (the current tech lingo would be "non-performant", which makes me puke a little into my mouth when I write it). 

Today I found two big performance problems:

HABTM relation creation

Take the following:
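Assume a Foo model with has_and_belongs_to_many :bars (a sketch of the pattern; the join table in the log below implies this setup):

  @foo = Foo.find(117200)
  @foo.bars = Bar.find(:all, :limit => 500)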

You would expect that code to do a single insert for all the join entries.  In reality you get something like this, repeated for each bar:

bars_foos Columns (1.1ms)  SHOW FIELDS FROM bars_foos
SQL (0.6ms)  INSERT INTO bars_foos (bar_id, foo_id) VALUES (100, 117200)

I hand coded sql to do the same thing, and found a 7x speedup:
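Something along these lines (a minimal sketch, not the exact snippet):

  # build one multi-row INSERT instead of N single-row inserts
  values = bars.map { |bar| "(#{bar.id}, #{@foo.id})" }.join(", ")
  ActiveRecord::Base.connection.execute(
    "INSERT INTO bars_foos (bar_id, foo_id) VALUES #{values}")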

 

HABTM relation deletion

This one should be a layup:

@foo.bars = []

but I was shocked to see this in my sql log:

 

I hand coded the trivial sql and got a 10x speedup:
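Again, roughly (a minimal sketch):

  # one DELETE clears every join row for this foo
  ActiveRecord::Base.connection.execute(
    "DELETE FROM bars_foos WHERE foo_id = #{@foo.id}")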

 

In Conclusion

These aren't difficult or complex operations, and they're extremely common.

I looked into ar-extensions, but there doesn't seem to be any support for HABTM relation creation.

I guess for now these queries should be hand-coded once tables get to a certain size, but I'm still in minor disbelief.  Anyone with a better "Rails Way" to do this, please clue me in.

recipe for disaster: Facebook Connect + AJAX + Internet Explorer

I just had an unbelievable debugging experience with our Facebook Connect site.

One of our pages includes three main elements:
  • a link to post to your Facebook stream
  • a link to update part of the page with AJAX 
  • an input form


I had the page working fine in all sane browsers, and went to verify that everything was kosher in Internet Explorer. You're really not going to believe this.

The Problem:

The following sequence consistently disabled all the textarea fields of my form:
  1. open the Facebook "post to your stream" dialog box
  2. post the message (or close the box, didn't matter) 
  3. click the AJAX link

I tested the page every way I could think of; the steps had to be performed in exactly that order to reproduce the effect, but when they were, all the text areas on the page were consistently disabled.

Debugging JavaScript in Internet Explorer isn't the most enjoyable experience - the IE JavaScript debugger is a major downgrade from Firebug. I tried using Firebug Lite, but the above sequence actually DISABLED THE FIREBUG CONSOLE!?!?

The Solution:

The hack I worked out to fix this mess is more unbelievable than the problem.

I noticed that if I clicked on one of the input fields after opening the Facebook dialog box, the form fields wouldn't be disabled after subsequently making the AJAX call. It really didn't matter which form field I clicked, as long as it happened between closing the Facebook dialog and making the AJAX call. So I stuck a hidden form at the top of the page:

  <div style="overflow: hidden; width: 1px; height: 1px;">
      <form>
          <textarea id="ajax_fix"></textarea>
      </form>
  </div>


Then, I appended the following callback to FB.Connect.streamPublish (which gets called after the dialog is closed):

if ($("ajax_fix")) {$("ajax_fix").focus();};

. . . and problem solved. I still don't believe it.

It's clearly a ridiculous hack. I just don't know whether to feel dirty for putting it in or triumphant for getting it to work.

Work Party

A professor of mine once told a story about another hardware engineering professor, showing up for work Friday afternoon with a cooler in tow and a big smile on his face, saying "My wife and kids are out of town, I can work *all* weekend."

Coming from the perspective of an unattached grad student who could work all hours of the day and night, that statement seemed a little on the absurd side. Now that I'm living with my girlfriend of four years, who prefers that I join her for dinner and keep fairly sane sleeping hours, I can appreciate the sentiment.

My girlfriend just flew out of town through Friday, and I have a case of Diet Coke and a big smile on my face.

Sanitizing Names for Files

I use database records for file names when generating reports, but some of those records contain invalid filename characters. The attachment_fu plugin has a sanitize_filename method, but I wasn't so happy with the output, e.g.,

 
>> sanitize_filename("foo & bar")
=> "foo___bar"
>> sanitize_filename("14th @bar")
=> "14th__bar"

 
So I wrote a prettier sanitization helper:
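A minimal sketch of it, assuming only alphanumerics should survive:

  # swap common symbols for words, then collapse anything else
  # into single underscores
  def sanitize_for_filename(str)
    str.strip.
        gsub(/&/, ' and ').
        gsub(/@/, ' at ').
        gsub(/[^a-zA-Z0-9]+/, '_').
        sub(/\A_/, '').sub(/_\z/, '')
  end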

 
which gives me:
 
>> sanitize_for_filename(" foo & bar ")
=> "foo_and_bar"
>> sanitize_for_filename("14th @bar")
=> "14th_at_bar"

 
Much better.

Call Markets, Efficiency, and High-Frequency Trading

Michael Wellman (my Ph.D. advisor) started a blog last week, and he wrote a short piece about economic efficiencies arising from high-frequency trading; I was pointed to it by Dan Reeves' post on messymatters.com. Mike says:

Some have suggested that rapidity of response capability per se could open up manipulation possibilities or is otherwise destabilizing. We have also seen questions about whether diverting trade surplus toward whomever builds the biggest fastest network is an efficient use of resources, and the implications for perceptions of fairness across the trading public.
 
He boils the problem down to the support of continuous trading:
 
The root of the problem, in my view, is the system's support for continuous-time trading. In a continuous market, trades are executed instantaneously whenever there are matching orders, and introduction of an unmatched order likewise causes an instantaneous update to the information available to traders.
 
The solution he proposes is the use of periodic clearing in equity markets, i.e., call markets:
 
An alternative would be a discrete-time market mechanism (technically, a call market), where orders are received continuously but clear only at periodic intervals. The interval could be quite short--say, one second--or aggregate over longer times--five or ten seconds, or a minute. Orders accumulate over the interval, with no information about the order book available to any trading party. At the end of the period, the market clears at a uniform price, traders are notified, and the clearing price becomes public knowledge. Unmatched orders may expire or be retained at the discretion of the submitting traders.
 
Since my dissertation topic was "Multiattribute Call Markets" (pdf), I feel a little satisfaction in possibly steering his focus back to this alternative market design (he had done a good amount of research on call markets prior to my becoming his student). Having spent my grad school years studying call markets, I'd like to add to the discussion with a short example of how inefficiencies may arise from continuous-time trading.
 
Assume we have three traders, Joe Sixpack, Jenny Soccermom, and J.P. Goldman. Joe has 1 share of thinly-traded InterWeb stock that he's looking to sell, and Jenny is looking to buy one share of InterWeb. Joe is willing to sell for anything over $1, while Jenny is willing to pay as much as $20. Joe and Jenny aren't savvy traders, so they just submit their reserve prices as bids. J.P. will ultimately make an arbitrage trade, submitting a buy offer for $2 and a sell offer for $19.
 
With continuous-time trading, J.P. monitors the stock quotes on a microsecond time scale, sees Joe's sell offer, and snatches it up at $2 with confidence that he can re-sell it in the near future. When Jenny comes along, J.P. flips the stock to her at a price of $19. In the end, Joe and Jenny have both successfully executed trades, but each had to settle close to their reserve prices. J.P. has effectively sucked $17 out of the system, in return for providing "trade liquidity".
 
With a call market, buy and sell offers are collected over a short duration of time (e.g., 1 second). In our example, all four offers will be considered together when the market is cleared (buy from Jenny, sell from Joe, and buy/sell from J.P.). When trades are ultimately executed, Jenny trades directly with Joe, since their offers represent the strongest buy and sell offers, respectively. J.P. doesn't trade, since his offers don't add value to the ultimate allocation.
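To make the clearing step concrete, here's a toy uniform-price rule in Ruby (my own sketch, not Mike's formulation; pricing at the midpoint of the marginal matched pair is just one of several reasonable choices):

  def clear_call_market(bids, asks)
    bids = bids.sort.reverse   # highest willingness-to-pay first
    asks = asks.sort           # lowest reserve price first
    matched = 0
    matched += 1 while matched < [bids.size, asks.size].min &&
                       bids[matched] >= asks[matched]
    return nil if matched.zero?
    { :trades => matched,
      :price  => (bids[matched - 1] + asks[matched - 1]) / 2.0 }
  end

  # Joe asks $1, Jenny bids $20, J.P. bids $2 and asks $19:
  clear_call_market([20, 2], [1, 19])
  # => {:trades=>1, :price=>10.5} - Jenny trades with Joe; J.P. sits out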
 
In the latter scenario, the call market has effectively replaced the "liquidity value" provided by J.P., providing inherent liquidity through a time-based aggregation of trades. In a later post, Mike related this liquidity provision to a "short-lived dark pool", an idea put forth by Felix Salmon.
 
I would expect a lot of pushback from those trading shops that have already invested heavily in continuous trading platforms, so don't expect to see call markets deployed anytime soon without a strong grass-roots effort from the academic community.

shortening urls

@codinghorror posted "trying to formulate an algorithm for collapsing a long URL legibly." This sounded like a fun distraction from a semi-painful Monday, and the first thing that occurred to me was using letter frequencies to prune urls down. I couldn't resist coding it up to see how well it would work.
 
First, here's the code:
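In sketch form, it strips letters from each path segment in rough English-frequency order, most common first, never touching a segment's first or last character:

  # letters in rough English frequency order, most common first
  FREQ_ORDER = "etaoinshrdlcumwfgypbvkjxqz"

  def shorten_url(url, host, limit)
    path = url.sub(host, "")
    segments = path.split("/").map do |seg|
      FREQ_ORDER.each_char do |letter|
        break if seg.length <= limit
        # drop this letter everywhere except the first and last positions
        seg = seg[0, 1] + seg[1..-2].delete(letter) + seg[-1, 1]
      end
      seg
    end
    host + segments.join("/")
  end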

 
So, for example:

 
>> url = "http://my.com/makethis/longish/url/shorter"
>> host = "http://my.com/"
>> shorten_url(url,host,7)
=> "http://my.com/makthis/longish/url/shorter"
>> shorten_url(url,host,6)
=> "http://my.com/makhis/lngish/url/shortr"
>> shorten_url(url,host,5)
=> "http://my.com/mkhis/lngsh/url/shorr"
>> shorten_url(url,host,4)
=> "http://my.com/mkhs/lgsh/url/shrr"
>> shorten_url(url,host,3)
=> "http://my.com/mks/lgh/url/srr"
 


Not bad for a first cut.

Handling Unsubscribe Requests in Ruby On Rails With ActionMailer

If you're sending non-transactional email, you should be handling unsubscribe requests. So let's assume you have a database table of unsubscribed emails, and want to check this table before sending anyone an email. In RoR, the quick and dirty solution is to check for unsubscribe status before calling any deliver_action method in your mailer classes:


MyMailer.deliver_action(address) unless unsubscribed(address)

A better solution is to modify the ActionMailer::Base class with a new recipients method, so that your mailer classes can just call your modified method. Here's what I did:
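In sketch form, assuming an Unsubscribe model backed by the table of unsubscribed emails:

  module ActionMailer
    class Base
      # drop unsubscribed addresses before handing off to recipients
      def valid_recipients(addresses)
        addresses = [addresses].flatten.reject do |addr|
          Unsubscribe.exists?(:email => addr)
        end
        recipients(addresses)
      end
    end
  end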

So now in my mailer class, I just call:


valid_recipients addr

instead of


recipients addr

and anytime I attempt a delivery, unsubscribed addresses will be automatically removed from the recipient list.

rails sitemap entries for pages without models

In building a sitemap for my Rails app, I worked from this blog post:
 
http://tonycode.com/wiki/index.php?title=Ruby_on_Rails_Sitemap_Generator
 
but I have public-facing pages for which no ActiveRecord model exists.
 
I added the following to my sitemap controller to generate entries for a given controller:
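A minimal sketch of that helper, assuming each public action maps to a default "/controller/action" route:

  def get_public_action_urls(controller_class)
    name = controller_class.controller_name
    controller_class.public_instance_methods(false).map do |action|
      url_for(:controller => name, :action => action.to_s,
              :only_path => false)
    end
  end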

Then, for example, in my controller I just call:

@info_pages = get_public_action_urls(InformationPagesController)

 

and finally, in my view:

@info_pages.each do |entry|
  xml.url do
    xml.loc entry
    xml.lastmod w3c_date(Time.now)
    xml.changefreq "weekly"
    xml.priority 0.9
  end
end

oh, and in my sitemap_helper:

def w3c_date(date)
  date.utc.strftime("%Y-%m-%dT%H:%M:%S+00:00")
end

 

rails environments with workling

I use mongrel_cluster to manage my Rails app settings, and handle the actual start/stop tasks via monit. It would be asking for trouble to hard-code the Rails environment in my monitrc file, so I modified listen.rb in the workling plugin to take a config file argument.
 
With that done, I just needed to modify my monitrc file to start my workling process with the path to my mongrel_cluster config file.
 
It seems like there should be a better way to handle this, but the documentation on workling is almost non-existent.
 
workling.rb edits:
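Roughly the shape of the change (a sketch; the option-parsing details are assumptions), reading the Rails environment out of a mongrel_cluster-style YAML config:

  require 'optparse'
  require 'yaml'

  config_file = nil
  OptionParser.new do |opts|
    opts.on("-c", "--config FILE", "path to mongrel_cluster config") do |f|
      config_file = f
    end
  end.parse!(ARGV)

  # mongrel_cluster configs carry an "environment" key; use it for RAILS_ENV
  if config_file
    settings = YAML.load_file(config_file)
    ENV["RAILS_ENV"] = settings["environment"] if settings["environment"]
  end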



and then my monit start command for workling:

 
start program = "/path/to/workling_client start -- -c /path/to/mongrel_cluster.yml"