tag:kevinlochner.com,2013:/posts Tech, Ramblings, and Intellectual Detritus 2016-01-10T06:13:48Z tag:kevinlochner.com,2013:Post/426013 2012-02-22T18:33:00Z 2013-10-08T16:53:01Z Open Source Payoffs

My edit was really minor, but seeing this still makes me happy:


]]>
tag:kevinlochner.com,2013:Post/426018 2012-02-04T01:28:00Z 2013-10-08T16:53:01Z Joyful Coding

From the redis manifesto:

"We optimize for joy. We believe writing code is a lot of hard work, and the only way it can be worth is by enjoying it. When there is no longer joy in writing code, the best thing to do is stop. To prevent this, we'll avoid taking paths that will make Redis less of a joy to develop."

Amen.

]]>
tag:kevinlochner.com,2013:Post/426022 2011-08-08T08:52:00Z 2013-10-08T16:53:01Z A developer is only as good as his commits

As a developer I've found new appreciation for the adage "a man* is only as good as his word."  In business and in life, you can't depend on people who don't follow through on promises, especially when stakes are high.

Amongst programmers, our code is our word.   If collaborators can't trust that our code is of high quality, we are effectively a time and energy sink for those around us, forcing on others the task of testing and validating our work.  The analogue in the non-tech world would be someone whose promises require second guessing, backup plans, and a non-trivial dose of anxiety.

A developer that pushes high-quality commits, like someone who follows through on his or her word, is a refreshing person to interact with, because the cost of interaction is negligible, making the net value of all contributions significantly higher.

Push good code.

*note:  excuse the gender bias in the quote - fortunately women are better with promises and code commits

]]>
tag:kevinlochner.com,2013:Post/426027 2011-07-16T06:22:00Z 2013-10-08T16:53:01Z Physical Technical Problems

A case study of the physical world bleeding into the technical . . . and a case study in idiotic StackExchange questions.

 

]]>
tag:kevinlochner.com,2013:Post/426028 2011-05-14T01:13:00Z 2013-10-08T16:53:01Z Canned interview problems are fundamentally flawed

I always hated "programming problems" when I was interviewing for jobs. I now also hate them as someone who is trying to hire programmers.

As a candidate, I felt like these kinds of problems had little relationship to real work. How often do I really need to find the number of ways that one could walk down a set of stairs (a classic recursion interview question) or what floor an egg breaks on? (nice one, google)  And what am I supposed to do if I already know the answer? (which was often)  There are multiple levels of weirdness at play -- not exactly what I call enjoyable.

As an employer, I hate them for an entirely different reason. My objective during an interview is to find out two things:

a) Is this person smart?
b) Can this person work well with our team?

Canned interview problems may give you the answer to (a), but they do so at the cost of any real chance of learning (b).

By giving them a contrived problem that I know the answer to (and they know I know the answer), I've immediately forced an artificial interrogation-style power dynamic between us and likely amplified their anxiety by a couple orders of magnitude. My goal is to get as far away from that dynamic as possible.

To the best of my ability, I want to simulate an hour or two of work and see whether we can collaborate effectively.  Here's the rough approach I've worked out so far; comments are appreciated, as I'm pretty new at this myself:

  • Have them share a personal project with you, it doesn't have to be of epic proportion, just something they're working on.
  • Talk through some of their code during the interview if possible. This can give you both an idea of what a code review will look like.
  • Have them explain any problems they are having, and see if you can help. This informs whether you will be able to collaborate on projects they own. 
  • Walk through some of your own code and explain some of the problems you've been having. See if they have any insight and whether you can brainstorm together.

In both cases of reviewing your respective work, you are admitting that you don't know the answer to the problem. In the case of your own problem, you're also admitting that it's something you could use help with.  You have now mitigated the power dynamic somewhat, which means the candidate can feel comfortable in throwing out ideas. You've moved from waiting for the "right" solution to asking for ideas on a real problem.

You come away with a good indication of their ability, as well as a decent indication for how well you would work together. And a possible side benefit may be new insight into the problems that you're struggling with.  It takes a little extra courage to fly without a net, but that's the whole point - leveling the field so you can actually talk.

If this sounds sane to you and you're looking for dev work in San Francisco, please get in touch.

]]>
tag:kevinlochner.com,2013:Post/426035 2010-12-13T22:52:00Z 2013-10-08T16:53:01Z Safe Access for Temporary ActiveRecord Fields in Rails

I run into a fairly common problem in rails views when I'm pulling data from multiple tables.  To optimize db access, I use :select to pre-load the data from the joined tables.  For example, let's say I want to list all users with their respective cities:

Then I don't have to load the Address model to display a user's city:

The problem is that my user partial is now coupled with my named scope:  if I render the partial without using the scope, I'll throw an exception.  To avoid the problem, I add an accessor method to the User model that will pull the pre-loaded data if available, and revert to using the Address model:
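Something along these lines (a sketch; I'm assuming the scope selects `addresses.city AS city`, and the module wrapper is just for presentation — in practice the method sits right in the User model):

```ruby
# Mixed into the User model: prefer the pre-loaded column, fall back to
# the association when the record wasn't loaded through the scope.
module CityAccessor
  def city
    # self[:city] is present only when the :select scope was used;
    # otherwise hit the Address association, at the cost of a query.
    self[:city] || address.city
  end
end
```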

The partial will be inefficient, but I'd prefer to find the error when profiling view times rather than through exceptions.

]]>
tag:kevinlochner.com,2013:Post/426036 2010-11-04T17:35:04Z 2013-10-08T16:53:01Z How Twitter handles disabled Javascript The home page works, but once you log in . . .

. . . so much for Javascript fallback.

]]>
tag:kevinlochner.com,2013:Post/426040 2010-10-01T19:23:00Z 2013-10-08T16:53:02Z Bash-style history for irb

This functionality is included in some ruby gems (Utility Belt seems to be the most popular), but I'm more of a "roll-your-own" kinda guy.  Not to mention that the gem is a little overkill for printing out command history. 

Here's what I came up with:
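Roughly this (a sketch of the approach; details may differ from my original snippet): a `history` method in `~/.irbrc` that reads Readline's buffer.

```ruby
require 'readline'

# Print the last `count` lines of irb's readline history, numbered
# bash-style. Drop this in ~/.irbrc.
def history(count = Readline::HISTORY.size)
  start = [Readline::HISTORY.size - count, 0].max
  Readline::HISTORY.to_a[start..-1].each_with_index do |line, i|
    printf("%4d  %s\n", start + i + 1, line)
  end
  nil
end
```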

]]>
tag:kevinlochner.com,2013:Post/426041 2010-09-23T23:02:00Z 2013-10-08T16:53:02Z Prompting for Facebook Permissions

When authenticating users via facebook, you have a laundry list of possible permissions to ask for.  If you ask for no permissions, the user gets the following login window:


Any requested permissions get displayed in the login window, but some of them get grouped into the same permission category. For example, many of the user_foo privileges get grouped into "Access my profile information", and similarly, friends_foo permissions get grouped into "Access my friends' information".

Below is a screenshot of what it looks like when asking for the full suite of permissions (I used the helpful Rell application to test). From this you can get an idea of what it looks like to the user based on which permissions you're asking for.

Asking for all permissions at once would be a little intimidating for users . . .

]]>
tag:kevinlochner.com,2013:Post/426042 2010-09-23T16:06:00Z 2013-10-08T16:53:02Z Orkut update - officially a "new product"

Confirming the email alert I recently received, Google appears to be strategerizing in social with a renewed push for Orkut: it's listed on page 1 of their new products page (despite launching in 2004).

]]>
tag:kevinlochner.com,2013:Post/426046 2010-09-23T02:21:00Z 2013-10-08T16:53:02Z This is what heaven looks like

My browsing speed was noticeably & absurdly fast, so I ran a quick check:

]]>
tag:kevinlochner.com,2013:Post/426048 2010-09-19T02:32:00Z 2013-10-08T16:53:02Z Sign of a well played scrabble game ]]> tag:kevinlochner.com,2013:Post/426016 2010-08-16T15:33:00Z 2015-06-02T16:41:18Z Is Google Bringing Orkut Back?

My old friend Orkut sent me an email yesterday, the first in a long time . . .

]]>
tag:kevinlochner.com,2013:Post/426019 2010-08-06T23:01:00Z 2016-01-10T06:13:48Z Use rails date_select without an activerecord model

I actually googled this and found a workable but ugly solution:

 
 ## view code
<%= date_select('range', 'start_date', :order => [:month, :day, :year])%>


## controller code
@start_date = Date.civil(params[:range][:"start_date(1i)"].to_i,
                         params[:range][:"start_date(2i)"].to_i,
                         params[:range][:"start_date(3i)"].to_i)

I needed to include the hour and minute as well, and didn't want to cram more arguments into the Date.civil call (actually Time.zone.local), so I cleaned up the code a bit.

Hoping to leave the internet a little better for the next guy, I thought I'd post the code.

 
 ## view code
<%= datetime_select('range_start', 'date', :order => [:month, :day, :year, :hour, :minute]) %>


# controller code -- the params keys sort as date(1i)..date(5i), i.e.
# year, month, day, hour, minute: exactly Time.zone.local's argument order
@start_date = Time.zone.local(*params[:range_start].sort.map(&:last).map(&:to_i))
]]>
tag:kevinlochner.com,2013:Post/426023 2010-07-02T01:32:30Z 2013-10-08T16:53:01Z Feynman on Knowing vs. Knowing the Name ]]> tag:kevinlochner.com,2013:Post/426024 2010-05-30T16:44:00Z 2013-10-08T16:53:01Z How I Roll

credit:  http://bigeyedeer.wordpress.com/2008/07/15/this-cartoon-wrote-a-sweary-word-o...

]]>
tag:kevinlochner.com,2013:Post/426025 2010-05-02T21:53:00Z 2013-10-08T16:53:01Z Crappy TV Rant: Happy Town

I thought Happy Town had a decent chance of being entertaining

  "Executive producer Josh Appelbaum and others on the show are huge fans of the Twin Peaks"

  "Executive producer Scott Rosenberg says he's more a Stephen King fan."

  "So if you think it's too much like Twin Peaks, blame them. If you think it's not enough like Twin Peaks, blame me."

I love David Lynch films - Blue Velvet, Lost Highway, Mulholland Drive, and of course, Twin Peaks, are all surrealist classics.  He is a true artist in the film world: you come away from his movies not with a literal understanding of the story, but with a visceral sense of being disturbed and intrigued.  His short films are not to be missed if you're a fan.

I also enjoy Stephen King stories.  I say stories because it's typically not the production that wins me over - the novels are uniformly better than the movies.  The Shining, Misery, Christine, Cujo - all great books and very decent movies.  His knack for writing terrifying stories that get at our deepest fears elevates him above other writers of the genre.

So with my expectations possibly set too high, Happy Town was fully disappointing - formulaic writing, overly produced, and all-around cheesy.  I want to know who to blame if it's too much like CSI.  It's unlikely I'll be back for a second episode; 20 minutes was more than enough.

 

]]>
tag:kevinlochner.com,2013:Post/426029 2010-04-21T22:40:57Z 2013-10-08T16:53:01Z Facebook Connect Fail Flash? Really? Funny given that Pandora was featured in the f8 keynote.

]]>
tag:kevinlochner.com,2013:Post/426031 2010-03-03T20:03:45Z 2013-10-08T16:53:01Z (not) Powered by Scala ]]> tag:kevinlochner.com,2013:Post/426032 2010-03-03T18:35:00Z 2013-10-08T16:53:01Z The New Dashboard for Facebook Platform

Facebook has decided to do away with notifications - the little messages notifying you of things friends did on platform applications, or giving you updates on applications you have installed:

While some of these messages are more nuisance than value, users have the ability to block notifications from specific applications, and many applications depend on these notifications to enable continuity of user experience (chess is a good example - knowing when to play).

The Dashboard

Facebook decided that these notifications were too intrusive, or something like that, and replaced them with something called the "Dashboard":

The dashboard functions as a place to see notifications, grouped by application and limited to the most recent three displayed in the summary.

This is fine.  Hell, it could even be considered an improvement.

Getting Alpha and Beta Confused

But the problem is that Facebook nuked notifications while the Dashboard was still in alpha.  On launch day, the Dashboard wasn't operational for many applications.  Annihilating notifications while the replacement feature was in alpha was irresponsible, especially given that many applications critically depend on light-touch messaging to provide a decent user experience.

XML isn't Supposed to be Pretty, but WTF?

I'll sign off with a little sample of what's coming out of Facebook these days, an XML response to a Dashboard API call.  

Finding the data in the response is like playing "where's waldo?" with XML  (hint:   "Message Content", "http://url.net", "Text Content", "TimeStamp", "MessageID"). 

I also like how Facebook devs use "elt" repetitions to indicate nesting depth, I can just hear the conversation now:
    "did you mean the  response_elt or the response_elt_elt?"
     . . . "dammit, I said the response_elt_elt_elt"
<Dashboard_getNews_response list="true">
    <Dashboard_getNews_response_elt key="312266072840" list="true">
        <Dashboard_getNews_response_elt_elt key="image" />
        <Dashboard_getNews_response_elt_elt key="news" list="true">
            <Dashboard_getNews_response_elt_elt_elt list="true">
                <Dashboard_getNews_response_elt_elt_elt_elt key="message">
                      Message Content
                </Dashboard_getNews_response_elt_elt_elt_elt>
                <Dashboard_getNews_response_elt_elt_elt_elt key="action_link" list="true">
                    <Dashboard_getNews_response_elt_elt_elt_elt_elt key="href">
                        http://url.net
                    </Dashboard_getNews_response_elt_elt_elt_elt_elt>
                    <Dashboard_getNews_response_elt_elt_elt_elt_elt key="text">
                         Text Content goes here
                    </Dashboard_getNews_response_elt_elt_elt_elt_elt>
                </Dashboard_getNews_response_elt_elt_elt_elt>
            </Dashboard_getNews_response_elt_elt_elt>
        </Dashboard_getNews_response_elt_elt>
        <Dashboard_getNews_response_elt_elt key="time">
              TimeStamp
        </Dashboard_getNews_response_elt_elt>
        <Dashboard_getNews_response_elt_elt key="fbid">
              MessageID
        </Dashboard_getNews_response_elt_elt>
    </Dashboard_getNews_response_elt>
</Dashboard_getNews_response>
]]>
tag:kevinlochner.com,2013:Post/426033 2010-02-21T20:16:00Z 2013-10-08T16:53:01Z My Brief Stint as a Groupon User

learning again that you get what you pay for

Groupon

For those that are unfamiliar, Groupon is a collective buying site, similar to Woot and BuyWithMe.  Borrowing from the Wikipedia entry:

The Groupon works as an assurance contract using ThePoint's platform: if a certain number of people sign up for the offer, then the deal becomes available to all;  if the predetermined minimum is not met, no one gets the deal that day. This reduces risk for retailers, who can treat the coupons as quantity discounts as well as sales tools. Groupon makes money by getting a cut of the deal from the retailers

Sounds great, right?  Retailers build sales by offering a one-time discount, contingent on quantity, and consumers get a good deal on some product or service, with Groupon taking a little cut.

Well, as with anything with even trivial complexity, the devil is in the details. 

My Experience

I was overdue for a dental exam, my insurance doesn't exactly have stellar dental coverage, and I happened to be checking out Groupon when the deal of the day was a $60 exam/cleaning/x-rays at a San Francisco dental office.

Great! Right?

I bought into the deal, and actually waited about a month before booking my appointment.  Well, before trying to book my appointment.  When I called the office, I was greeted not by the receptionist, but by this pre-recorded message:

The following information applies only to our Groupon.com patients that have not yet scheduled with us but who have already called or emailed our office.  We are currently responding to Groupon.com user calls and emails in the order in which they were received.  If you have already called or emailed to schedule your Groupon.com appointment, we ask that you be patient and please not call or email us again.

That's when I decided to run the numbers, something I probably should have done beforehand:

  • 585 Groupons issued 
  • 10 new patients/week (estimate)
  • 58 weeks before I get an appointment

Now I feel like an idiot, and simultaneously realize the brilliance of the Groupon model - they can earn interest on the float of unredeemed groupons, which is huge for services that can't be consumed instantaneously. [edit - I'm told that Groupon pays out vendors immediately on the deal, which in theory should mean the vendors can sit on the cash until redeemed]

I have to mention that a simple email to groupon and my money was quickly refunded, so I have no beef with Groupon.  The experience made me wonder though: how many vendors would be prepared for the massive Groupon demand spike?

A Quick Look at Some Recent Deals

Here are some of the recent vendors/deals and quantities issued for Groupon:

  • Indian Restaurant Discount - 2549
  • Admission to a Party - 1650
  • Food Festival Entrance - 216
  • Spa Session - 1734
  • Asian Restaurant Discount - 2111
  • Salon Haircut - 833
  • Massage - 998

Here are some of the discussions centered around the deals:

I’ve been trying to book an appointment online and it just says “No available times were found” for every date in January and February… Is that just because they are overwhelmed by the number of Groupons purchased?

They made me feel like it was my fault that they had too many people buy this (1700 people) so they just can’t handle the capacity, so they haven’t been able to get back to everyone.

I just called and found their phone number was disconnected, too.

Not all the comments were negative -- for goods/services that are typically provided in high volume, people seemed extremely happy with their Groupon experience.  For the time-intensive services, the user response was a bit more spotty. 

Concluding Notes

Again, my experience wasn't great, but getting a refund was no problem whatsoever.  It may take a little time for consumers and vendors to flesh out the best way to manage their respective Groupon experiences, but there seems to be a real benefit from getting consumers excited about buying together on the day's deal.  So I expect Groupon to be around for a while, continuing to incite mania-induced group purchasing of local goods and services. 

As a side note, am I the only person who wonders why a "Collective Action Engine" is necessary to drive the Groupon site?  It sounds a lot like business-speak for one line of code saying "don't issue the deal until N customers have bought."  Maybe I'm missing something.

]]>
tag:kevinlochner.com,2013:Post/426037 2010-02-05T00:47:00Z 2013-10-08T16:53:01Z Rails: Growing Pains With ActiveRecord

Rails, does it scale? 

That horse has been beaten to death, and there is ample evidence that it does scale with a little elbow grease where necessary.  

It's also commonly known that many of the rails convenience methods aren't terribly good at making complex queries.

What I've just realized is that some of the seemingly simple operations are implemented with horrible efficiency (the current tech-lingo would be "non-performant," which makes me puke a little into my mouth when I write it).

Today I found two big performance problems:

HABTM relation creation

Take the following:

You would expect that code to do a single insert for all the join entries.  In reality you get something like this, repeated for each bar:

bars_foos Columns (1.1ms)  SHOW FIELDS FROM bars_foos
SQL (0.6ms)  INSERT INTO bars_foos (bar_id, foo_id) VALUES (100, 117200)

I hand coded sql to do the same thing, and found a 7x speedup:
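The replacement was along these lines (a sketch of the approach; the exact original wasn't preserved): build one multi-row INSERT and run it through the raw connection.

```ruby
# Build a single multi-row INSERT for the join table instead of one
# INSERT per bar.
def bars_foos_insert_sql(foo_id, bar_ids)
  values = bar_ids.map { |bar_id| "(#{bar_id.to_i}, #{foo_id.to_i})" }
  "INSERT INTO bars_foos (bar_id, foo_id) VALUES #{values.join(', ')}"
end
# ActiveRecord::Base.connection.execute(bars_foos_insert_sql(@foo.id, bar_ids))
```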

 

HABTM relation deletion

This one should be a layup:

@foo.bars = []

but I was shocked to see this in my sql log:

 

I hand coded the trivial sql and got a 10x speedup:
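Again, roughly (a sketch): one DELETE scoped to the foo, run through the raw connection.

```ruby
# Clear all join rows for one foo with a single statement.
def bars_foos_delete_sql(foo_id)
  "DELETE FROM bars_foos WHERE foo_id = #{foo_id.to_i}"
end
# ActiveRecord::Base.connection.execute(bars_foos_delete_sql(@foo.id))
```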

 

In Conclusion

These aren't difficult or complex operations, and they're extremely common.

I looked into ar-extensions, but there doesn't seem to be any support for HABTM relation creation.

I guess for now these queries should be hand-coded once tables get to a certain size, but I'm still in minor disbelief.  Anyone with a better "Rails Way" to do this, please clue me in.

]]>
tag:kevinlochner.com,2013:Post/426039 2009-10-08T04:50:00Z 2013-10-08T16:53:01Z recipe for disaster: Facebook Connect + AJAX + Internet Explorer

I just had an unbelievable debugging experience with our facebook connect site:

One of our pages includes three main elements:
  • a link to post to your facebook stream
  • a link to update part of the page with AJAX
  • an input form


I had the page working fine in all sane browsers, and went to verify that everything was kosher on internet explorer. You're really not going to believe this.

The Problem:

The following sequence consistently disabled all the textarea fields of my form:
  1. open the facebook "post to your stream" dialog box
  2. post the message (or close the box, didn't matter) 
  3. click the AJAX link

I tested the page in every different way, and those steps had to be performed in exactly that order to reproduce the effect, but it consistently disabled all text areas on the page.

Debugging javascript in internet explorer isn't the most enjoyable experience - the IE javascript debugger is a major downgrade from firebug. I tried using firebug lite, but the above sequence actually DISABLED THE FIREBUG CONSOLE!?!?

The Solution:

The hack I worked out to fix this mess is more unbelievable than the problem.

I noticed that if I clicked on one of the input fields after opening the facebook dialog box, the form fields wouldn't be disabled after subsequently making the ajax call. It really didn't matter which form field I clicked, as long as it happened between closing the facebook dialog and making the ajax call. So I stuck a hidden form at the top of the page:

  <div style="overflow: hidden; width: 1px; height: 1px;">
      <form>
          <textarea id="ajax_fix"></textarea>
      </form>
  </div>


Then, I appended the following callback to FB.Connect.streamPublish (which gets called after the dialog is closed):

if ($("ajax_fix")) {$("ajax_fix").focus();};

. . . and problem solved. I still don't believe it.

It's clearly a ridiculous hack. I just don't know whether to feel dirty for putting it in or triumphant for getting it to work.

]]>
tag:kevinlochner.com,2013:Post/426043 2009-09-23T17:44:04Z 2013-10-08T16:53:02Z Work Party A professor of mine once told a story about another hardware engineering professor, showing up for work Friday afternoon with a cooler in tow and a big smile on his face, saying "My wife and kids are out of town, I can work *all* weekend."

Coming from the perspective of an unattached grad student who could work all hours of the day/night, that statement seemed a little on the absurd side. Now that I'm living with my girlfriend of 4 years who prefers that I join her for dinner and keep fairly sane sleeping hours, I can appreciate the sentiment.

My girlfriend just flew out of town through Friday, and I have a case of diet coke and a big smile on my face.]]>
tag:kevinlochner.com,2013:Post/426044 2009-08-07T00:29:00Z 2013-10-08T16:53:02Z Sanitizing Names for Files

I use database record names for file names when generating reports, but some of them contain invalid filename characters. The attachment_fu plugin has a sanitize_filename method, but I wasn't so happy with the output, e.g.,

 
>> sanitize_filename("foo & bar")
=> "foo___bar"
>> sanitize_filename("14th @bar")
=> "14th__bar"

 
So I wrote a prettier sanitization helper:
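The helper is along these lines (a reconstruction that reproduces the outputs below; the exact original wasn't preserved):

```ruby
# Turn a record name into a readable filename: spell out & and @,
# squash runs of anything else unsafe into single underscores, and
# trim underscores from the ends.
def sanitize_for_filename(name)
  name.gsub(/&/, ' and ').
       gsub(/@/, ' at ').
       gsub(/[^\w.\-]+/, '_').
       gsub(/\A_+|_+\z/, '')
end
```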

 
which gives me:
 
>> sanitize_for_filename(" foo & bar ")
=> "foo_and_bar"
>> sanitize_for_filename("14th @bar")
=> "14th_at_bar"

 
Much better.]]>
tag:kevinlochner.com,2013:Post/426045 2009-08-03T23:57:00Z 2013-10-08T16:53:02Z Call Markets, Efficiency, and High-Frequency Trading

Michael Wellman (my Ph.D. advisor) started a blog last week, and he wrote a short piece about economic efficiencies arising from high-frequency trading, which I was pointed to by Dan Reeves' blog post on messymatters.com. Mike says:

Some have suggested that rapidity of response capability per se could open up manipulation possibilities or is otherwise destabilizing. We have also seen questions about whether diverting trade surplus toward whomever builds the biggest fastest network is an efficient use of resources, and the implications for perceptions of fairness across the trading public.
 
He boils the problem down to the support of continuous trading:
 
The root of the problem, in my view, is the system's support for continuous-time trading. In a continuous market, trades are executed instantaneously whenever there are matching orders, and introduction of an unmatched order likewise causes an instantaneous update to the information available to traders.
 
The solution he proposes is the use of periodic clearing in equity markets, i.e., call markets:
 
An alternative would be a discrete-time market mechanism (technically, a call market), where orders are received continuously but clear only at periodic intervals. The interval could be quite short--say, one second--or aggregate over longer times--five or ten seconds, or a minute. Orders accumulate over the interval, with no information about the order book available to any trading party. At the end of the period, the market clears at a uniform price, traders are notified, and the clearing price becomes public knowledge. Unmatched orders may expire or be retained at the discretion of the submitting traders.
 
Since my dissertation topic was "Multiattribute Call Markets" (pdf), I feel a little satisfaction in possibly steering his focus back to this alternative market design (he had done a good amount of research on call markets prior to my becoming his student). Having spent my grad school years studying call markets, I'd like to add to the discussion with a short example of how inefficiencies may arise from continuous-time trading.
 
Assume we have three traders, Joe Sixpack, Jenny Soccermom, and J.P. Goldman. Joe has 1 share of thinly-traded InterWeb stock that he's looking to sell, and Jenny is looking to buy one share of InterWeb. Joe is willing to sell for anything over $1, while Jenny is willing to pay as much as $20. Joe and Jenny aren't savvy traders, so they just submit their reserve prices as bids. J.P. will ultimately make an arbitrage trade, submitting a buy offer for $2 and a sell offer for $19.
 
With continuous-time trading, J.P. monitors the stock quotes on a microsecond time scale, sees Joe's sell offer, and snatches it up at $2 with confidence that he can re-sell it in the near future. When Jenny comes along, J.P. flips the stock to her at a price of $19. In the end, Joe and Jenny have both successfully executed trades, but had to pay close to their reserve prices to execute. J.P. has effectively sucked $18 out of the system, in return for providing "trade liquidity".
 
With a call market, buy and sell offers are collected over a short duration of time (e.g., 1 second). In our example, all four offers will be considered together when the market is cleared (buy from Jenny, sell from Joe, and buy/sell from J.P.). When trades are ultimately executed, Jenny trades directly with Joe, since their offers represent the strongest buy and sell offers, respectively. J.P. doesn't trade, since his offers don't add value to the ultimate allocation.
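Here's a toy sketch of that clearing step with the four offers (illustrative only; real call markets use more careful price rules and handle ties and partial fills):

```ruby
# Limit prices for one share each, sorted strongest-first.
buys  = [["Jenny", 20], ["J.P.", 2]].sort_by { |_, p| -p }  # best bid first
sells = [["Joe", 1],    ["J.P.", 19]].sort_by { |_, p| p }  # best ask first

# Count how many bid/ask pairs cross, then clear them all at one uniform
# price -- here, the midpoint of the last matched bid and ask.
k = 0
k += 1 while k < buys.size && k < sells.size && buys[k][1] >= sells[k][1]

trades = (0...k).map { |i| [buys[i][0], sells[i][0]] }
price  = k > 0 ? (buys[k - 1][1] + sells[k - 1][1]) / 2.0 : nil
# trades == [["Jenny", "Joe"]], price == 10.5 -- J.P.'s offers don't
# cross, so he provides no value to the allocation and sits out.
```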
 
In the latter scenario, the call market has effectively replaced the "liquidity value" provided by J.P., providing inherent liquidity through a time-based aggregation of trades.  In a later post, Mike related this liquidity provision to a "short-lived dark pool", an idea put forth by Felix Salmon.
 
I would expect a lot of pushback from those trading shops that have already invested heavily in continuous trading platforms, so don't expect to see call markets deployed anytime soon without a strong grass-roots effort from the academic community.

]]>
tag:kevinlochner.com,2013:Post/426047 2009-04-27T23:45:00Z 2013-10-08T16:53:02Z shortening urls

@codinghorror posted "trying to formulate an algorithm for collapsing a long URL legibly." This sounded like a fun distraction from a semi-painful Monday, and the first thing that occurred to me was using letter frequencies to prune urls down. I couldn't resist coding it up to see how well it would work.
 
First, here's the code:
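Reconstructed from the sample output below (the original snippet didn't survive), the idea is: protect the first and last letter of each path segment, then delete letters in rough English-frequency order until the segment fits.

```ruby
# Letters in rough descending English frequency ("etaoin shrdlu...").
FREQ_ORDER = %w[e t a o i n s h r d l u c m w f g y p b v k j x q z]

# Shrink one path segment, keeping its first and last letter intact.
def shorten_segment(segment, max_len)
  return segment if segment.length <= max_len || segment.length < 3
  middle = segment[1..-2]
  FREQ_ORDER.each do |letter|
    break if middle.length + 2 <= max_len
    middle = middle.delete(letter)  # drop every occurrence of this letter
  end
  segment[0, 1] + middle + segment[-1, 1]
end

# Shorten each path segment of a url, leaving the host alone.
def shorten_url(url, host, max_len)
  path = url.sub(host, '')
  host + path.split('/').map { |seg| shorten_segment(seg, max_len) }.join('/')
end
```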

 
So, for example:

 
>> url = "http://my.com/makethis/longish/url/shorter"
>> host = "http://my.com/"
>> shorten_url(url, host, 7)
=> "http://my.com/makthis/longish/url/shorter"
>> shorten_url(url, host, 6)
=> "http://my.com/makhis/lngish/url/shortr"
>> shorten_url(url, host, 5)
=> "http://my.com/mkhis/lngsh/url/shorr"
>> shorten_url(url, host, 4)
=> "http://my.com/mkhs/lgsh/url/shrr"
>> shorten_url(url, host, 3)
=> "http://my.com/mks/lgh/url/srr"
 


not bad for a first cut

]]>
tag:kevinlochner.com,2013:Post/426005 2009-04-09T19:23:00Z 2013-10-08T16:53:01Z Handling Unsubscribe Requests in Ruby On Rails With ActionMailer

If you're sending non-transactional email, you should be handling unsubscribe requests. So let's assume you have a database table of unsubscribed emails, and want to check this table before sending anyone an email. In RoR, the quick and dirty solution is to check for unsubscribe status before calling any deliver_action method in your mailer classes:


MyMailer.deliver_action(address) unless unsubscribed(address)

A better solution is to modify the ActionMailer::Base class with a new recipients method, so that your mailer classes can just call your modified method. Here's what I did:
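Roughly this (a sketch of the approach; `Unsubscribe` is a stand-in name for the model over the unsubscribed-emails table, and the module wrapper is just for presentation):

```ruby
# Filter unsubscribed addresses out before handing the list to
# ActionMailer's recipients method.
module ValidRecipients
  def valid_recipients(addresses)
    live = Array(addresses).reject { |addr| unsubscribed?(addr) }
    recipients(live)
  end

  def unsubscribed?(addr)
    Unsubscribe.exists?(:email => addr)  # the unsubscribed-emails table
  end
end
# ActionMailer::Base.send(:include, ValidRecipients)
```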

So now in my mailer class, I just call:


valid_recipients addr

instead of


recipients addr

and anytime I attempt a delivery, unsubscribed addresses will be automatically removed from the recipient list.

]]>
tag:kevinlochner.com,2013:Post/426009 2009-04-03T17:13:00Z 2013-10-08T16:53:01Z rails sitemap entries for pages without models

In building a sitemap for my rails app, I went off of this blog post:
 
http://tonycode.com/wiki/index.php?title=Ruby_on_Rails_Sitemap_Generator
 
but I have public-facing pages for which no ActiveRecord model exists.
 
I added the following to my sitemap controller to add entries for a given controller:
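Something like this (the helper name matches what I call below; the body is a sketch and assumes default /:controller/:action routes):

```ruby
# Build a sitemap URL for every public action the controller class
# defines itself (not inherited ones).
def get_public_action_urls(controller_class)
  path = controller_class.controller_path
  controller_class.public_instance_methods(false).map do |action|
    "http://#{request.host}/#{path}/#{action}"
  end
end
```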

Then, for example, in my controller I just call:

@info_pages = get_public_action_urls(InformationPagesController)

 

and finally, in my view:

@info_pages.each do |entry|
  xml.url do
    xml.loc entry
    xml.lastmod w3c_date(Time.now)
    xml.changefreq "weekly"
    xml.priority 0.9
  end
end

oh, and in my sitemap_helper:

def w3c_date(date)
  date.utc.strftime("%Y-%m-%dT%H:%M:%S+00:00")
end

 

]]>
tag:kevinlochner.com,2013:Post/426011 2009-03-17T22:16:00Z 2013-10-08T16:53:01Z rails environments with workling

I use mongrel_cluster to manage my rails app settings, and handle the actual start/stop tasks via monit. It would be asking for trouble to hard-code the rails environment in my monitrc file, so I modified listen.rb in the workling plugin to take a config file argument.
 
With that done, I just needed to modify my monitrc file to start my workling process with the path to my mongrel_cluster config file.
 
It seems like there should be a better way to handle this, but the documentation on workling is almost non-existent.
 
workling.rb edits:
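The edit boils down to something like this (a sketch of the idea, not the actual workling source): accept a `-c` pointing at the mongrel_cluster yaml and pull the environment out of it.

```ruby
require 'optparse'
require 'yaml'

# Read the rails environment out of a mongrel_cluster config file
# passed with -c, falling back to development.
def rails_env_from_args(argv)
  config_file = nil
  OptionParser.new do |opts|
    opts.on("-c", "--config FILE", "mongrel_cluster config") { |f| config_file = f }
  end.parse!(argv)
  config_file ? YAML.load_file(config_file)["environment"] : "development"
end

# At the top of the workling runner:
#   ENV["RAILS_ENV"] = rails_env_from_args(ARGV)
```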



and then my monit start command for workling:

 
start program = "/path/to/workling_client start -- -c /path/to/mongrel_cluster.yml"
]]>