Category Archives: Tips

Toss Your Salad Code

2 minutes, 31 seconds

After my friend over at The Worst Thing in the World (TWTITW) recently had a post about radar charting a salad’s many axes, I thought it’d be funny to write a page that actually made the graphs. I’d been meaning to write a post about using MaxMind’s GeoIP library and data to do a heat map of web activity, but this will be a good warm up. Behold the fully functional Plip Salad Tosser.

Let’s take a look at the components of the Tosser:

The Chart: I like using APIs that are easy, make me look good, and are well documented. That’s why I love Google’s Chart API. For this page I used their radar chart and referenced which features are available at the handy chart feature matrix. A few nice touches were adding the dotted lines and the extra data point to complete the loop (is that a radar chart faux pas?).

Arrays and quick forms: Arrays are really handy. They make a form like this easy to change, and you can implode them like mad for loop-less string creation of chart labels and the like. Further, you can bust out a quick loop to check for posted values or init random values for the form. WAY easier than hard coding the HTML and making edits over and over again.
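As a quick aside, here’s a minimal sketch (using the same axis list the Tosser seeds below; the $ratings array is just made-up data) of the loop-less string building that implode() buys you:

// the same axis list the Tosser seeds below
$options = array('Architectural', 'Conceptual', 'Cultural', 'Financial',
	'Gustatory', 'Logistical', 'Taxonomic');

// one implode() builds the pipe-delimited axis labels...
print implode("|", $options); // Architectural|Conceptual|Cultural|...

// ...and another turns an array of ratings into the comma-delimited chart data
$ratings = array(10, 20, 30, 40, 50, 60, 70);
print implode(",", $ratings); // 10,20,30,40,50,60,70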

Final Product: Here’s a salad radar chart made from step 3, Publish!

PHP: Here’s the top part of the page which does the actual form processing and image source concatenating.

// seed all options
$options = array(
		'Architectural',
		'Conceptual',
		'Cultural',
		'Financial',
		'Gustatory',
		'Logistical',
		'Taxonomic',
	);

// see if they want a random one
if (isset($_POST['No,_no,_wait____toss_me_a_random_salad!'])){
	unset($_POST['d']);
}

// handle POST or init
if (isset($_POST['d'])){
	// grab vals if posted, pull name into local
	// and unset name for easy imploding below
	$postedAxes = $_POST['d'];
	$name = $postedAxes['name'];
	unset($postedAxes['name']);
} else {
	// init empty chart
	foreach ($options as $axes){
		$postedAxes[$axes] = rand(1,100);
	}
	$name = "Random Chart";
}

// seed drop down vals
$i = 1;
while ($i < 101){
	$dropValsRaw[] = $i++;
}

// build up form HTML by looping through
// every axis
$formHTML = "";
foreach ($options as $axes){
	$formHTML .= "<label>$axes ";
	$formHTML .= "<select name=\"d[$axes]\">";

	// loop through every axis value and check for the
	// posted value so folks can easily tweak their graphs
	foreach ($dropValsRaw as $i){
		if ($i == $postedAxes[$axes]) {
			$formHTML .= "<option selected=\"selected\">$i</option>";
		} else {
			$formHTML .= "<option>$i</option>";
		}
	}
	$formHTML .= "</select></label><br />\n";
}

// concat final img src and use
// implode along the way where needed
$imgSrc  = "http://chart.apis.google.com/chart?";
$imgSrc .= "cht=r&chs=500x400&";
$imgSrc .= "chd=t:".implode(",",$postedAxes).",".array_shift($postedAxes)."&";
$imgSrc .= "chco=669933&";
$imgSrc .= "chls=2.0,4.0,0.0&";
$imgSrc .= "chxt=x&";
$imgSrc .= "chxl=0:|".implode("|",$options)."&";
$imgSrc .= "chxr=0,0.0,360.0&";
$imgSrc .= "chm=B,66993380,0,1.0,5.0&";
$imgSrc .= "chg=25.0,25.0,4.0,4.0&";
$imgSrc .= "chtt=$name"; 

HTML: Below the PHP I have some descriptive text and a few headers, and then I output the fruit of our lil’ script’s labor:

Salad Maker

My friend over at The Worst Thing in the World (TWTITW) recently had a post about the axes of salad complexity. Based on their findings of 7 axes*, I wrote a lil' form to enter and toss your own salad. Read up on the code behind this page over at yon blog.

Go forth, make your salad, rate it, and use this form to publish your salad's findings (or is that finding your salad?).

* - Subject to TWTITW's whim and research

Step 1: Data Entry


Step 2: Analize

" />

Step 3: Publish

Final Thought: Projects like this that take about two hours are fun and very gratifying for me. Leveraging existing code bases like Google’s Chart API is a great way to make a friend’s funny blog post into a fun educational experience that looks snazzy. Send in questions or comments!

Alternate way to have Google Analytics track PDFs

1 minute, 26 seconds

Recently I needed a way of tracking PDF viewing on a web site. Google Analytics (GA) does a great job of tracking page views, but it requires JavaScript to work. Further, a PDF download will not fire any JavaScript and is only tracked in the web server log, which GA knows nothing about. What to do?

For starters, you could use GA’s tip: How do I track PDFs on my site? The big caveat with this technique is that the GA JavaScript takes a sec to load, and is generally best kept at the bottom of the page so the user experiences all your content loading “quickly” while the GA JavaScript can load “slowly” with no perceived slowness. Since most modern browsers are not really multi-threaded, moving the GA code to the top will give the user a slow, bad experience.
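For context, GA’s suggested markup looked more or less like this (a from-memory sketch built on the classic ga.js pageTracker object; the file names are placeholders):

<a href="/files/whitepaper.pdf" onClick="javascript: pageTracker._trackPageview('/downloads/whitepaper.pdf');">Whitepaper</a>

The catch is that pageTracker doesn’t exist until the GA JavaScript has loaded, which is exactly what pushes the GA include toward the top of the page.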

A better technique is to use JavaScript just like GA recommends, but instead of firing their page view code, fire your own in an iframe! I have a site-wide JS file to which I added this function (jQuery required):

// fire a page request in a hidden iframe; good for
// Google Analytics tracking of offsite links, PDFs, etc.
function loadFrame(goto_url)
{
   if ($("#loaderFrame").attr("id") != undefined){
      $("#loaderFrame").attr("src",goto_url);		
   }
   return true;
}
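For loadFrame() to have anything to point at, the page also needs a hidden iframe with the loaderFrame id somewhere in its markup; something along these lines (the styling is just one way to keep it invisible):

<iframe id="loaderFrame" src="about:blank" style="display:none;" width="0" height="0"></iframe>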

Now on any page that I want to track PDFs, I have a link that looks like this:


   <!-- example paths; point these at your real PDF -->
   <a href="/docs/PlipBlogger.pdf" onclick="return loadFrame('/pdfTracker?pdf=/docs/PlipBlogger.pdf');">PlipBlogger</a>


The net result is that:

  1. the link is clicked
  2. loadFrame() gets called
  3. loadFrame() sets the source of the iframe to /pdfTracker, thus loading the iframe with your PDF as its query string
  4. the end user sees none of this and their browser natively handles the PDF download.

On the /pdfTracker page, you would send the user’s browser a small payload including the GA JavaScript to track that this PDF was loaded. When you next go to GA, you should see all your PDFs behind the /pdfTracker? URL. Handy!
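To round this out, here’s a minimal sketch of what a /pdfTracker page could contain (the account ID is a placeholder, and this assumes the async ga.js snippet; use whatever GA flavor your site already runs):

<html>
<head><title>pdfTracker</title></head>
<body>
<script type="text/javascript">
	var _gaq = _gaq || [];
	_gaq.push(['_setAccount', 'UA-XXXXXXX-1']);
	// record the PDF passed on the query string as its own page view
	_gaq.push(['_trackPageview', '/pdfTracker' + window.location.search]);
	(function() {
		var ga = document.createElement('script');
		ga.type = 'text/javascript';
		ga.async = true;
		ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
		var s = document.getElementsByTagName('script')[0];
		s.parentNode.insertBefore(ga, s);
	})();
</script>
</body>
</html>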

Fancy Diff

0 minutes, 53 seconds

[screenshot: simple diff]

Let’s say you’ve edited a long HTML file and want to find what’s changed against the original. If you were on the command line, you’d turn to ‘diff’, right? This is all fine and dandy, if a bit mundane and somewhat obfuscated as to what has actually changed.

 
[screenshot: fancy diff]

However, if you wanted to view the entire files side by side, have the changes appear in the context of the rest of the file, and easily see where lines have been removed or added, you would use diff with the flags ‘-yiEbwB’. Now that is a diff I can get used to.
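Spelled out, that’s just the following (file names are placeholders): -y gives the side-by-side view, -i ignores case, -E ignores changes from tab expansion, and -b, -w and -B ignore the various flavors of whitespace-only and blank-line changes.

diff -yiEbwB original.html edited.html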

 
[screenshot: megawide fancy diff]

Now, if you had a massive file, say a WSDL (say “wizdull”) file from a certain CRM you use at work, and you wanted to compare differences, diff to the rescue again. However, because we’re seeing the entire file, which is over 450K, the only way to parse through it is via ‘less’. If you couple this with increasing the width to 300 columns, it makes reviewing the changes a breeze. Further, you can search within the diff by using the search features in less.

diff -yiEbwB --width=300 enterprise.wsdl enterprise2.wsdl |less


Old, broken USB hub = iPod Charger

0 minutes, 14 seconds

I had an old, powered mini-USB hub that stopped working with my webcam and USB scanner. I was about to toss it when I pondered whether it would still send power to whatever was plugged in. Sure enough, I now have a wall-wart-endowed charger for my first gen iPod Shuffle. Handy!

WordPress: Rich Man’s Blog, Poor Man’s CMS

1 minute, 19 seconds

[screenshot: sample preview]

I’m a big fan of WordPress. I use it here at plip, I use it at work, and most of my friends use it, save one. The other day, while commuting over the Bay Bridge, my co-worker and I were singing the praises of how easy WordPress is to use, not only as a blogging platform but also as a poor man’s CMS. I’m very partial to Xaraya as an awesome, full featured, unstoppable CMS. However, what about when you have just 3 or 5 pages on your entire site? You wanna help out grandma/great uncle/distant niece with their new web site, but you want to let them edit at their whim. WordPress it!

To prove that this would be easy, I timed myself to download, configure and create a fully WordPress powered site. I didn’t do a dry run or prep any downloads or .htaccess rules. My MySQL user didn’t exist. From scratch I:

  • Downloaded WordPress
    wget http://wordpress.org/latest.zip
  • Unzipped it
  • Created the directory structure
  • Made an exception in my mod_rewrite so that you could browse a sub-directory. Otherwise I have WordPress on greedy mode:
            RewriteCond %{REQUEST_FILENAME} !(/blog/simple-wp) [NC]
            RewriteRule   ^/blog(.*) /core/blog$1  [PT]
  • Surfed around and found some random theme to use
  • Spent a sec hacking the header, footer, page and sidebar templates to have the right DOM elements
  • Whipped up 3 pages and set one to be the default home page of the site
  • Spot checked the new site in IE, FF3 and Safari

Total time according to Toggl was 40 minutes. 40 minutes to have your own rounded-corner, fully-dynamic-navigation (1 page = top navigation), no-blog-in-site site. Awesome!

Click the image to see the site created in this post. Also, you can download the modified theme to see how I implemented it.

New news, old open source

1 minute, 33 seconds

Hi faithful readers! Notice anything different about the news on ol’ plip? No? Probably not, because there’s been none since December of aught four. It’s no surprise no one noticed. However, to fight the stale news, encourage freshness, and thus garner a new, rejuvenated mass of readers, old open source solutions have taken hold of the news: WordPress on our LAMP stack running on Linux! OK, the last two are not new, but WordPress is, and all our news will live there.

On deck in the news section are geeky, bloggy types of news: PHP tips and tricks, adventures and places to see, media consumed, and ramblings. I guess the last one is redundant, as they’ll all start to ramble.

To give you a taste, here’s the PHP tip of the week: I used to use Magpie RSS to consume RSS feeds I wanted to re-syndicate on another site. Though it does have some nice features, like normalizing different feed formats and using a cache file, it was a bit heavy for just sucking in some news for my own use. In this case, I wanted to self-syndicate to plip’s home page. Enter the simplexml_load_string() function! I can grab the latest news at the top of my home page:

//prep loading blog contents
$blogURL = "http://".$_SERVER['HTTP_HOST']."/blog/feed/";
$blogRSSraw = file_get_contents($blogURL);
$blogXML = simplexml_load_string($blogRSSraw);

And then further down, I can loop through all the entries and do a little formatting to output the latest news for y’all. Note that each child item (a blog post in this case) is its own SimpleXMLElement object. Further note that each key/value pair (e.g. title, date, etc.) has a magic “to string” function (which, as my co-worker and I found, can do unexpected things if you’re print_r-ing).

foreach ($blogXML->channel->item as $post){
	print "<a href=\"".$post->link."\">".$post->title."</a>";
	print "<br />".strftime("%d %b %Y", strtotime($post->pubDate))."<br />";
	print $post->description."<br /><br />";
}

This trick works great if you’re consuming search results from a Google Mini search appliance, which is where I first deployed this technique.