Custom Imports with the Drupal 7 Batch API

While modules like Feeds can often handle simple import or syncing needs, I often find a client will need more fine-grained control. Maybe they’re grabbing data from an API that uses complex authentication, or from a CSV file over FTP. Maybe they need special logic that you can’t achieve through Feeds Tamper. These tools are great for very standard operations, but it doesn’t take much to make using them more trouble than they’re worth. For special cases, here’s a quick guide to rolling your own custom importer using the Drupal 7 Batch API.

.info File

Like any custom module, you’ll start by creating a new folder under sites/all/modules/custom with the desired machine name of your custom module. In this case, we’ll call it custom_import. In that folder, create a file with the same machine name and the .info extension. Here’s what custom_import.info looks like:

name = Custom Import
description = Demonstration of custom importing with the Drupal 7 Batch API.
core = 7.x

Configuration Form

Create a .module file using the same naming convention as the .info file (i.e., custom_import.module in this case). Everything else will go here. We’re going to need a way to trigger the batch import, so we’re going to create a simple Drupal configuration form. For this, we’ll implement hook_permission() to register a permission controlling who can see the form, hook_menu() to register the menu item and page callback, and finally the callback itself. In this case, I decided to call it custom_import_settings(), but it can follow whatever name you like so long as it matches the value supplied to ‘page arguments’ in hook_menu().

function custom_import_permission(){
  $permissions = array(
    'administer custom import' => array(
      'title' => t('Administer custom import settings.')
    )
  );

  return $permissions;
}

function custom_import_menu(){
  $items = array();

  $items['admin/config/system/custom-import'] = array(
    'title' => 'Custom Import',
    'description' => 'Settings page for a Drupal 7 Batch API custom import.',
    'access arguments' => array('administer custom import'),
    'page callback' => 'drupal_get_form',
    'page arguments' => array('custom_import_settings')
  );

  return $items;
}

function custom_import_settings($form, &$form_state){
  $form['custom_import_button'] = array(
    '#type' => 'submit',
    '#name' => 'custom_import_button',
    '#value' => t('Execute Custom Import')
  );

  return $form;
}

If you have additional settings that may affect the import, you can add them as fields to the page callback. I often do this when I need a configurable URL or the import is intended to work off of an uploaded CSV file.
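
For illustration, here’s a rough sketch of how that might look. The feed URL field and variable name are placeholders I’ve made up for this example, not anything required by the Batch API:

function custom_import_settings($form, &$form_state){
  // Hypothetical setting: a configurable feed URL, stored as a Drupal variable.
  $form['custom_import_url'] = array(
    '#type' => 'textfield',
    '#title' => t('Feed URL'),
    '#default_value' => variable_get('custom_import_url', ''),
    '#description' => t('The API endpoint or file the import should read from.')
  );

  $form['custom_import_button'] = array(
    '#type' => 'submit',
    '#name' => 'custom_import_button',
    '#value' => t('Execute Custom Import')
  );

  return $form;
}

In the submit handler, the value is then available as $form_state['values']['custom_import_url'], which you could persist with variable_set() before building the operations array so the saved URL is what gets fetched.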

Batch Process

Until now, we haven’t done anything terribly complex. We’ve created a custom module, registered a permission, and created a configuration page to launch the import. Now we need to show what happens when the button gets pressed. To do this, we create a _submit() handler for our previous settings form. This will generate an array of batch operations that feed into the batch_set() function. The batch_set() function, in turn, feeds each individual record through our custom batch operation. Finally, we’ll need a function to run when the batch process is finished.

function custom_import_settings_submit($form, &$form_state){
  $operations = array();

  /* Access the file or API endpoint and use it to populate the $operations
  array. Each item should be an array, with the first element as the name of
  the op function (e.g., "custom_import_op") and the second element as an
  array of arguments to pass. This will vary significantly based on the data
  source and format. For example, here's how we might access a public API
  endpoint formatted in XML:

  $import_contents = file_get_contents('https://someapi.com/endpoint/xml');
  if (!$import_contents){
    drupal_set_message(t('Unable to fetch import data.'), 'error');
    return;
  }
  $import_data = simplexml_load_string($import_contents);
  if (!$import_data){
    drupal_set_message(t('Unable to parse import data.'), 'error');
    return;
  }
  foreach ($import_data as $item){
    $operations[] = array('custom_import_op', array($item->asXML()));
  }

  */

  batch_set(array(
    'title' => 'Performing Import',
    'finished' => 'custom_import_finished',
    'operations' => $operations
  ));
}

function custom_import_op($item, &$context){
  if (!isset($context['sandbox']['progress'])){
    $context['sandbox']['progress'] = 0;
  }
  $item = simplexml_load_string($item);

  try {

    /* Most import operations involve nodes, but there's no hard rule that says they must. Here is where you will build and save the data you've imported using built-in Drupal functions. If the remote system uses some form of ID, it often helps to make a field to contain it and reference it to load an existing node back in for updating. This way, you avoid duplication and can refine and re-run your import as often as necessary without cluttering up your content. For example: */

    $existing_nid = db_select('field_data_field_remote_id', 'i')
      ->fields('i', array('entity_id'))
      ->condition('i.entity_type', 'node')
      ->condition('i.bundle', 'custom_content_type')
      ->condition('i.field_remote_id_value', (string) $item->id)
      ->execute()
      ->fetchField();
    if (!empty($existing_nid) && is_numeric($existing_nid) && $existing_nid > 0){
      $node = node_load($existing_nid);
    }
    else {
      $node = new stdClass();
      $node->type = 'custom_content_type';
      $node->uid = 1;
      $node->status = 1;
      $node->language = LANGUAGE_NONE;
      $node->field_remote_id[LANGUAGE_NONE][0]['value'] = (string) $item->id;
    }

    /* Map each field in $item to the fields in $node before saving. */
    $node_wrapper = entity_metadata_wrapper('node', $node);
    $node_wrapper->field_remote_id->set((string) $item->id);
    $node_wrapper->save();
  }
  catch (Exception $e){
    drupal_set_message(t('Error encountered while importing: !error_message', array('!error_message' => $e->getMessage())), 'error');
  }

  $context['message'] = t('Syncing "!item_name".', array('!item_name' => $item->name));
  $context['sandbox']['progress'] += 1;
}

function custom_import_finished($success, $results, $operations){
  if ($success){
    drupal_set_message(t('Operation complete.'));
  }
  else {
    $error_operation = reset($operations);
    drupal_set_message(t('An error occurred while processing @operation with arguments: @args', array('@operation' => $error_operation[0], '@args' => print_r($error_operation[1], TRUE))), 'error');
  }
}

Result

If you’ve done everything correctly, you should end up with a configuration page that contains a button. When pressed, that button fetches the data you specify and processes it one record at a time using the exact logic you need. It’ll even show you a handy progress bar and any errors that may be encountered along the way.

It’s worth noting that this is just scaffolding; I’ve used the above code in different contexts. Sometimes the data is in an uploaded CSV file or a JSON feed. Whatever your need, this should help get you started, and please share any questions or improvements in the comments below. One last tip: I sometimes separate the try/catch portion of the batch op into its own function so that I can also use it during a cron run, creating a process that runs automatically as well as on demand.
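
To give a sense of what that separation might look like, here’s a minimal sketch under a few assumptions: the node-building try/catch from custom_import_op() has been moved into a hypothetical helper called _custom_import_process_item(), and cron pulls from the same example XML endpoint used earlier.

/**
 * Hypothetical shared helper: holds the try/catch node-building logic
 * from custom_import_op(), so both the batch op and cron can reuse it.
 */
function _custom_import_process_item($item_xml){
  $item = simplexml_load_string($item_xml);
  // ... build or update the node exactly as shown in custom_import_op() ...
}

/**
 * Implements hook_cron().
 */
function custom_import_cron(){
  $import_contents = file_get_contents('https://someapi.com/endpoint/xml');
  if (!$import_contents){
    watchdog('custom_import', 'Unable to fetch import data during cron.', array(), WATCHDOG_ERROR);
    return;
  }
  $import_data = simplexml_load_string($import_contents);
  if (!$import_data){
    watchdog('custom_import', 'Unable to parse import data during cron.', array(), WATCHDOG_ERROR);
    return;
  }
  foreach ($import_data as $item){
    _custom_import_process_item($item->asXML());
  }
}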

What’s a Smart Watch Actually Good For? My List After the First Year

A lot of folks are still undecided on smart watches. I think we’ve been put off by wearables in the past (e.g., Google Glass), so there’s an understandable reluctance to adopt.

I felt the same way when I picked up my Asus Zenwatch a year ago. I realized going into it that I might end up looking like a huge dork. I always wore calculator watches growing up, though, so that didn’t bother me. If anything, maintaining an appearance of dorkiness has helped in my career as a developer. It’s the same reason I’d rather wear glasses than get laser eye surgery.

Fast forward a year and I can say with confidence that I’d have a hard time going back to life without a smart watch. Is it a must-have? No. As a companion device, anything you can do on your watch you can do on your phone. However, I used to say the same thing about smart phones and computers. It’s not so much about what it can do differently as what it does better. Here’s my list.

  • Not looking at my phone. Really, this is the main benefit. I get notifications all the time. Many of them, like calendar events or website uptime notices (Uptime Doctor via Pushover is very useful), are purely informational. I note them and dismiss them immediately, which takes about one second’s worth of concentration. Taking my phone out for each of these wouldn’t just take much longer; I’d also inevitably check half a dozen unrelated apps out of habit.
  • Unlocking my phone. This is another huge time saver. Later versions of Android allow you to set your watch as a trusted device, meaning your phone will automatically unlock while the two are connected. It’s a slight trade-off in security; after all, anybody with both devices has a free pass. On the other hand, it also lets me use a much stronger PIN, since I only have to manually unlock my device once a day. I’d call it a wash for security, but a huge gain for ease of use.
  • Two-factor authentication. I’ve tried many apps for my watch, few of which I kept. Authenticator Plus was the exception. I set it up as a quick launch option on my Pujie Black watch face, giving me access to my two-factor codes with a single tap. As above, it’s a slight loss of security if my watch is compromised, but a huge gain otherwise.
  • Quick text messages. Text-to-speech can produce some embarrassing results if you trust it for long, complex conversations. For a quick little “okay”, “I love you”, or “I’m on my way”, though, it’s a big time-saver. Anything more than that and I typically take my phone out.
  • Taking notes. I once read Getting Things Done by David Allen. And while I didn’t care for the heavily paper-based organization of it all, I did adopt some aspects of the system. Most importantly, I take lots of notes and process them all weekly. The watch is great for that. I often take quick memos while driving just to get an “open loop” out of my brain.
  • Quick questions. Probably my least favorite expression of all time, since folks poke their heads in my office for “quick questions” that typically turn into half-hour conversations. If you need to look up a simple fact, though, it’s a lot faster to just pose it to your wrist. I find myself doing this a lot when watching movies with my wife, trying to figure out actor names and the like.
  • Timers. I tell my watch to set a timer at least twice every day, once to steep my tea in the morning and once while using mouthwash in the evening. Sometimes, I use it as a “time out” timer, too, when my four-year-old is misbehaving so I know when to let him get back up.
  • Playing and pausing the TV. I cut my cable a long time ago, so I stream a lot of Netflix, Hulu, YouTube, and Plex. My watch has a convenient play/pause button for each of these, which is handy if you need to get up to get a snack or go to the bathroom. It’s also great for remote parenting: I use it to pause my older son’s shows to let him know it’s bedtime.

So, bottom line, my smart watch has been a great time saver. I take out my phone less often and unlock it faster when I do. I log into websites faster, text faster, take notes faster, look up facts faster, set timers faster, and control my TV faster. Is all that worth the $200+ I paid? Maybe. They’re not for everyone. All I can say is I’d have a hard time doing without mine.

Tomorrow’s Internet

In the 14 years I’ve been doing web development, I’ve seen the internet become so ubiquitous that it’s found its way into every aspect of our lives. It’s humanity’s shiny toy. We fumbled with it for a while before we figured out how to really use it, and we’re still dreaming up new ways to play with it.

We’re all familiar with the trends that drive the internet’s growth. Over time, computers are getting smaller and faster, more than ever thanks to the rise of mobile computing. New technologies are being produced to take advantage of these ever-expanding capabilities, spurred on by crowd-funding, which is a new idea in and of itself. Eventually, those technologies move from high-priced gadgets for professionals and enthusiasts into affordable products for consumers at large. It’s a self-perpetuating cycle of technological advancement, innovation, and consumer adoption. It’s also the reason science fiction tends to become reality after a few decades.

If you accept these facts, then predicting the future is as simple as examining the emerging technologies of today, overlaying the data processing capabilities of tomorrow, and applying the polish of consumerism. With that logic in mind, here are my predictions for the internet of tomorrow:

  • Wearable technology (e.g., smart watches, Google Glass) will become the norm. A simple headset combined with voice commands, advanced graphical UX, and optionally gloves to provide gesture support will surpass many of the interface limitations of today’s mobile devices. Combined with deep personalization and integration, most consumers will abandon desktops and tablets in favor of these devices.
  • Ubiquitous, high-speed connectivity will become standard to the extent that it is considered a basic utility like water or electricity, or even a fundamental human right. Analog media and separate data delivery standards (e.g., newspapers, cable television, land-line telephones) will continue their slow, painful death until they finally go the way of 8-tracks and floppy disks. Everything will be connected all the time, rendering other methods of information delivery obsolete.
  • Virtual (and augmented) reality will enter the mainstream. Facebook’s $2-billion bet on Oculus Rift will spur other big name companies to develop in this arena. Combined with the rise of wearable tech and ubiquitous connectivity, standards will emerge that allow ordinary users to define elements of augmented reality that provide meta-experiences to VR-enabled headsets. You won’t have to go to a restaurant’s Facebook page to Like it any more; you’ll just press the virtual Like button on the table in front of you.
  • Like humanity’s favorite science fiction, time and distance will become less relevant. With the advent of augmented reality, you will be able to sit in a cafe and have a chat with the avatar of a friend half the world away. You may even begin to see businesses set up identical layouts so that users can enjoy similar experiences despite being in different physical locations. Public events and vacation destinations will become accessible for a price, allowing you to project virtually to live areas of the real world.
  • With the rise of affordable 3D printing, the line will blur even further. Artists will create items in virtual space that are then reproduced on demand by machines. Entire storefronts will exist that are empty of real products. People will browse items and make selections in virtual space, then have the item printed in real space when they buy it. The same storefront will serve virtual users, who can then pick up their printed selections at a local branch of the chain.
  • Like the internet has always done, augmented reality will pose challenges for the legal system. Augmented space will feature some sort of overarching registry system similar to domain names, leading to conflicts with existing property laws when businesses start registering a virtual presence on top of competitors’ buildings or in people’s homes. Copyright infringement cases will take on a new substance (pun intended) when the items being copied can be manufactured into the real thing.
  • As virtual space gains ground against real space in terms of relevance, society will face new hurdles. Political debates will center around the virtualization of classrooms, marriages between people who’ve never met in real life, and tax breaks for small businesses to give them a chance against hyper-efficient, virtualized mega corporations. Popular contempt for wearable tech will get turned on its head, leading to bias against the disconnected who are blind to AR. Psychologists will have a host of new problems to deal with involving personal identity, reclusiveness, and social disorders.

What does it mean for us web developers? On the bright side, our jobs are safe. Demand for primitive internet presences (e.g., websites) is sure to remain high for backwards compatibility and for users who prefer simpler, 2D experiences. However, much like responsive design, we’ll need to master new standards, such as 3D modeling, in order to meet the new demand for augmented reality elements. It won’t be enough to know HTML any more; we’ll need to become the architects of our clients’ entire virtual presence or risk being left in the wake of the new technology.

Make no mistake, though. These things aren’t just coming; they’re already here. Some of this may sound like Jetsons-esque “flying cars” talk, but the technology for everything I’ve mentioned already exists. It’s only a matter of time before it gets combined into something resembling augmented reality. When it does, it’ll be a brave new world, a lot like the old one but with a tantalizing virtual coating.

Remove Node Lists from Taxonomy Pages in Drupal 7

If you use Drupal, you can’t help but love the Taxonomy module. After all, categorizing content goes hand in hand with creating it or managing it, and Taxonomy gives us a nice, flexible framework, especially when you throw fields into the mix.

Unfortunately, every taxonomy term page comes with a default list of all the content classified with that term. Sometimes you might want this, but sometimes you might not. I’ve run into a few situations in which I needed tighter control over the term page display, like that provided by Views, without wanting to override the entire page.

Here’s the snippet of code I created to deal with those situations. Just put the following in the template.php file of your theme folder:

function THEMENAME_preprocess_page(&$vars){
  if (arg(0) == 'taxonomy' && arg(1) == 'term' && is_numeric(arg(2))){
    unset($vars['page']['content']['system_main']['nodes']);
    unset($vars['page']['content']['system_main']['pager']);
    unset($vars['page']['content']['system_main']['no_content']);
  }
}

This will remove the default content list, as well as the pager and "No content has been classified with this term" text if either is present. On a default installation, that sets the page back to a blank slate, like an ordinary content page, so you can play around with it from there to your heart's content.

jQuery Default Form Text Function

Oftentimes, in place of labels, designers will choose to put default text in a form field to denote the information it should contain. This is good design and usability, but it also creates some functional problems. Obviously, the text should disappear when the field is in focus, but what happens if the user clicks away without entering anything, or submits the form before filling in the field?

This handy jQuery function solves those problems. Default text will appear in the field whenever it is empty and not in focus. The user can click it to remove the text, then click away to reveal it again. If the field has been filled out, nothing happens. As a bonus, if the form in question is submitted with default values, those are cleared before the submission takes place to allow for proper validation.


function default_text(selector, text){
  var element = $(selector);
  if (element.val() == ''){
    element.val(text);
  }
  element.focus(function(){
    if ($(this).val() == text){
      $(this).val('');
    }
  }).blur(function(){
    if ($(this).val() == ''){
      $(this).val(text);
    }
  }).parents('form').submit(function(){
    if (element.val() == text){
      element.val('');
    }
  });
}

$(document).ready(function(){
  default_text('#your_form_element_id', 'Your default text');
});

How to Enable Ubercart Product Image Zoom with Gallery Thumbnails

In my experience, ecommerce websites are their own beasts, with all sorts of specialized functionality that doesn’t get requested on other websites. One of the most frequent requests, and oftentimes one of the most difficult to fulfill, is enabling product image zoom functionality along with gallery thumbnails that can seamlessly swap out the main image. Here is one way to accomplish that on a Drupal 6 website with Ubercart.

  1. Install the necessary modules. For this solution, you’re going to need CCK, ImageField, ImageCache, and Cloud Zoom (I used to prefer jQZoom, but found it to be less compatible with WebKit browsers).
  2. Set up your image field. The code provided below is designed to work with the default image handling field created by Ubercart, but it can work just as well with any CCK image field. Just be sure to swap out the $node->field_image_cache variable with the name of your own field if you’re using something different.
  3. Set up your ImageCache presets. You’re going to need three. Again, the solution below is tailored for use with product images in Ubercart, so I’ve named them accordingly. “product_full” is the main image displayed to the user, “product_zoom” is the larger image shown in the zoom window, and “product_thumbnail” is the gallery thumbnail that allows the user to swap out the main image. You can name these whatever you like; just be sure to change their names in the code as well.
  4. Stop the field from displaying. By default, Cloud Zoom can automatically display the image and zoom window without any need to mess with template files. This is very handy, but it doesn’t support gallery thumbnails, so we’ll need to disable the default behavior. Go into Administer > Content Management > Content Types > Manage fields > Display fields and change the Label, Teaser, and Full node drop-downs for your image field to Hidden. Be sure not to click the Exclude boxes, or else the template modifications below will not work properly.
  5. Add this code to your theme file. In the case of Ubercart, this goes in node-product.tpl.php where you’d like the images to appear. If you’re using a different content type, use that in place of product. And again, be sure to switch out the image field variable name or the ImageCache preset names if you’re using something different.

    <?php

    drupal_add_js(drupal_get_path('module', 'cloud_zoom') . '/cloud-zoom/cloud-zoom.1.0.2.min.js');
    drupal_add_css(drupal_get_path('module', 'cloud_zoom') . '/cloud-zoom/cloud-zoom.css');

    if (is_array($node->field_image_cache) && count($node->field_image_cache) > 0 && !empty($node->field_image_cache[0]['filepath'])){

      // Display the primary image.
      echo l(theme('imagecache', 'product_full', $node->field_image_cache[0]['filepath'], $node->field_image_cache[0]['data']['alt'], $node->field_image_cache[0]['data']['title']), imagecache_create_path('product_zoom', $node->field_image_cache[0]['filepath']), array('attributes' => array('class' => 'cloud-zoom', 'id' => 'zoom1'), 'html' => TRUE));

      // Display the gallery thumbnails.
      $num_images = count($node->field_image_cache);
      if ($num_images > 1){
        for ($i = 0; $i < $num_images; $i++){
          echo '<a class="cloud-zoom-gallery" href="' . base_path() . imagecache_create_path('product_zoom', $node->field_image_cache[$i]['filepath']) . '" rel="useZoom:\'zoom1\', smallImage:\'' . base_path() . imagecache_create_path('product_full', $node->field_image_cache[$i]['filepath']) . '\'">' . theme('imagecache', 'product_thumbnail', $node->field_image_cache[$i]['filepath']) . '</a>';
        }
      }
    }

    ?>
  6. Stylize to taste. With a bit of added coding and some CSS, you can make the final product display however you like. For example, I like to put the gallery thumbnails into an unordered list and float them beneath the main image for ease of usability. Your needs may vary.

How to Add Variations to a Drupal Theme

Recently, I did work for a few clients who needed several very similar websites launched in a single project, each using an almost identical (yet subtly different) theme. As I started configuring them on Drupal multi-site installations, it got me thinking: Is there a way to take advantage of the same sort of code reuse within a theme?

There are already options for this, of course, such as sub-themes or the Color module. In my case, however, I decided to try something a little different: I used a custom theme setting to add a CSS class to the body tag, then created the theme variations with pure CSS. Here’s how I did it.

Step One: Set Up the Advanced Theme Setting

In case you’re a Drupal themer who doesn’t know this trick, it’s a life-saver. You can configure your theme with a form to collect custom settings, then use those settings in the theme itself. I like to use this for things like phone numbers that don’t deserve their own block region but need to be configurable by the client nonetheless.

There’s a great Drupal article on advanced theme settings, which I won’t bother repeating. As far as theme variants go, all you have to do is include the following code in the theme-settings.php file of your theme folder:

<?php

function themename_settings($saved_settings){
  $defaults = array(
    'variant' => 'default'
  );
  $settings = array_merge($defaults, $saved_settings);

  $form['variant'] = array(
    '#title' => t('Variant'),
    '#type' => 'select',
    '#default_value' => $settings['variant'],
    '#options' => array(
      'default' => 'Default',
      'variant_1' => 'Variant #1',
      'variant_2' => 'Variant #2',
      'variant_3' => 'Variant #3'
    )
  );

  return $form;
}

This will create a drop-down selection menu on your theme configuration page that allows you to select the desired variant. Be sure to change the keys and values in the #options array to include the CSS class and variant names you want.

Step Two: Hook the Variant Setting into the Template Files

Now that the variant can be defined, it’s time to dynamically include it in your template files. This is accomplished with the theme_get_setting() function. Include the following code at the top of your page.tpl.php file (and any other relevant template files):

<?php
$variant = theme_get_setting('variant');
if ($variant == 'default'){
  unset($variant);
}
?>

Then, on the body tag in each template, include code to insert the variant as a CSS class:

<body<?php echo isset($variant) ? ' class="' . $variant . '"' : ''; ?>>

If you want, you can do other useful things with the $variant variable. For example, I took it a step further and created image subfolders at theme_folder/images/$variant. That way, if I had images that needed to vary, all I had to do was name the images the same and include $variant in the image src attribute.
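
As a quick sketch of that idea (logo.png is just a hypothetical example, and I’m assuming a matching images/default subfolder for the base theme), the template code might look something like this:

<?php
// Fall back to the default image folder when no variant is selected.
$image_folder = base_path() . path_to_theme() . '/images/' . (isset($variant) ? $variant : 'default');
?>
<img src="<?php echo $image_folder; ?>/logo.png" alt="Logo" />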

Step Three: Add CSS to Customize Each Variant

Once the body is being classed according to the theme variant setting, you can do whatever you like to customize each variant. Simply add CSS to your style.css file in order to tweak the theme’s appearance according to the new body class. For example, you might adjust the font face and color of each variant:

#content { color:#000; font-family:Arial; }
.variant_1 #content { color:#F00; font-family:Helvetica; }
.variant_2 #content { color:#00F; font-family:Verdana; }
…

Possible Uses

The main use of this technique is to provide a fast, easy way to create minor variations in a single Drupal theme. You might add some seasonal stylization for a holiday variant, for example. Or, as in my case, you might have a few small differences between sites using the same theme and want to keep a common code base for ease of maintenance.

Anything more than that and you’re probably better off using one of the aforementioned techniques, such as sub-theming or the Color module. It’s really just a matter of how different your variations are going to be.

Why Never to Launch a Site on Friday Afternoon

Imagine sitting in mission control as a rocket is launched into space. The countdown initiates. “10, 9, 8…” The boosters engage. The astronaut comes over the comm to confirm final checks. “7, 6, 5…” Everything is a green light. The launch crew sits on the edge of their seats. “4, 3, 2…” The moment is finally upon us, and then… quitting time. Just as the rocket is about to launch, everyone gets up from their desks and heads home for the weekend.

Sounds pretty strange, doesn’t it? Why would anyone do something so reckless? Doesn’t it make more sense to give the launch the time and attention it deserves? After all, if everyone walks away right before liftoff, they may miss a critical moment that could make or break the whole operation.

That, ladies and gentlemen, is exactly what happens when you try to launch a website on a Friday afternoon. You initiate the countdown and walk away, naively trusting that everything will go smoothly. No verification of success. No post-launch QA. You just push the button and go home for the weekend.

You’d think this would be common sense. You’d think any good web development company would know better than to do it like this. Regrettably, you’d be wrong. This has happened at every web shop I’ve worked in, not just once, but often. Clients have been allowed to say the word “Go” at the worst possible moments, thinking it’s as simple as pushing a button and letting everything magically work out.

The thing is, clients don’t know any better. They don’t do this for a living. It’s the job of their web development team to explain that launching a website is a non-trivial process that takes time and attention, that launching without a human being present to fix things when they inevitably go wrong means they’re stuck with a broken website all weekend, that unexpected glitches must be factored in, and that it’s a bad idea for their company and their brand to do otherwise. Anything less is reckless.

The alternative, of course, is a broken website that languishes for days while clients gnash their teeth, pull their hair out, and make angry phone calls at 3:00am because their brand new website isn’t working right. This hurts not only the client’s business, but the web shop’s business, too.

So the next time you’re working with a client who insists on launching late on Friday (or you happen to be that client), do everyone a favor. Stop, breathe, and ask if it can wait until Monday. 99% of the time, it can, and as I’ve said, it really, really should, for everyone’s sake.

Code HTML Email Templates by Breaking the Rules

If you’re anything like me, the first time someone told you to make an email template, you thought, “Piece of cake!” After all, emails use HTML (or the kinds that need templates do, at any rate), and HTML is a cinch, right?

As it turns out, you’d be wrong. HTML is a cinch, but the truth is, the more you know about proper HTML coding standards, the harder it is to make a functional email template. That’s because the only way to write them is to break all of those pretty rules you spent so long mastering. Here’s the breakdown:

  • Only use inline styles, if you even use them at all. External or embedded stylesheets will almost certainly be stripped out. Some email clients allow you to declare your styles in the document, but support for this is spotty at best. The only way to guarantee that your styles get parsed is to place them into style attributes on each and every element. Even then, be aware that many useful style attributes (background and position come to mind) are not widely supported.
  • Use tables for layout. As I mentioned above, email clients don’t generally support CSS positioning. The best way to ensure consistent display across clients is to use tables.
  • Exclude the HTML, head, and body tags. A lot of email clients strip them out anyway.
  • Exclude forms and JavaScript. Email clients will ruthlessly omit your dynamic functionality, and your email’s spam profile will be much higher for the attempt.
  • Don’t worry about SEO. Emails don’t get indexed, anyway, so the code can (and will) be as cluttered as you like. Make sure it can still be parsed by screen readers, of course; we wouldn’t want to turn away disabled subscribers. Just don’t concern yourself with making an email rank.

Why are the rules set this way? I blame the proliferation of spam and the lack of standards between email clients. HTML is great, but it gives spammers too much control, and the knee-jerk response is to strip out everything until you’re left with… well, this. Don’t believe me? Try testing your email template on the dozens (probably hundreds) of email clients in use. And you thought cross-browser compatibility testing was bad…

So, blasphemy though it may be to us coding purists, these are the standards you have to keep in mind when coding email templates. As a rule of thumb, just break every rule in the book and you’ll be on the right track. And if that makes you feel dirty, you can always read a good book on HTML 5 for absolution. 😉

Elsewhere: So You’ve Got a Drupal Website… Now What? (Part Two: Learning the Lingo)

I just posted So You’ve Got a Drupal Website… Now What? (Part Two: Learning the Lingo) on ClickOptimize.com:

In this series, I walk you through the basics of using your shiny new Drupal website. In part one, I explained what Drupal is and why it’s awesome. We’ll get into working with Drupal in the next section. Before we can walk the Drupal walk, however, we need to learn to talk the Drupal talk.

Read the whole article on ClickOptimize.com.