The USDA standards for processed, packaged “Salisbury steak” require a minimum content of 65% meat, of which up to 25% can be pork, except if defatted beef or pork is used, the limit is 12% combined. No more than 30% may be fat. Meat byproducts are not permitted; however, beef heart meat is allowed. Extender (bread crumbs, flour, oat flakes, etc.) content is limited to 12%, except isolated soy protein at 6.8% is considered equivalent to 12% of the others.

Wikipedia, Salisbury steak.

The Most Wonderful Way to Augment Reality

I’ve been working at building augmented reality tech for a while. It’s so math-heavy that getting the minimum viable product out the door is excruciating. So the project has stalled.

In the meantime, I’ve learned a little about lean. And I’ve come to realize that I’ve been building a solution to too many problems. Worst of all, I’ve been split between the problems of too many people. So it’s time to regroup. Figure out who matters, and what matters to them.

In the broadest sense, there are three groups in augmenting reality: the audience, the artists, and the patrons. Ultimately the driving force is the taste of the audience. We’ll only give our attention to the most beautiful and useful. But at the moment, the landscape is so sparse that the only projects seeing life are ones with good backing or a driven creator. The sponsors tend to limit their support to marketing efforts; movies and brands are hunting cool, and there isn’t much sponsorship of abstract art or niche business tools. But the budgets of these sponsors are constrained by the domain of the possible. Artists who know the field are the best guides. Given the latitude, I think they’ll prefer to make sponsored projects with the best tools they can find.

So that’s where I want to go. I want to build the best tools an artist could wield. So what I need to do now is learn what sucks most about today’s tools. I need to know how projects are being built today, how they are designed, constructed, delivered and maintained. Would the most good come from a better CAD, framework, toolkit or platform? What’s the stack of tools for making the most beautiful augments? What’s the most flexible stack for getting a working solution turned around quickly?

Deploying with Bundler and Capistrano

When I stumbled across Richard Huang’s DRY bundler in capistrano, I got excited thinking I’d learn even more about bundler’s internals and maybe even a few more tricks with using bundler and capistrano together. Unfortunately, all I really got was to use this in my deploy.rb:

require 'bundler/capistrano'

Now that will probably be enough for most people. But if you’re already using bundler 1.0 with capistrano, you probably aren’t most people.

The bundler team has put a lot of work into documenting every little part of the project, so after we’ve required bundler’s capistrano recipe, let’s grab the task explanation:

% cap --explain bundle:install
------------------------------------------------------------
cap bundle:install
------------------------------------------------------------
Install the current Bundler environment. By default, gems will be installed to
the shared/bundle path. Gems in the development and test group will not be
installed. The install command is executed with the --deployment and --quiet
flags. You can override any of these defaults by setting the variables shown
below.

  set :bundle_gemfile,      'Gemfile'
  set :bundle_dir,          fetch(:shared_path)+'/bundle'
  set :bundle_flags,        '--deployment --quiet'
  set :bundle_without,      [:development, :test]

You’re probably going to want to start playing around with bundler 1.0 on your dev machine before you deploy it on production. Since bundler 0.9 doesn’t support all the useful flags in 1.0, we need to empty a couple of these default settings to make it work:

set :bundle_dir, ''
set :bundle_flags, ''

Now you can have 1.0 on your dev machines and test that this deploys properly.

If you’re like me, you forgot that you need a few gems from your development group on your staging machine. So just set the groups you can live without:

set :bundle_without, [:test]

Of course, our production environment doesn’t actually need anything in the development group, and we use separate production and staging cap tasks to load settings for our different environments. So we just modify those to have the right bundler groups:

task :staging do
  set :bundle_without, [:test]
  # other staging specific settings, like
  # set :rails_env, 'staging'
end

task :production do
  # this is the default, but left here for reference
  set :bundle_without, [:development, :test]
  # other production specific settings, like
  # set :rails_env, 'production'
end

Once you’ve got everything tailored and working, you can update bundler on your boxes and restart the app servers. You could leave the bundle_dir and bundle_flags settings the way they are, but I strongly recommend using the --deployment flag to run in deployment mode, the --quiet flag to stop printing so much information, and a bundle directory in your shared_path instead of deployment mode’s default of vendor/bundle.

You’ll also want to start using bundle package to include all the gems you need into vendor/cache. Check that into your version control to make the deployment process even better: you won’t even have to talk to rubygems when you run bundle install during a deploy.
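Concretely, the packaging workflow might look something like this (the commit message is my own, and the cap task name assumes the production task shown above):

```shell
# cache every gem the app needs into vendor/cache
bundle package

# commit the cached gems alongside the app
git add Gemfile.lock vendor/cache
git commit -m "Cache gems for offline bundle install"

# deploy as usual; bundle install now reads from vendor/cache
# instead of talking to rubygems.org
cap production deploy
```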

If you like more technical documentation, bundler goes into even more depth in its manual pages for bundle-install(1) and bundle-package(1). They explain in depth how deployment mode works differently, how conservative updating works, how gems are cached and quite a bit more. If you find yourself wanting even more control than I’ve explained here, that’s where I would look.

Your New New Web Identity

David Recordon just let us know about a little strawman proposal he’s calling OpenID Connect. It’s not exactly perfect, but it’s a good jumping off point for the ideas that are shaping the next version of OpenID. And I love that he’s just throwing some ideas out there. It’s really in the original spirit of Request For Comments. There’s even code in there!

Most noticeably, the proposal guts most of the work in OpenID, replacing its discovery and security bits with LRDD and OAuth 2 respectively. LRDD is a rewrite of the discovery process used in OpenID 2, intended to be more modular and simpler. OAuth 2 is also a rewrite, but not of existing OpenID parts. Instead, it takes the OAuth concept and makes it easier to write clients, among many other changes. With OAuth 2, you can just use curl to get at APIs instead of having to dig into HTTP headers. Having OAuth 2 underneath OpenID should make it much easier to write clients that work with OpenIDs.

There are some other changes as well. OpenID currently lets you delegate your OpenID from any web address to any OpenID server. This is pretty much only used by early adopters who want vanity OpenIDs. That’s how I’ve got my OpenID set up today. But really it provides almost no benefit with the new disco process. And most people are using an insecure web address to delegate from, making them vulnerable to well known weaknesses in OpenID. The consensus is that all future OpenID versions will have to mandate TLS to keep things safe. If you’re enough of a power user to limit your vanity web address to HTTPS, you’ll have no trouble setting up your own OpenID Provider as well. So losing delegation is a non-issue.

But the OpenID Connect proposal has one change that I can’t talk myself into supporting. It introduces a very simple User Information API that provides the basic personal information every site needs for registering an account. The problem is that we already have the Simple Registration Extension and Attribute Exchange. Instead of reusing the format of either, it introduces another new one.

Instead of inventing another new format for the next version of OpenID, I’d much rather it have a common way to embed existing formats like hCard, vCard, FOAF and PoCo. Like YADIS so long ago, we’ve got a bunch of competing tools for the same job. So let’s build a way for them to compete instead of one more extremely specialized solution. LID and DIX may no longer be with us, but OpenID incorporated their best ideas. Maybe we could call it Yet Another Personal Information Schema System? Nah.

I figure X/JRD gets us most of the way. Webfinger demos show that we can already do this stuff by just pointing around. But what’s missing is a way to inline the data so clients don’t have to keep making requests to get standard data. I imagine it could be as simple as:

{"user_id":"https://graph.fb.me/24400320",
 "url":"http://fb.me/davidrecordon",
 "link": [
   {"rel":"http://portablecontacts.net/spec/1.0#me",
    "href":"http://poco.fb.me/davidrecordon",
    "entity":
      {"entry":
        {"id": "692",
         "displayName": "David Recordon",
         "name":
           {"familyName": "Recordon",
            "givenName": "David" },
         "emails":
           [{"value": "recordond@gmail.com",
             "type": "home",
             "primary": "true" }],
          "photos":
            [{"value": "http://pics.fb.me/davidrecordon",
              "type": "home",
              "primary": "true" }]}}},
   {"rel":"describedby",
    "type": "application/rdf+n3",
    "href":"http://foaf.fb.me/daveman692",
    "entity": "You get the idea"}]}

Specifically, this would just be an OAuth API that provides X/JRD data to authorized clients. The only extension is to add an entity element to a resource descriptor link that includes the copy of the linked resource.
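A client consuming such a descriptor would barely need any code. Here’s a minimal sketch (inline_entity is a hypothetical helper of my own, and the sample JRD is trimmed from the example above, not any real endpoint):

```ruby
require 'json'

# Hypothetical helper: given a parsed JRD hash, return the inline copy
# of the linked resource for a rel, or nil when the client has to go
# fetch the href itself.
def inline_entity(jrd, rel)
  link = (jrd['link'] || []).find { |l| l['rel'] == rel }
  link && link['entity']
end

jrd = JSON.parse(<<~JRD)
  {"user_id": "https://graph.fb.me/24400320",
   "link": [{"rel": "http://portablecontacts.net/spec/1.0#me",
             "href": "http://poco.fb.me/davidrecordon",
             "entity": {"entry": {"displayName": "David Recordon"}}}]}
JRD

inline_entity(jrd, 'http://portablecontacts.net/spec/1.0#me')
# => {"entry"=>{"displayName"=>"David Recordon"}}
```

If the entity key is missing, the helper returns nil and the client falls back to a second request against the href, exactly like webfinger works today.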

Two things complicate a solution like this. First, encoding is hard. There are four main kinds of data that people will want to embed: JSON, XML, binary data, and everything else. XML and JSON are special because they could conceivably be included inline in XRD and JRD respectively. And really, that’s the ideal, because no one likes having to pull out more than one parser just to deal with some data. I’m not entirely sure how to handle this, but formats that have both XML and JSON serializations would be very nice. So Portable Contacts gets extra points here.

Binary and other data still make trouble though. What’s the best way to signal that you’re using Base64? Is it enough to just use the type of the link? Only some implementation experience will really tell. Singpolyma has done some fancy things with data: URIs, so that may be a starting point for binary.
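One way it could shake out, purely an assumption on my part: let the link’s type be the signal, and Base64 the entity whenever the type looks binary. A sketch:

```ruby
require 'base64'

# Assumed convention, not from any spec: entities whose link type looks
# binary are carried as Base64 strings; everything else stays as-is.
BINARY_TYPES = %r{\Aimage/|\Aapplication/octet-stream\z}

def decode_entity(link)
  return link['entity'] unless link['type'] =~ BINARY_TYPES
  Base64.strict_decode64(link['entity'])
end

icon = { 'rel' => 'icon', 'type' => 'image/png',
         'entity' => Base64.strict_encode64('raw png bytes here') }
decode_entity(icon)  # => "raw png bytes here"
```

The obvious weakness is that the type list has to be maintained, which is exactly the kind of thing only implementation experience will settle.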

The other issue is that web resources are more than just data. Good HTTP servers provide all sorts of metadata about when the resource was updated, how long to cache, &c. The simplest solution would be to just include the entire HTTP response as a string, but that’s certainly more trouble than it’s worth. CloudKit handles this by tacking on etag and last_modified values. Seems like a sound blueprint to me.
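Borrowing CloudKit’s names (the values here are made up), a link entry could carry that metadata right next to the inline copy:

```json
{"rel": "describedby",
 "type": "application/rdf+n3",
 "href": "http://foaf.fb.me/daveman692",
 "etag": "\"a1b2c3\"",
 "last_modified": "Sat, 22 May 2010 17:13:24 GMT",
 "entity": "You get the idea"}
```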

If OpenID adopts this sort of solution, it’s not just good for OpenID users. Everything else that uses XRD would benefit. If you want to have standard data publicly available, you could have that on your public XRD, while more sensitive things would be available to clients you trust. And maybe you could finally start doing more with your web identity than just log in.

Walgreens:

If I penned “I’m Joseph. Thank you for allowing me to serve you today.” onto a handwritten receipt for a client, it would be super creepy. That you write it in cheap register monospace on every single receipt doesn’t make it cool. It’s just soulless, institutionalized super creepy.

Unsolved Problems

This week I’ve had a bunch of problems I haven’t finished solving. So instead of writing a handful of walkthroughs, I thought I’d write about all these issues.

For years now I’ve wanted to set up my own chat server so you can message me at joseph@josephholsten.com instead of through Google, Facebook or any of the other IM services. Recently I also found a great-looking framework for accessing your XMPP server from a website called Speeqe, and I’d like to give that a try as well. Since I just got a hand-me-down desktop machine, I thought I’d also try my hand at OpenSolaris. It has a futuristic package management system called IPS, but unfortunately there doesn’t seem to be any XMPP server available through the official Sun package repositories or any of the community managed repos. So while IPS is the most modern package system for OpenSolaris and I’d like to use it exclusively, that doesn’t seem like an option.

The options I haven’t tried are to build from the official source, use old SVR4 packages, or use a weird source based system called a consolidation. Sun maintains one consolidation called Sun Freeware which contains the trusty ejabberd. But unlike the source based package management systems I’ve used, you don’t get the option of just installing the package you want plus dependencies. So at some point I’m going to have to build a ridiculous number of projects just to get the one I want. I’ll give this vaguely official way a shot first before I try anything else.

I’m also trying to get back up to speed with the Open Stack standards I love playing with. So I put some time into updating my webfinger toolkit to work with what’s live at Google right now. I’ve got it working, but I discovered a little nit about how XML namespaces handle unprefixed attributes that surprised me. I’m investigating further, but it worries me for the compatibility of XRD parsing implementations. Once I get that clarified, I’ll be starting on a prototype of an XRD provisioning service.
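The nit itself is easy to demonstrate (a minimal sketch using Ruby’s bundled REXML; the XRD sample is my own): per the Namespaces in XML spec, an unprefixed attribute is in no namespace at all, even though its element picks up the default namespace.

```ruby
require 'rexml/document'

XRD_NS = 'http://docs.oasis-open.org/ns/xri/xrd-1.0'
doc = REXML::Document.new(%(<XRD xmlns="#{XRD_NS}"><Link rel="lrdd"/></XRD>))

link = doc.root.elements[1]  # the Link element (REXML is 1-indexed)
link.namespace               # the element inherits the XRD default namespace
link.attributes['rel']       # => "lrdd", addressed with no namespace prefix
```

So a parser that expects rel to live in the XRD namespace and one that expects it in no namespace will disagree about the same document, which is exactly the compatibility worry.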

While I was getting that webfinger project working, it occurred to me that I should be testing my code against Ruby 1.9 by now so I can post my projects on Is It Ruby 1.9?. So I gave multiruby a try, then played with rvm, but couldn’t convince 1.9 to install rubygems. Ruby says it’s got issues finding the _rb_Digest_MD5_Finish symbol in digest/md5.bundle, but it may as well be speaking Hungarian for all that means to me. After giving up, I installed the macports ruby19 package and it works flawlessly. I did manage to run nm on the bundle in both the broken and the working installs, but the symbol tables look the same. One lead I haven’t followed up on is that rvm is using ruby-1.9.1-p378 while macports uses ruby-1.9.1-p376. At some point I’ll be comparing the build process used by rvm to the macports one to see if I can divine anything else.

Instead of Quitting

  • I read this.
  • I smoked.
  • I started to cry.
  • I started to argue.
  • I poured another whiskey.
  • I followed the links.
  • I smoked another cigarette.
  • I poured another whiskey.
  • I started skype hoping my friend who goes to meetings was online.
  • I got pissed off skype took so long to start.
  • I got pissed off they weren’t on.
  • I thought about messaging them on facebook.
  • I gave up.
  • I followed more links.
  • I smoked three more cigarettes.
  • I poured another whiskey.
  • I thought this wasn’t worth posting.
  • I gave up.

[Link updated to an archived copy. -j 2014-10-12]
