Apr 17

If information is truth, then are we going to be less prudish in the future?

Tech 1 Comment »

I always enjoy it when Abe Fettig is in town, as we get some time to catch up. At one point in the conversation we were talking about how public everyone is these days. For example, this post will be archived in systems for eternity. If I say something rude about Abe, our great grandkids may know about it :)

As I think about the current political situation, and how prudish it all is … in the US at least … I wonder where the future lies.

What will happen when the presidential candidates are part of the Facebook generation and the paparazzi (at this point, every journalist it seems) will hardly have to dig hard to find some nude inhaling?

If the information is out there that everyone has skeletons of some kind (including the journalists too… back at ya) then maybe it won’t seem like such a big deal? Maybe we won’t expect our politicians to be Clark Kent? Then, if they aren’t pretending to look like Clark, they won’t fall as hard as Eliot “holier than thou, oops” boy.

Facebook now has the “groups” functionality, and I am curious how many people have taken the time to carefully pigeonhole their “friends” into these groups. For one, it is a lot harder to do so when you already have a few hundred friends than it would have been if the functionality was there from the start. It is like having to go back through all of your photos and tag them one by one. Everyone wants these groups, to separate the best buds from the work acquaintances, or just the cricket lovers from the AC/DC fans. I am curious to see how people use it, or if we are at a point where the kids let it all hang out there and don’t care.

It is one thing to not care when you are a 16-year-old rocker-wanna-be, but what about when your grandkids can Google your information? Or you run for office? We’ll see how much people like:

But Daddy, it looks like you did a huge amount of ganja, so why can’t I smoke a bowl?

Apr 16

% history meme as boring as history class?

Comic, Tech 3 Comments »

History Meme

The new rage seems to be piping this to your blog:

history | awk '{a[$2]++}END{for(i in a){print a[i] " " i}}' | sort -rn | head
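
For anyone who finds awk write-only, here is a rough Python equivalent of the tally (the sample history lines are made up for illustration):

```python
# Rough Python version of the awk one-liner: count the second column
# (the command name) of history-style lines, then print the top entries.
from collections import Counter

history_lines = [
    "  501 ls -la",
    "  502 cd /tmp",
    "  503 ls",
    "  504 git status",
]

counts = Counter(line.split()[1] for line in history_lines)
for cmd, n in counts.most_common(10):
    print(n, cmd)
```
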

It reminds me of history class at school. Always painfully boring, which is a crying shame, as history itself is fascinating. How school managed to take a subject that could be so enjoyable, and such a tie-in to all other subjects, and instead make it incredibly boring is beyond me.

The only thing that could put me to sleep faster would be:

148 ls
83 cd
29 download_saucy_mare_pictures
28 vi
24 ping
24 git
16 whois
15 wget
13 sudo
12 rm
Apr 16

Gears Future APIs: Database 2.0 API meshes with HTML5 Storage API

Gears, Tech 4 Comments »

Aaron Boodman wrote a fantastic post on Gears and Standards, a topic I am very passionate about myself.

In it he talks about the HTML5 Storage API and how we are all working together to unify the database access semantics.

You can see the Database 2 API which aims to:

  • Enable JavaScript developers to easily write code that works with both Gears and browser database APIs
  • Reduce developer “mind-print” by implementing the same API that is available in browsers
  • Support the proposed HTML5 database standard with an implementation available for all browsers that Gears supports
  • Implement an asynchronous API that can be called from the UI thread without freezing the UI
  • Implement a synchronous API to simplify usage inside workers
  • Implement a thread pool abstraction that can be used in other modules for asynchronous operations (bonus)
  • Build a new module from scratch using the new Dispatcher model (bonus)

It would allow you to write code such as:

var dbman = google.gears.factory.create('beta.databasemanager');
var db = dbman.open('pages', '',
  'Collection of crawled pages', 3000000);

function renderPageRow(row) {
  // insert page row into a table
}

function reportError(source, message) {
  // report an error
}

function renderPages() {
  db.transaction(function(tx) {
    tx.executeSql('CREATE TABLE IF NOT EXISTS Pages(title TEXT, lastUpdated INTEGER)', []);
    tx.executeSql('SELECT * FROM Pages', [], function(rs) {
      for (var i = 0; i < rs.rows.length; i++) {
        renderPageRow(rs.rows[i]);
      }
    });
  });
}

function insertPage(text, lastUpdated) {
  db.transaction(function(tx) {
    tx.executeSql('INSERT INTO Pages VALUES(?, ?)', [ text, lastUpdated ],
      function() {
        // no result returned, stub success callback
      },
      function(tx, error) {
        reportError('sql', error.message);
      });
  });
}

There is a full technical design, so you can get involved and take part in the open source / open community process that we have going on in Gears land.

I will again end with my visualization of the zipper :)

Apr 15

Yup, ads are on Twitter alright

Tech No Comments »

Twitter Ads

Some people, apparently erroneously, said that Twitter is testing ads mid-stream just like clients such as Twitterrific do.

I didn’t think this was new, as when I look at my stream I see something like the image above. Isn’t it already full of ads? :)

Twitter is the watering hole for those of us who don’t work at the same company, and we are Beacon’ing all the time.

NOTE: This isn’t a bad thing at all. And, I know that I do it all the time!

Apr 15

Consolidation in the Open Source Java Stacks?

Comic, Java, Open Source, Tech 2 Comments »

Dealing with Java CIOs

I was talking to a friend who does a lot of work in the realm of Open Source Java. He is someone who talks to people high up in the chain, and he described how a lot of the CIO folks are getting a bit confused by the offerings. They had gotten used to JBoss. And, now they get Spring. But they keep getting bombarded with more things. Next they had Mule, and Groovy.

Get some of these guys in a discussion about Mule vs. ServiceMix and they froth. Spring did something smart via Spring Integration, but maybe it is time for some consolidation? SpringSource + MuleSource + G2One? SpringyMule?

Apr 14

The future of the Mobile Web is strong

Mobile, Tech, Web Browsing 5 Comments »

Mobile Web

Russ Beattie has closed up shop for Mowser and people are rushing to declare the death of the mobile Web.

I like Russ, and was glad to see him back on the scene and blogging a storm, even if he can be a touch offensive from time to time ;)

But, just because he couldn’t find the right niche for Mowser doesn’t mean the “mobile Web” is dead before it was even born.

Take a look at what he really said:

In other words, I think anyone currently developing sites using XHTML-MP markup, no Javascript, geared towards cellular connections and two inch screens are simply wasting their time, and I’m tired of wasting my time.

I agree. Where are the mobile apps today? They are the iPhone-specific ones, and a few stripped-down versions. The mobile Web is growing strong from where I sit. I just have to look around at how my own wife uses her laptop less and less, and her mobile browser more and more.

I am so bullish about the Web on the phone that I believe it will be THE platform for building mobile applications in the future.

If you are a hardcore mobile app builder you may snortle a little. Really? Cheesy Web technology can compete with rich application frameworks? Never.

They can, and they will. I was listening to someone talking about the battle of IPX versus NetBEUI; the vicious battle between Microsoft and Novell. This person said: “If you had told me that this TCP/IP thing would beat both of us I would have laughed in your face”. Some crappy thing used in academia that doesn’t have all of the features that we do? In the battle of IPX and NetBEUI, TCP/IP won.

It will keep winning, and it will come to win in the mobile world. This is why I am excited about Gears for Mobile, and any other work that will come through in HTML 5 and browsers such as Mobile Safari.

It may take a while, but would you really bet against it? The mobile Web will just be the Web. We will have limitations of course. 3G will take a while, and the size of screens isn’t going anywhere until we have the dream of projection into your eyes and such.

Would you bet against it?

Apr 14

Keys to the Google App Engine

Comic, Google, Tech 4 Comments »

App Engine Locks

It was quite fun to see, right after Tim O’Reilly pondered the lock-in strategy for App Engine, that Chris Anderson posted that he had ported the SDK to create AppDrop.

Now, there are some concerns here as the SDK itself isn’t built for performance, security, etc…. but this is a great start in a very short period of time, and it shows where people can take it.

Waxy has a good write up:

This proof-of-concept was built in only four days and can be deployed in virtually any Linux/Unix hosting environment, showing that moving applications off Google’s servers isn’t as hard as everyone thought.

How does it work? Behind the scenes, AppDrop is simply a remote installation of the App Engine SDK, with the user authentication and identification modified to use a local silo instead of Google Accounts. As a result, any application that works with the App Engine SDK should work flawlessly on AppDrop. For example, here’s Anderson’s Fug This application running on Google App Engine and the identical code running on EC2 at AppDrop.

Of course, this simple portability comes at the cost of scalability. The App Engine SDK doesn’t use BigTable for its datastore, instead relying on a simple flat file on a single server. This means issues with performance and no scalability to speak of, but for apps with limited resource needs, something as simple as AppDrop would work fine.

Chris said: “It wouldn’t be hard for a competent hacker to add real database support. It wouldn’t be that hard to write a Python adapter to MySQL that would preserve the BigTable API. And while that wouldn’t be quite as scalable as BigTable, we’ve all seen that MySQL can take you pretty far. On top of that, you could add multiple application machines connecting to the central database, and load-balancing, and all that rigamarole.”

And for data, you can write built-in services that export your data from the store.

Apr 11

Google App Recruiting Engine

Comic, Google, Tech 1 Comment »

Google App Recruiting Engine

I have to admit that, as someone on the inside, it is nice to see the Google App Engine out there as a way to show some of the ways in which we do things, especially scale.

One big difference you will see is the lack of an RDBMS; instead, with Bigtable, you build models that let you do cool things such as Expando. Being able to add data elements as you iterate is very nice indeed, and beats SQL land, even with migrations and such.
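
As a toy sketch of the Expando idea (the real thing lives in google.appengine.ext.db; this stripped-down class and the Recipe example are purely illustrative):

```python
# Toy sketch of an Expando-style model: an entity that accepts new
# properties at assignment time, with no ALTER TABLE or migration step.
class Expando(object):
    def __init__(self, **kwargs):
        self._props = dict(kwargs)

    def __setattr__(self, name, value):
        if name.startswith('_'):
            object.__setattr__(self, name, value)
        else:
            self._props[name] = value   # any attribute becomes a stored property

    def __getattr__(self, name):
        try:
            return self._props[name]
        except KeyError:
            raise AttributeError(name)

recipe = Expando(title='Dark and Stormy')
recipe.rating = 5   # added on the fly; SQL land would want a migration here
print(recipe.title, recipe.rating)
```
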

Now when a new engineer comes to Google, they won’t have to entirely swallow a big red pill, as they may have gobbled a little of it already.

Apr 11

Reply hooks in Gmail; A case study in over-engineering

Tech 7 Comments »

Before I start, I have to get it out that the thinking in question took place at 5am. I have been enjoying time in Europe, getting to meet various developers on the On Air tour that Adobe was kind enough to have me speak at. Since I was in Europe for such a short period of time, and due to a few work matters, I ended up staying somewhat on US time. This never quite works, and I think I end up with my body clock tick-tocking somewhere over the Atlantic. If I ever had to crash land on that tiny American runway on the side of a volcano, I am sure I would sleep fantastically at 10pm.

Anyway, to the matter at hand. These emails drive me nuts:

Title: Bob Harris via Twitter to me

"Some random content in 140 characters or less"

Bob Harris / bobh

follow me at http://twitter.com/bobh
reply on the web at http://twitter.com/direct_messages/create/bobh
send me a direct message from your phone or IM: D BOBH your message here.
turn off these email notifications at: http://twitter.com/account/notifications

You get them from Facebook too (thankfully they at least include the actual content in some of them), and from many other services out there.

What is wrong with them? This is how they come across to me:

  • Hi, this is Twitter
  • I know that you are reading this in your email client
  • And here is some content to read
  • You may very well want to reply to this
  • I am going to tell you how to do so in many ways
  • But I won’t let you actually use email even though that is your context!

I got angry one night (after some dark and stormys, white russians, …) and wanted to fix it.

This led me down the path of Greasemonkey. How about greasing up the wheels like this:

if the title of the message has / from Twitter to /
  when pressing the "r" key to reply or clicking on the reply button
    open up IM with "d [get reply to]" (grab /IM: D \w+/)
      now you can put in the message

The problem is that tying into all of the actions can be a pain, and it is a little bit annoying to be running this on every email. Oh, and what about the other sites! We don’t want to have to repeat this for every service that doesn’t care about me, do we?

After all of this over-engineering it seemed obvious that I shouldn’t be lubing up the ape; Twitter should handle this for me, and thus for everyone that uses Twitter.

Instead of Twitter emailing me as <noreply@twitter.com> how about if a gentler, more Oprah-like Twitter greeted me as <pleasereplytoyourmate@twitter.com>. Then the email becomes:

Title: Bob Harris via Twitter to me

"Some random content in 140 characters or less"

Bob Harris / bobh

simply reply to this email, and the first 140 characters will be posted


follow me at http://twitter.com/bobh
reply on the web at http://twitter.com/direct_messages/create/bobh
send me a direct message from your phone or IM: D BOBH your message here.
turn off these email notifications at: http://twitter.com/account/notifications

The 140 character limit could be one of the reasons that they don’t do this. Stripping could cut the message up, but if that were the case, I would get the monkey out again and write something to help enforce the limit by letting the user know as they type too much.

Having nice extension points to email sounds pretty interesting to me too. What if Gmail added to the Greasemonkey JavaScript API so that you could attach event listeners to events such as getting new mail, pre-reply, post-reply, typing a subject, or adding contacts?

Think of the possibilities.

But, I still want services to take email seriously as an interface to them, as Dopplr and some others do. After all, doesn’t Stallman use email to browse the Web?

Apr 10

Gears Future APIs: Resumable Uploads via PUT/POST

Gears, Tech 1 Comment »

Resume Upload

I have sat watching an upload happen, with a spinning “I am doing something” indicator, wondering how long is left, and if anything is actually happening. Some sites do a better job and give you some feedback, but most still don’t.

If the Internet connection goes down and you are at 90% you cry out as you know that you will have to tell the browser to send it all up again.

This is why we want to implement Content-Range for HTTP POST and PUT:

The primary primitive necessary to enable large file uploads is the ability to specify byte ranges in POST and PUT requests. Byte ranges are already standardized for GET requests, and there are implementations of byte-range PUT used by WebDAV, but to our knowledge there has been little effort to use them for POST. Byte-range POST/PUT could be used to resume incomplete transfers, or to explicitly break transfers into smaller sized chunks. We propose standardizing byte range POST/PUT in a manner analogous to byte range GET.

The use of Content-Range headers in POST/PUT is allowed by the standard. Section 9.6 explicitly suggests the possibility of using Content-Range headers in PUTs: “The recipient of the entity MUST NOT ignore any Content-* (e.g. Content-Range) headers that it does not understand or implement and MUST return a 501 (Not Implemented) response in such cases.” However, such functionality has not been widely deployed, and therefore there exists no reference implementation or standardized semantics for how it should be used.

Use Cases

The use of Content-Range POST/PUT could be used to compose a chunked transfer protocol where a large POST/PUT was broken up into many small POST/PUTs. This would be similar to the protocols used by several current resumable uploaders. The basic algorithm for this model is to continue sending each chunk (probably in serial order) until it is successfully acknowledged. Note that if used for POSTs, some server-specific mechanism would be required to uniquely identify the set of chunks constituting a single logical transfer. If it is necessary to accommodate server-side chunk loss (e.g. due to failures partially masked by replication) some additional custom protocol components would be necessary for the server to indicate to the client which chunks it had successfully received.
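
The chunking arithmetic itself is simple. A sketch (this is not a Gears API, just the Content-Range header values such a client would send, analogous to byte-range GET):

```python
# Sketch of the byte-range chunking a resumable uploader would do.
# Each chunk carries a Content-Range header; the client resends a
# chunk until the server acknowledges it, then moves to the next.
def chunk_ranges(total_size, chunk_size):
    ranges = []
    for offset in range(0, total_size, chunk_size):
        last = min(offset + chunk_size, total_size) - 1
        ranges.append('bytes %d-%d/%d' % (offset, last, total_size))
    return ranges

for r in chunk_ranges(total_size=1000000, chunk_size=400000):
    print('Content-Range: ' + r)
```

A failed transfer can then resume from the first unacknowledged chunk instead of starting over.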

For some web applications, it may be desirable to upload a subset of a file to the server to reduce the latency between when a file is selected, and when the user can manipulate it in the application. A concrete example of this might be an image hosting website, which would like to upload the 64KB EXIF data segment from a JPEG file, so that the user can quickly view a thumbnail version of the file in the application while the rest of the image is transferred in the background.

It makes sense for this to work with the Blob API, adding:

interface Blob {
  Blob slice(int64 offset);
  Blob slice(int64 offset, int64 length);
};

Other Future APIs

Disclaimer: This is me rambling about APIs and tools that I would love to see in Gears, or the Open Web as a whole. Do you have ideas for cool Gears that make the Web better? Let us know!