Jul 17

Opening up a conversation on browser interrogation tools with a Browser Memory Tool Prototype

Mozilla, Open Source, Tech

Do you sometimes feel like the browser is a black box? We are building richer and richer applications on the Web platform and this means that developers are running up against new issues to debug and test.

We feel it is a great time to develop new tools that let you look into the runtime, to help you find a bug or keep your application as responsive as possible.

Today we want to start a conversation about some of our thinking, with the hope that you will join in.

We have been taking a hard look at the tools landscape, and here is a presentation that gives you an idea of our thinking:

We will be posting more of our thoughts, but as you can hear, our vision for these tools is that they:

  • Are able to run out-of-process. We view out-of-process tools as the preferred way to observe the runtime because it enables us to somewhat ignore the Heisenberg uncertainty principle. If we are profiling the browser, having to deal with NOT profiling the profiler code can be painful. Also, we want to be able to use the same tools on devices. I would much rather point my desktop tool at a Fennec device than try to use the tool on the phone itself! This leads us to…
  • Enable cross browser experiences: Our lab doesn’t have the resources to develop deep integrations with multiple browsers, but we very much want to enable that. Since we are running out-of-process, we can document the communication API and many hosts can then be wired up.

[Screenshot: memory tool]

The first experiment in this vein is a stand-alone memory tool prototype that lets you poke around in the JavaScript heap. What objects are there? How many of them are there? Any dangling references due to closures or event listeners?

To kick this off we worked with awesome Mozilla colleagues such as David Barron and Atul Varma, which enabled us to spike down to the bare metal of the browser.

We ended up with an architecture for the tool that contains these components:

Firefox Add-on

A special Firefox add-on installs a binary component that gives us access to the low level JavaScript heap. This gives us a simple API with methods that allow you to enable profiling, get the root objects in the heap, and get detailed information on the objects themselves.

Firefox Memory Server

The current consumer of the core API is a mini server. Once activated, the browser freezes and your only way to interact with it is via this server. It exposes a simple socket API with URLs mapping to the high level APIs. For example, you can access /gc-roots to get the root object ids, or you can ask for details on an object via /objects/XXX where XXX is the id of the object you are inspecting. When you are done, you access /quit and the browser is unfrozen.

All of these APIs support JSONP which is how we get the data back into our main application.
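
For example, a page can pull the GC roots out of the memory server with nothing more than a script tag. Here is a minimal sketch; the port and the name of the callback parameter are assumptions for illustration (check the add-on source for the real ones), and details for an individual object come back the same way via /objects/<id>:

// Hypothetical JSONP client for the memory profiler server.
// The port (8888) and the "callback" parameter name are assumptions.
function fetchGCRoots(onRoots) {
    // JSONP: expose a global callback, then point a script tag at the server
    window.handleRoots = function(rootIds) {
        onRoots(rootIds);             // e.g. [123456789, 987654321, ...]
        window.handleRoots = null;    // clean up the temporary global
    };
    var script = document.createElement("script");
    script.src = "http://localhost:8888/gc-roots?callback=handleRoots";
    document.getElementsByTagName("head")[0].appendChild(script);
}

// Usage: log how many roots the first tab has
fetchGCRoots(function(rootIds) {
    console.log("Found " + rootIds.length + " GC roots");
});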

NOTE: Currently, the server lives within the add-on itself (at chrome://jetpack/content/memory-profiler-server.html) but eventually this will be migrated to a Jetpack.

Memory Tool Ajax Application

The main application itself is a simple Web application that can be run in any browser (not just Firefox!). After you have installed the Firefox Add-on and turned on profiling via the Memory Server, you can visit the tool. Currently, after you connect, the tool gets a dump of the root object for the first tab in the browser (not including the memory server tab). You will see the metadata associated with the object, and you can click on any of the data elements that have their own memory id (memory locations are integers with 9 digits). This tree view lets you poke around the heap.

If you want to aggregate the data, you can click on the “2. Dump Heap” button, which goes through the entire heap (which can be big!) and aggregates all of the objects for you. If you see a massive number of objects of a particular type, this could be a flag!
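
Conceptually, that aggregation is just a walk over the heap dump that counts objects by type. A rough sketch, assuming the server hands back one info record per object (the field name used here is made up):

// Count how many objects of each type show up in a heap dump.
// "objectInfos" is assumed to be an array of per-object detail records;
// the "nativeClass" field name is an assumption for illustration.
function aggregateHeap(objectInfos) {
    var counts = {};
    for (var i = 0; i < objectInfos.length; i++) {
        var type = objectInfos[i].nativeClass || "unknown";
        counts[type] = (counts[type] || 0) + 1;
    }
    return counts;   // e.g. { "Function": 1200, "Object": 5400, ... }
}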

Enough talk, let's see it briefly in action:

The tool is very early stage and changing constantly. However, it is all out in the open. You can grab the open source pieces:

As is always the case with Mozilla, and Mozilla Labs, we want to get ideas out into the community as soon as possible. This tool is very much alpha, and the goal of getting it out in the wild is to start a conversation about tools like these.

What tools in this area would help your job as a Web developer? We are all ears, and would like to share our dev tools mailing list / group as a good area to share ideas.

What about Firebug? This particular tool freezes the browser, and since Firebug is in-process right now, it wasn’t a great fit. However, we very much want to take this kind of work and get it into Firebug at some stage. We just aren’t at that stage yet!

On our side, we will be engaging with the Mozillians who truly grok the JavaScript (and entire browser) internals to see what interesting data we can expose to developers. We have found that the Firefox team has already added a lot of the infrastructure there, and now the task is to work out what will be useful and how we can best report it.

We have plenty of ideas too. A wish list could contain:

  • Short term clean up (fix the backend code that interfaces with the heap, abstract the service out into a Jetpack, make sure we are using the correct APIs, and get these APIs added where appropriate)
  • We want to visually add the graph to the object dump, so you can really understand what you are looking at. It will probably look something like this:

    [Mockup: memory tool graph view]

  • We have wired up Bespin, and we will suck the source code out of functions and show it inline in the tool itself. There is much more to do though, and we want to find out what you need.
  • More profiling info: break up buckets of memory (images, JS, DOM, etc.)
  • A great way to see who is referencing whom (memory leak detection)
  • Garbage collection: when are collections occurring, and how long do they take?
  • Granular filtered profiling: Profile this event and measure every event-of-interest from the start of navigation to the present e.g. DNS & TCP connections, page header parsing, resource fetching, DOM parsing, reflow, etc.
  • Web Worker thread monitoring
  • Have a profiling mode that gives you data without having to freeze the heap, and only drop down to that level when you need to do a deep dive.

We have learned a lot as we created this prototype. Atul is going to write up his experience, and we will continue to talk in the open about how we take this prototype and your ideas to the next level with browsers.

Update

Atul has posted on his experience with SpiderMonkey and how the JS runtime works. Nice in-depth stuff. He also created PyMonkey, “a Python C extension module to expose the Mozilla SpiderMonkey engine to Python,” which is crazy cool.

Apr 09

What if you, and people like you, volunteered for the police?

British, Open Source

When thinking about Open Web evangelism my thoughts led me to some advertisements that I saw on the Tube when I was recently in London. They had content like this:

[Image: Special Constables recruitment advertisement]

You can now potentially volunteer for the Metropolitan Police. At first this seemed a little strange, but it quickly made a ton of sense to me.

I remember how I felt about the police as a kid. The bobby-on-the-beat police. The cops were part of the community and were seen as truly helping out in many ways.

Now fast forward to moving to America. Here I fear the police. If I see a police car pull someone over I get goosebumps and raised hair on my arms.

London is a huge cosmopolitan city, and with that growth it has also seen an increase in crime. If you walk past New Scotland Yard you see a chap with a machine gun. Far from the local bobby. The problem is that you lose your grip on the community around you. If you don’t know what is going on, or the vibe on the street, you have less information. You probably have fewer informants too.

So, how could the Met go about getting a closer connection to the people again? How about inviting them into the fold: open the curtain so those people can offer a valuable service and see how the force works. On the flip side, when you interact with a cop now, it could actually be your kids’ teacher! If the system works out, you feel differently towards the copper crew again and the community gets more close-knit.

I am obviously not close enough to the scene in London, and wonder how well this program will do / has done (anyone know more?), but in general, I love it. Putting people first.

Mar 25

Canvas 3D, standards, and where

Mozilla, Open Source, Tech, Web Browsing

I was excited to hear about the Canvas 3D effort that Mozilla, Google, and Khronos are engaged in (and others can too of course).

Khronos is the group behind OpenGL, and thus a good set of folks to be involved in the Canvas 3D approach, which is in the mould of “OpenGL ES-like, for the Web” in that it is a low level API that others can build on top of. Others have played with higher level “games” APIs, or virtual worlds, and this is not the same. It is a primitive that will enable people to do interesting things that sit on top.

I noted Ryan Stewart (friend and great chap) weighing in:

So it’s unfortunate to see that even the browser vendors have given up on moving the open web forward through standards. Whether it’s the WHATWG versus the W3C or the trials and tribulations of actually implementing HTML5, things are very broken and everyone is moving on regardless. I don’t blame any of them, but it doesn’t seem like it’s good for web developers.

Then, I saw John Dowdell, also of Adobe, talking about standards.

I already talked about how many of the leaps on the Web haven’t started in the W3C (and rarely start inside standards orgs first) and rather come out in browser implementations that are then shared. Think XMLHttpRequest. Think Canvas itself from Apple! Do something well, see people use it and get excited about it, and then get multiple implementations and standards. Everyone wins.

John’s wording is interesting:

“But Mozilla’s proposal relies upon further proprietary extensions to the experimental CANVAS tag”

“And you’d lose the moral fulsomeness of the ‘Web Standards for The Open Web!’ pitch when focusing on your own proprietary alternatives to existing standards.”

Look at how browsers have done some things recently. Take some of the new CSS work that Apple started. When the Mozilla community liked what they saw, and had developers demanding it, they went and implemented it too. When you see WebKit and Gecko doing this kind of work it is particularly Open because the projects are open source and you can check them out (well, if you are allowed ;) How great is that, to iterate nicely in the open… and then when ready drive it into the standards bodies.

Back to the Canvas 3D work. Having Mozilla, Google, and Khronos work on this in the open seems pretty darn good to me. This won’t be hidden behind a proprietary binary that no-one can see. There will be some work in marrying the world of OpenGL ES and JavaScript as nicely as possible, and there will be plenty of room for the jQuery/Dojo/Prototype/YUI/…. of the world to do nice abstractions on top, but this is good stuff. This is more than just throwing out an API on top of a proprietary system, and I can’t wait to see what comes of it all. Want to get involved? You can in this world.

Mar 18

Why Open Source is amazing; The story of the Quick Open Bespin feature

Ajax, Bespin, Open Source, Tech

Going from hacking away on Bespin before our launch to now watching it evolve 100% out in the open thanks to open source has been a fascinating transformation. Building a community is so much harder than hacking on code, and very different constraints appear. The old focus of “get a feature done” has changed to “make it easy to get features done”. We still have a lot of work to do on extensibility, but it has been fun to see what people have already been able to do.

We have had experienced and junior folks pick up Bespin and help out, and I am trying very hard to strengthen a tough skill… delegation. Instead of picking off some bugs, I try to document them better and explain them so anyone in the community can pick them off. Sometimes it would be easier to hack up a quick fix than to walk someone through a set of patches. However, that doesn’t meet the goal of getting people fishing away on our code and scratching their itches. Ben, myself, and our team don’t scale out to the size of developer tools groups at other huge companies, so we need to do what Mozilla does best… build an honest community. I am having a great time doing just that! The early contributors have been amazing already.

There is one recent experiment that I wanted to share. I was thinking about hacking on a key feature…. the ability to quickly search for and open files. This is the Apple-T feature in Textmate. I use it in the same way that I use Alt/Apple-Tab, or Apple-~ to move around. It is a core way in which I move around my projects. Instead of going right into code, we put together a mockup of how the feature could look:

Then, I spent a bit of time on the general design document itself to explain the feature, both from a use case / design angle, and on the “high level” coding side. I tried hard to give enough detail to explain the feature, while still allowing an implementor the ability to be creative and come up with their own ideas.

A few days later, Julian Viereck (a contributor who has already been incredibly helpful and generous with code) stepped up to the plate to say he would implement this, and in short order with the help of Kevin Dangoor building the server side infrastructure (index to search to find out the file names in this case), they had built a solid first version of the functionality.

It’s phenomenal, and I am so grateful to Julian for putting in the time to make it work so well. Not only did he write the feature, but he also created a new Thunderhead component to allow for moveable windows. Very cool indeed. Here are his thoughts on the implementation:

As described in the DesignDoc, Quickopen is a window popping up in the editor to let you choose a file you want to jump to and work on with the editor. This allows you to open another file without going to the dashboard and back again to the editor => you can stay longer in the editor and don’t have to reload the whole page just to change the current file ;) This is quite an important feature when it comes to working on a “real” project in Bespin, as you really stay on the work itself :)

To open Quickopen press ALT + O in the editor.

Quickopen: How does it work?

When the user fires up Quickopen the first time it shows a list of the currently opened files in the project (quite the same as the Open Session thing in the dashboard, but just for the current project). When typing a search key, a request is sent to the server and a result list is sent back to the user and displayed.

Kevin did the backend stuff. He says “the server is using a *really* stupid file cache to make searches zippy”. Well, it’s really zippy ;) But here is a list of things I think could be improved:

a) the search index is not updated at the moment, so once you delete a file, it is still in the search index. Trying to open this file with Quickopen will cause strange behavior. The best solution would be to keep the search index in sync with the filesystem, but, well, that’s maybe a bigger deal. For the moment it would be great to have a paver command like “paver updateSearch -u <username>”.

b) the search results also list files that cannot be opened by Bespin (e.g. image files…). I would not like to see the backend deciding by certain rules which files the user should see in this result list and which not; letting the user make that choice is the better approach. I’m, for example, not interested in the image files BUT also not interested in all these .py files and such. There should be a new user setting to make this adjustment, but at the moment I have no clue what this setting should look like. A regex, or something like “excludeFiles= *\.js|.html|.css” for excluding all files except js, html, and css files?

c) the search should remember how often the user picked a certain file from the list and put this file higher up in the result list.

Other ideas? This is work to be done on the backend. I never wrote one line of Python and have not really looked at the backend stuff, so maybe someone else should take over this part :)

But when switching between files so often, other things come up too: the editor should remember the mouse position in the files as the user jumps between them. This makes it easier to continue working on the files. For this I thought about adding a new kind of “settings” that stores data like mouse positions, window positions and such, but I cannot come up with a name for it.

What do you think?

BUT: There is even more new stuff: th.window!

When implementing the Quickopen window I was thinking: “why is there no such class in th?” Well, there it is: th.window!

th.window brings up a window in the browser, with the same border and window bar as the one used by Quickopen (well, Quickopen uses th.window already, so the Quickopen stuff in quickopen.js is a good starting point to see how to use th.window ;)). Having a th.window class was something Malte asked for, and I hope other stuff will profit from this new class as well :)

When creating a new th.window object, a new <div> with a <canvas> is inserted. The canvas is used for the th.window::scene, to which the th.WindowBar is added automatically, as well as the user panel: the place to put the things that should live within the window. Some basic functions are added to the window: you can drag it around the screen, there is a close button within the WindowBar, the window closes automatically when the user clicks outside the window (whether this is preferred in every case, I’m not sure) or presses the ESCAPE key, and there are toggle, move, and center functions. Just the basics, but a good point to build on!
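
To give a feel for it, here is a rough sketch of what driving th.window might look like, pieced together from Julian's description; the constructor options and method names here are assumptions, and quickopen.js is the real reference:

// Hypothetical usage of th.window; option names and the panel API are guesses
var win = new th.window({ title: "Quick Open" });
win.center();    // position the window in the middle of the screen
win.toggle();    // show or hide it (Quickopen binds this to ALT + O)
// win.panel is assumed to be the "user panel" described above, where your
// own components go; the WindowBar, dragging, ESCAPE handling, and
// click-outside-to-close behaviour come for free.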

Thanks so much to Julian and the other bright sparks that have made Bespin a fun project to work on. There is so much to be done, but hacking on a tool that you actually use is compelling, so I can’t wait to see more!

Mar 16

Embedding and reusing the Bespin Editor Component

Ajax, Bespin, Open Source, Tech

From the get-go, the Bespin project has meant a few different things. One of the pieces is the Bespin Editor component itself. We have already seen people taking that piece and plugging it into their own systems. For example, the XWiki integration.

The problem is that we (the Bespin team) haven’t done a good job of making this reuse as easy as it should be. That has now changed with the addition of the bespin.editor.Component class, which tries to wrap up the various parts and pieces that the editor can tie into (settings, toolbars, command lines, server and file access) so you don’t have to think about them.

A common use case will be embedding the editor itself, and having it load up some content, maybe from a container div itself.

I created a sample editor to do just this:

[Screenshot: editor component sample]

There is a video of this in action, comically in 2x speed for some reason on Vimeo :)

Since this is a sample, there are things you can do here that you probably wouldn’t in your own case.

To embed the editor component you will be able to simply do this (NOTE: We haven’t deployed this version to production yet, so for now you need to load up Bespin on your own server, sorry!):

<script src="https://bespin.mozilla.com/embed.js"></script>
 
<script>
    var _editorComponent;
 
    // Loads and configures the objects that the editor needs
    dojo.addOnLoad(function() {
        _editorComponent = new bespin.editor.Component('editor', {
            syntax: "js",
            loadfromdiv: true
        });
    });
</script>
 
<div id="editor" style="height: 300px; border: 10px solid #ddd; -moz-border-radius: 10px; -webkit-border-radius: 10px;">var foo = "whee";
 
    function flubber() {
        return "tweeble";
    }
</div>

First we read in the embed wrapper code, which relies on Dojo (so Dojo has to be loaded first).

Then we create a component, passing in the id of the HTML element to inject into, and options which in this case tell it to use JavaScript syntax highlighting and to load up the editor using the content of the div that we are injecting into.

At this point the editor is ready to go. You can focus on the puppy and start typing, but chances are you want to access the editor text at some point (for example, read from it and post it up to a form).

To mimic this, we have a textarea that we can copy the contents into (editor.getContent()), and then send it back to the editor (editor.setContent(contents)):

// Pull the current contents out of the editor and into the textarea
function copyToTextarea() {
    dojo.byId('inandout').value = _editorComponent.getContent();
}
 
// Push the textarea contents back into the editor
function copyToEditor() {
    _editorComponent.setContent(dojo.byId('inandout').value);
}

The example also shows how you can change settings for the editor via editor.set(key, value).
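
For example, to switch the highlighter after the fact (using the 'syntax' key that appears in the component options above; other setting names would need checking against the Bespin docs):

// Change a setting at runtime; 'syntax' is the same key used in the options above
_editorComponent.set('syntax', 'html');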

There are more features we should probably put into the editor, such as automatically syncing to a hidden textarea with an id that you specify (so a form can just be submitted and the backend gets the right stuff).
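
Until something like that lands, you can wire it up by hand. A minimal sketch, assuming a form and a hidden textarea with made-up ids:

<form id="codeform" action="/save" method="post" onsubmit="return syncBespin()">
    <textarea id="code" name="code" style="display: none"></textarea>
    <input type="submit" value="Save">
</form>
 
<script>
    function syncBespin() {
        // Copy the editor contents into the hidden field so the backend sees it
        dojo.byId('code').value = _editorComponent.getContent();
        return true;   // let the form submit as normal
    }
</script>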

What else do we need?

Feb 13

Launching Bespin; Feeling light as a cloud

Bespin, Mozilla, Open Source, Tech

Talk is cheap. Shipping code is important. I have always felt that to be the case, so I always look forward to the first time that I ship something. At my latest adventure I got to do that yesterday.

It was a fun ride to go from the technical challenge of “can this be done” to having the experiment out of Mozilla Labs and into the hands of the wider community. One of the reasons I am so excited to be at Mozilla is that I get to develop in the open. You can watch our source code repository, see the community on IRC, and join us in our newsgroup. Running an open project well is part art, and is incredibly hard to do well. I have deep respect for those in the open source community who have succeeded. I am really looking forward to that challenge with Bespin and beyond. I want to make sure that we are truly transparent (no hidden agendas and back channels). I want to raise the profile of anyone who contributes to the project so people really know who the people behind Bespin are. I want to try hard to get designs out there early in the process so decisions can be shared, but I want to make sure that a vision drives the project forward.

Foolish chaps and companies have come to me in the past thinking that open source will be a silver bullet for “getting other people to do our work.” Those that have been involved in open source know that it isn’t the case. It is often more work. But, it is worth it. I have no doubt that the community that we hope to grow will come up with amazing ideas and contributions. I am humbled by the contributions even a DAY after launch. I am stunned that people think our experiment is worthy of their time and thought.

The wind is at my back, but I know that announcing Bespin is the beginning and not the end. The birth of the project. Now we get to see if we can have the kid grow up.

Not sure what Bespin is? Here is some info from the announcement and more. Thanks again to all of the kind words from people across the Web. It means a lot:

Bespin proposes an open extensible web-based framework for code editing that aims to increase developer productivity, enable compelling user experiences, and promote the use of open standards.

Based upon discussions with hundreds of developers, and our own experience developing for the Open Web, we’ve come up with a proposed set of features along with some high-level goals:

  • Ease of Use — the editor experience should not be intimidating and should facilitate quickly getting straight into the code
  • Real-time Collaboration — sharing live coding sessions with colleagues should be easy and collaboratively coding with one or more partners should Just Work
  • Integrated Command-Line — tools like vi and Emacs have demonstrated the power of integrating command-lines into editors; Bespin needs one, too
  • Extensible and Self-Hosted — the interface and capabilities of Bespin should be highly extensible and easily accessible to users through Ubiquity-like commands or via the plug-in API
  • Wicked Fast — the editor is just a toy unless it stays smooth and responsive editing files of very large sizes
  • Accessible from Anywhere — the code editor should work from anywhere, and from any device, using any modern standards-compliant browser


[Presentation: Introduction to Bespin]

Credits

There have been a lot of people that we can thank for getting us out there today. Firstly, our new team of Kevin Dangoor and Joe Walker. Secondly, the great new colleagues that we have at Mozilla. Our Labs team members have been inspiring. We are building on the shoulders of great work. We are not only working closely with the Ubiquity team (Atul Varma, Aza Raskin, Jono, and others) to make sure the command line and Ubiquity are integrated, but we use Atul’s code illuminated to house the documentation for Bespin code. The Weave team has provided guidance for a future where Bespin data can be housed in their scalable infrastructure, which excites us. Whenever we chat with a Labs team we see places for integration, and we can’t wait to get there.

We care about design, and have been fortunate enough to have input from two great designers: Sean Martell and Chris Jochitz.

Other Mozilla folk have helped a lot too. You will notice that Bespin makes heavy, heavy use of canvas. Vladimir Vukićević has given far too much of his time to let us run through ideas and profile the canvas performance. We have also already seen contributions from outside of Mozilla. A few issues have been put into Bugzilla by beta testers, and even code patches (for example, thanks to Ernest Delgado for his canvas skills).

We have only just begun. We really wanted to get this tech preview out as soon as we could to embrace the community and experiment heavily. We hope to have your name in the credits soon!

Get Involved

There are many ways to get involved with the Bespin project and the Developer Tools lab. You could start by giving us feedback on the product (via comments, in our group, on irc in channel #bespin, or in Bugzilla).

Have a feature you would love as a developer? Fancy sharing a design concept? (We like those.) We would love to hear from you on all fronts, from ideas to design to code. One of the reasons that we are excited about Bespin is that it is written for the Web platform, on the Web platform. This means that your Web skills can be applied to your tool. Want a nicer syntax highlighter? Wish that we had support for a version control system that we don’t support? Wish that there was interesting Python support? Help us build it!

Bespin has been built with extensibility in mind. We want you to be able to tweak your tool. Bespin Commands are just one example. Would you like to embed the Bespin editor into your own project? We want to enable these kinds of use cases.

Jul 24

License the content that goes with the code; Google Code supports Creative Commons

Google, Open Source, Tech

As you can see, you can now pick a license for the content that goes with your open source project on Google Code.

This is a piece of news that won’t make TechMeme, but I believe it is actually a big deal (even more so than Robert Scoble blogging about blogging).

We often think of open source projects as code. We think about the licensing of that code, and how important it is. Tell a developer GPL vs. BSD and they know the general rules.

That is great, but few projects only contain code. What about the artifacts? What about documentation, and samples in articles, and screencast movies, and protocols and formats? A good project will clearly define that area too, but the open source licenses don’t fit.

On Google Code, you can now select a content license that fits your project. A small thing, but an important one, as you yet again tell all of the users and developers of the project exactly what the rules are…. explicitly.

May 05

Being Open is hard, as we have seen this week

Open Source, Tech

[Image: Open]

The last week or so has been a stark reminder of how hard it is to do “Open”, and how the term itself doesn’t mean anything. There are many shades of grey when it comes to open. Let’s take a peek at what happened, and then try to come up with some tools to help us communicate what we truly mean.

Ext JS

This one has been talked to death. At its heart, the project originally had a license that was hard for people to understand. The “LGPL unless” clauses were unclear, and many thought untenable. At best, it was very cloudy.

After some people discussed the issues with Jack, mainly in private, we saw a new Ext 2.1 release under the GPL license (no more special clause). Some were unhappy about this, mainly because it was unclear whether the team had actually said “we were wrong” and acknowledged that these folks had a right to fork the project under the LGPL. Jack quickly came out with an exceptions clause and people are trying to iron this out.

SpringSource Application Server

SpringSource announced a new product for JavaOne, an application server. The Spring Framework itself is licensed under the Apache license, yet the new application server is GPLv3.

This caused a bit of a stir, as people were unclear on the difference, and I read a few posts saying that everything had been changed (which is totally untrue).

Marc Fleury came out of the biotech wilderness to comment on this all, claiming that this is just a packaging of the old stuff with a new license to kick into gear a new business model:

So voila, we now have a box drawn around an OSGi kernel, the Spring framework and Hibernate/Tomcat, and it has a name: it’s an application server. It is the same thing you had yesterday for free, except it is now under the GPL and a proprietary subscription license.

Rod and the team consistently argued that there is actually a lot of engineering here. One little embarrassing moment was in one of the TSS replies. Both Rod and Adrian replied to a message with the same boilerplate response:

Creating an application platform that makes the benefits of OSGi available to end users was a huge investment for us. There’s a *lot* of technical innovation under the hood which won’t be immediately apparent but which enables us to make a generational leap. If we’re giving that technology away in open source, we wanted others who build on it to also give away the results in open source.

When the community saw that, they thought that this was very much canned.

Adobe SWF / FLV Binary Specs

Adobe made a big PR announcement around the opening of Flash and more.

When you come out and say “now we are Open” it gets everyone excited. Their stock bumped up, which may or may not have been related to this (a lot of stocks went up at the same time).

I was ecstatic to see the news on the wire, because I think this will be great for the Web. If we could get Flash to be part of the Open Web, I would love to see it as a win-win.

Unfortunately, when I looked into the details, there wasn’t much to see. The claim was that the FLV/SWF/F4V binary formats will be Open, and there will no longer be the restriction that said you can’t RUN the code.

The problem was that there was no license to go along with this claim, which means that we can’t actually do much with it yet. Adobe isn’t more “Open” today than it was the day before the announcement. This will hopefully change very soon when we actually see the license, and hopefully see even more.

Time to learn

What did I learn through all of this? All of this licensing lark is about clarity and communication. I actually like all of the three parties here. I consider Jack, Rod and many Spring-ers, and lots of the Adobe folk as friends.

When it comes to licensing:

  1. You need one, please
  2. Please make it a standard one
  3. Be careful which one you choose first, as changing it later will cause a lot of issues.

All of the concerns have been due to communication through licenses and on top of them. The license is a great starting point, but it isn’t enough.

So, what does it actually take to be Open?

  • 0 points: Say you are open
  • 10 points: Choose an OSI license
  • 20 points: Define the governance of the code, or the protocols / specs. If the spec gets a license that is great, but how does it get changed? Does Adobe hold all of the cards still? Can others participate? For code, who participates? Can anyone patch? Can you, and if so how do you become a committer? At the core: HOW ARE DECISIONS MADE
  • 30 points: A reference implementation under an open source license
  • 40 points: Where does the IP stand? Did you donate it to Apache or some other foundation? For an example, you can see Exhibit B: Patent Non-Assertion Covenant for the OpenSocial Foundation Proposal

Now, this can be a gradual thing. It is common to start at one end and then slowly move down the stack. It took some time to get the OpenSocial Foundation in place for example, and everyone involved is still working out the governance model.

Also, no answer is “right”. You can put code out there as open source and hold all of the cards. That is your prerogative. It is so easy for us to sit back and say “Oh come on Jack, just put it all out there!” or “Spring? GPLv3? Come on!” or “Adobe, just open source the entire Flash VM!”. These decisions have huge business ramifications, with huge potential consequences. You can understand how it is hard.

All we can really ask for is clear communication. Just be honest with us. Be clear with your intentions. The ramifications really do affect us too. I may get more involved in a project that isn’t just run by one company, where they can change things on a whim. If the purpose of using open source is more than the insurance of “if they do something I can fork it” then this stuff matters hugely. Some are in the game for insurance, but in general I think that people like to also get behind causes. They want to put energy into something they believe in. As soon as this happens your project has a part of us in it, and you need to respect that.

I am really excited to see a day though where SWF/FLV does have a clear license, thoughts on governance and how the community can get involved, and frankly, guidance from Adobe on why they are doing this. Based on that information, people will get more or less excited. Others have already reverse engineered the Flash formats, and a Flash player that lives in the wild under the full control of just Adobe means a certain kind of “Open”… one that isn’t very. I believe that over time, the need and desire will push Flash over the edge. You only have to look at where things have gone with Flex, Tamarin, and other open source projects at Adobe. Macromedia is winning, and over time Flash will surely be open source, especially as Silverlight gets better.

The Peeps Don’t Care

Finally, I know that 99% of the developers out there may not even care, let alone users. There are open source wonks who like to argue about licensing and methodologies. As I watched the John Adams HBO series, I felt a little like those fine chaps arguing over the finer details of things. Many of the people didn’t know what was going on there, or why a particular Article was written the way it was. But, they had drastic implications for the people. I think that the same can hold here for some of the projects.

We need to pat backs when they deserve it, and hold the feet to the fire a little when the details don’t match the rhetoric. I can’t wait to see a better software world continue to grow over time.

Apr 27

Ext JS: A reminder that you are not alone

JavaScript, Open Source, Tech

[Image: Alone]

Every now and then, normally when talking to a libertarian, I think about how we are actually all connected to each other. It is impossible to sandbox yourself from society which leads me to conclude that I need to embrace it and do what I can to work out what kind of society we want to be.

With the current Ext JS debacle, you get reminded of how connected your project and business are to other people. Just because you own a company doesn’t mean that you control it. When I think about my own company, Google, I realize that the most important currency is user trust. It doesn’t matter how many PhDs and great technology releases we have if we ever lose some of that trust. I think that Google has earned its reputation, but all it would take is something that goes against what we have stood for so far, and we could lose it just as fast. I actually like this fact, as it keeps us honest.

It is a little like your tennis ranking. A rolling year of past performance is what really matters here. It doesn’t matter if you won that grand slam one year and one month ago. This is why every tournament matters. A bad showing loses points.

Of course, with user trust it is a lot more nuanced, and the graph is more exponential (the longer you go back in time, the less it matters).

Anyway, enough side tracking. When you have a software project that is a library for developers, your end users are those developers. If the project is open source, then there is a clear communication of the rules through your license. This is why open source licensing is so important. It allows you to have a simple contract saying “this is what you can and can’t do”. As a developer I can see GPL, BSD, Apache, and I know right away what kind of community this is, and how I can play a role. It isn’t about one license being better or worse than another. It is about communicating rights.

If you are fortunate enough to gain a real community, where other developers are participating, then the game starts to change. Now you have people who are invested in your project, maybe building on it for you, evangelizing it, writing documentation, or creating their own business. At this point you really start to see what kind of project it is going to be, above and beyond the source code licensing. This is the Open Community side of a project. It can range from: only people who work for company X contribute in any way, to: active committers from all over the Web. This paints a picture of the project as a whole, and will have large effects on the project, including who uses it. This is all about governance.

This also comes into play in other ways. When you think of Apache, or the Dojo foundation, you know about the legal protection that comes through the process. You know that everyone has signed a CLA, and that the history of the code is clean and well known. This has a huge effect on getting large companies into the game (This is why companies like IBM and Sun are so involved in Dojo IMO).

Now that you have users of various stripes, and a community with varied roles, you also have connections throughout. If you then change the open source license for your project, the contract in the community has changed. When you make a change you not only need a good reason, but it has to be transparent, and you obviously have to get all of your ducks in a row to even be able to pull it off (e.g. depending on the change you may need every author of a line of code to get involved).

With Ext JS, there was a strange situation. The original LGPL-ish license was very confusing, which led to a confused community. Some kind of change was required, and clarity needed to be brought in. Unfortunately, it seems that the move to GPL has caused more chaos and confusion. Developers who poured a lot of time into the community (e.g. by creating GWT-Ext) are upset. The chaos can rip the community apart and you end up with a true lose-lose. Jack has spent far too much time and grey hairs on this one, instead of writing great code and growing his business.

So, it acts as a reminder that the community is all connected. Everyone may not be equal, but communication has to be incredibly clear at all times to make sure that something like this doesn’t happen.

Apr 15

Consolidation in the Open Source Java Stacks?

Comic, Java, Open Source, Tech

[Comic: Dealing with Java CIOs]

I was talking to a friend who does a lot of work in the realm of Open Source Java. He is someone who talks to people high up in the chain, and he described how a lot of the CIO folks are getting a bit confused by the offerings. They had gotten used to JBoss. And now they get Spring. But they keep getting bombarded with more things. Next they had Mule, and Groovy.

Get some of these guys in a discussion about Mule vs. ServiceMix and they froth. Spring did something smart via Spring Integration, but maybe it is time for some consolidation? SpringSource + MuleSource + G2One? SpringyMule?