Thursday, December 4, 2008

The Secret of 2.0

I've got a separate post coming specifically on Twitter and its ilk, but there was a phrase in this O'Reilly post that really hit me.

"What's different, of course, is that Twitter isn't just a protocol. It's also a database. And that's the old secret of Web 2.0, Data is the "Intel Inside". That means that they can let go of controlling the interface. The more other people build on Twitter, the better their position becomes."

I sort of knew this, but it took this phraseology for me to really get it.

Two things immediately came into my head: 1) I need to ditch TiddlyWiki and get a database-backed blogging platform (or write my own). 2) Our typical clientele fundamentally does not understand this.

I'm starting to wonder why our community doesn't take a more Google-like (or even Amazon-like) approach to their data. Aggregate it, put out some APIs, and see what people do with it. Concentrate on availability, access, and environment/platform, and get out of the way. Things like the DIB and DDMS and the rest of the standardized, top-down approach are starting to seem really heavy-handed to me.
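The "aggregate it and expose APIs" posture can be sketched in a few lines. Everything below is hypothetical (the dataset names, the `api_get` function, the record shapes are all made up for illustration); the point is that the provider owns the data and the serialization, and stays out of the interface business entirely:

```python
import json

# Hypothetical aggregated store: the "clearinghouse" holds the data;
# consumers build whatever interfaces they like on top of it.
DATASETS = {
    "sightings": [{"id": 1, "lat": 38.9, "lon": -77.0}],
    "reports":   [{"id": 7, "title": "weekly summary"}],
}

def api_get(dataset, record_id=None):
    """Read-only access: return a whole dataset, or a single record
    by id, serialized as JSON so any client or platform can consume it."""
    records = DATASETS.get(dataset, [])
    if record_id is not None:
        records = [r for r in records if r["id"] == record_id]
    return json.dumps(records)
```

The plumbing is trivial on purpose: the hard part the post is arguing for isn't the code, it's the posture of concentrating on availability and access and letting everyone downstream control their own interface.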


Jeffrey Erikson said...

Yeah, two things come to mind:

1) The O'Reilly quote is very similar to something that I think Paul Graham wrote (or is at least similar to something else I've read recently). The most successful apps these days aren't the ones the designers built out entirely themselves; they're the ones that created an API people can use to design around them. Look at what I'm doing with GMail and Tracks, for example.

2) The community . . . I think that's why QueryTree was such a disappointment to me. I basically wrote it to be an API into the data. It got turned into just another query mechanism. To give people an API means you've assumed they're smart enough to use it. I don't think the community has that kind of people (lowest bidder) or that kind of motivation (do enough to meet the SoW and save the rest for another contract).

But it's *still* a good idea :).

Mike said...

You're right ... the typical procurement cycle and the day-to-day must-get-xyz-done requirements don't lend themselves to sitting back and watching things happen organically.

The more I think about it, the more I think there should be some high-level agency that acts as the clearinghouse (the Google) and all other sub-agencies work off of their data. Again, with the focus being on availability and ubiquity. Then the downstream agencies could still let whatever contracts they wanted to build whatever capabilities they wanted.

The neat thing, though, would be that you'd be building a qualification of sorts. "Are you familiar with the XYZ community Data APIs?" would be akin to "Have you worked with Google's Data APIs or Amazon's EC2?". That way when you switched jobs, or contracts changed hands, there'd be some amount of portability of skillset.

It just seems like the problem is one of scale. Agencies keep trying to solve it at too low a level. I say let the agencies keep solving their particular work problems without having to worry about the data part.

Jeffrey Erikson said...

I'd love to see that, but, of course, as soon as you have one clearinghouse of information, you get people like the EFF bringing lawsuits. Lots of danger there in one agency holding *all* the data. It would definitely make it more efficient, but I also think it would scare a lot of people. But then, people have lots of concerns about Google holding all the information it does, too.

Jeffrey Erikson said...

Speaking of centralized data repositories . . . just launched:
Amazon Public Data Sets. Apparently, Amazon's trying to get people to upload their large data sets for public consumption.

Mike said...

Yes ... I link to that in the original post :)

Jeffrey Erikson said...

What I get for not following links ;).