Now blogging at diego's weblog. See you over there!

broken mac

apple_broken_gel.jpg

Some time after I got my Macbook, a CD got jammed in its DVD drive. Blast! As it happens, we had a refurb Macbook at the office, so to avoid spending a week or more without a machine while Apple fixed the drive, I swapped hard drives and continued using the other Mac. Things got busy at the office and my Mac sat with the jammed CD for a few weeks -- since I had a replacement, there was no reason to rush it.

Now, a few weeks ago, the Macbook started hanging. First, it wouldn't come out of sleep. Then, it started locking itself up even when waking up the display (i.e., it was connected to the adapter and so only the display turned off). It started doing this once a day. Then twice a day. Then it started to lock up mid-use, sometimes even a few minutes after rebooting.

Don't get me wrong: when I say "locked up" I don't mean something I could fix by Force Quitting whatever was dead. I am talking about a complete OS-wide lockup that left the colorful spinning thingy spiraling as if the machine was trying to hypnotize me. The only solution was to hold down the power button for 5 seconds and do a hard shutdown that way.

Yup. Not good.

A couple of times, both Safari and Firefox (not simultaneously) had gone out to lunch and for some reason were consuming 100% of CPU (usually just one of the cores, but that seemed to be enough for the machine to stop responding). That was my leading theory until lockups happened without the browsers loaded. Then I decided that the refurb had a circuit that was berserk. So we got the original Mac fixed and I swapped the drives again. Yes! Now everything would be alright. All the original parts were reunited.

No such luck. After a few hours I was back in lockup land, and if anything things were getting worse. I couldn't understand what was going on -- I use exactly the same software on my Mac Pro at the office and it's never locked up like this. I looked online and found references to lockups due to a corrupted "sleep image file", which if deleted could restore sanity.

So this morning I decided that enough was enough, and that if I was going to try anything else radical now was the time. I didn't want to try the image file thing since sometimes lockups could take hours, and I was in no mood for waiting. I backed up everything (easy process between Apple's Backup software and the fact that I've centralized my data in my home directory for years), and I reinstalled OS X.

I did a clean install, which took about 2 hours total, mostly unattended. Most of the install didn't require my attention, and installing the OS updates at the end, while still a bit of a pain, was a single-step process, compared to the multi-hour nightmare that is Windows Update right after you do a clean install of XP or even Vista.

Another big difference from Windows when reinstalling a machine is restoring the apps you use. I just went to the Applications directory, tarred each ".app" directory that I wanted, copied it off to the network server, then uncompressed each one and moved it back into place once the install was done. The whole thing took about 20 minutes (then, of course, I had to re-add all the license keys and such, but I keep good track of those). Windows would have required endless hours of switching CDs and DVDs, one after another, until your setup was complete. I know. I've done it.
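That step is trivial to script, since .app bundles are just directories. Here's a minimal sketch of the idea; the throwaway directories stand in for /Applications and the network share (both are assumptions of the sketch, so it runs anywhere):

```shell
# SRC stands in for /Applications and DEST for the network server;
# here they're throwaway directories so the sketch runs anywhere.
SRC=$(mktemp -d)
DEST=$(mktemp -d)

# Fake a .app bundle for demonstration purposes.
mkdir -p "$SRC/Demo.app/Contents"
echo "payload" > "$SRC/Demo.app/Contents/Info.plist"

# Archive every .app bundle off to the "server".
for app in "$SRC"/*.app; do
    name=$(basename "$app")
    tar -C "$SRC" -czf "$DEST/$name.tar.gz" "$name"
done

# After the clean install, unpack everything back into place.
RESTORE=$(mktemp -d)
for f in "$DEST"/*.tar.gz; do
    tar -C "$RESTORE" -xzf "$f"
done
```

Note that license keys usually live outside the bundle (typically under ~/Library), which is why they still need re-entering afterwards.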

Anyway -- the machine now appears stable -- it hasn't locked up all day. We'll see if this continues, if not, I'll try the sleep image file thingy.

I feel like I've gone through some sort of twisted rite of passage. OS reinstalls! Looking forward to the time when I have to reinstall the software on my coffee table. :-)

Categories: technology
Posted by diego on June 24, 2007 at 10:03 PM

the next blog you'll add to your feed reader...

...is Marc's.

Yup. Go read, and stop wondering what I'm talking about. :)

Categories: ning, soft.dev, technology
Posted by diego on June 3, 2007 at 9:16 AM

ubuntu server 7.04's paltry default packages

There are some basic packages that the stock install of Ubuntu Server (as of 'Feisty' 7.04) does not include. Here's the sequence of apt-get commands I ran right after the install was done:

apt-get update
apt-get upgrade
apt-get install ssh
apt-get install lynx
apt-get install links
apt-get install vim
apt-get install gcc
apt-get install make
apt-get install sun-java6-bin
apt-get install sun-java6-jdk
apt-get install subversion
apt-get install smbclient
apt-get install smbfs

The update and upgrade commands are to update apt-get's lists and then upgrade packages that were just installed from CD, respectively.
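Incidentally, apt-get takes multiple package names in one invocation, so the whole list collapses to a couple of commands (package names as of Feisty 7.04):

```shell
# Refresh the package lists and upgrade what came from the CD.
apt-get update && apt-get upgrade

# Then pull in everything missing from the stock install in one shot.
apt-get install ssh lynx links vim gcc make \
    sun-java6-bin sun-java6-jdk subversion smbclient smbfs
```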

Some of these are perhaps a bit less common -- smbfs, maybe. But vim? gcc? make? Really? Not to mention ssh: the SSH client comes pre-installed, but you have to install the server yourself.

I imagine there's some weird reason that has to do with copyrights or encryption, or the copyrights of encryption, but it's still a pain. Especially if you forget about doing it...

Categories: technology
Posted by diego on May 20, 2007 at 5:30 PM

jpc: holy emulators batman!

multios.gif

JPC is a pure Java emulation of an x86 PC with fully virtual peripherals. You can go to their site and run the applet demo, which runs FreeDOS and then lets you execute various classic PC-DOS games such as Lemmings or Prince of Persia. And it supports protected mode, so you can run Windows 95 and --gasp!-- Linux.

Because it's an emulator and not simply a hypervisor, you can run it anywhere a Java 5 or higher JVM can run.

Mindblowing.

ps: in the same vein, check out this Browser emulator which simulates the experience of older browsers within your... browser. Right.

Categories: soft.dev, technology
Posted by diego on May 17, 2007 at 2:42 PM

at javaone tomorrow!

j1logo.png

Martin, Brian, and I will be at JavaOne tomorrow presenting Building a Web Platform: Java Technology at Ning. We'll talk about the evolution of the Ning Platform over the last two and a half years, and how Java and some specific design choices have let us continually grow and expand the platform, replacing and upgrading infrastructure, without affecting users or developers.

The session is TS-6039, in Esplanade 301, at 4:10 pm, so if you're around come say hello. I'll post the slides after and talk a bit more about that and other interesting things. :)

Categories: ning, soft.dev, technology
Posted by diego on May 9, 2007 at 4:02 PM

128-bit storage: are you high?

As Reverend Lovejoy would say: "Short answer, Yes with an if.... long answer, No, with a but."

Nevertheless: great article on ZFS and storage limits (theoretical and otherwise). Recommended!

Categories: technology
Posted by diego on May 5, 2007 at 3:38 PM

yes, you can use that which you had paid for

[via Slashdot]:

Up until now, manufacturers have been wary of building a device to allow this type of usage because they've been afraid of a lawsuit. The DVD Copy Control Association had claimed this was contractually forbidden, but now a judge says otherwise [...].
Up next: the Department of Justice decides that it's ok to put a PC in the living room and connect it to a TV, solving the problem that has puzzled many a tech geek.

Radical notion, that of us being able to decide how to best make use of our own property... :-)

Categories: technology
Posted by diego on April 29, 2007 at 5:10 PM

Ubuntu Feisty on OS X and VMWare Fusion

ubuntu-feisty-on-osx-vmware-fusion-small.jpeg

So I tried running Feisty on Parallels but no dice -- it would boot but go no further. Additionally Parallels doesn't seem to know Ubuntu exists (they know about Xandros, but not Ubuntu?).

So I tried it in VMWare Fusion (Beta 3) and it worked perfectly. The VMWare Tools installed flawlessly as well. Performance is great, but then again, the machine does have two dual core Xeons. :-)

Running Ubuntu inside OS X is mostly useful if you want to test browsers, for example, or verify something platform-specific. OS X is too good a UNIX to make me miss Linux. Parallels does come in handier for XP/Vista tests, which also run pretty fast. Most of the time, though, it's OS X all the way. Right now the only disadvantage it has is the lack of an official, final version of Java 6 (which we all assume will come with Leopard...), but you can get the developer preview from Apple's Developer Connection, so API-wise, at least, you're mostly covered.

Categories: technology
Posted by diego on April 20, 2007 at 8:09 PM

Edgy -> Feisty

ubuntulogo.png
So it turns out that the upgrade to Feisty Fawn was actually fairly simple, even in a headless environment. I happened to check out the Ubuntu upgrade page and followed the steps from there.

I ran

sudo apt-get install update-manager-core

Which failed, even though I had just installed Edgy yesterday. I figured all apt-get needed was a refresh, so

sudo apt-get clean
sudo apt-get update

did the trick. After that

sudo apt-get install update-manager-core

worked fine, followed by

sudo do-release-upgrade

After that, I left it running for a couple of hours, and at the end (naturally) it rebooted, but didn't come back. Horror! All was lost!

Well, not so much, I hard-rebooted the machine and it came back happily. All is well. Pretty good!
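For reference, then, the whole Edgy-to-Feisty sequence boils down to:

```shell
# Refresh apt's lists first (this is what unwedged the initial failure).
sudo apt-get clean
sudo apt-get update

# Install the upgrade tool, then kick off the release upgrade itself.
sudo apt-get install update-manager-core
sudo do-release-upgrade
```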

Categories: technology
Posted by diego on April 19, 2007 at 9:02 PM

OS X -> Edgy

X-ubuntu.png
So during my migration of old server to new stuff, I decided to experiment with using OS X as a server environment. With Apple Remote Desktop, it wasn't half bad really (although ARD is really crappy, speed-wise, compared to Microsoft's Remote Desktop Connection, thanks to its VNC roots).

It took a while, especially since I was unfamiliar with a lot of the OSS-on-OSX subculture in the Intertron, but in time I had everything set up, including Fink, a LAMP stack, and such. Still, weirdness remained. For example, I kept fighting mysqld and its tendency to not start up on boot -- no doubt my own ignorance of some particular bit of OS X magic rather than an inherent lack in the OS or MySQL.

Anyway, so I was talking to Russ the other day and he very reasonably said that I was nuts for not simply using Debian or something of the sort, so last night I decided to give it a try. I was very impressed. In about 45 minutes I had Edgy server installed, with a full LAMP stack, MySQL properly configured, and everything migrated. I shut down, reconnected everything to replace the Mac, and I was done!

There is some sluggishness when posting right now, probably as a result of the crappy disk or perhaps not enough memory (although the machine seems perfectly happy), or maybe it's the fact that I'm using LVM. I will try a switcheroo to Feisty (which came out today) at some point next week when I have another hour to spend on this and perhaps I'll also try out a couple of different options. :)

Categories: technology
Posted by diego on April 19, 2007 at 4:10 PM

a couple of great tools

At Ning we use JIRA as our bug-tracking system (when we started a couple of years back we were originally on Bugzilla, but we switched to JIRA fairly quickly), and it's really a great tool, part of our development and release process really (I'll talk more about that in another post).

For intranet doc-keeping we have mostly been using Confluence (also from Atlassian) and it's good enough, but about a month ago I discovered Clearspace from Jive Software (the guys that wrote Wildfire, now Openfire) and we've been evaluating it since -- well, really using it. It's a fantastic product, seamlessly integrating discussion boards with a wiki, simple ways of turning a discussion into a wiki doc, group and individual internal blogs, etc. It also has some nice features, like searching inside PDFs that you've uploaded.

Confluence is a bit more wonky than Clearspace, but it still does the job if all you need is a Wiki. However, for the combination of blogs, forum, and wiki docs, Clearspace wins hands down (except in one point: Wiki syntax in Confluence is better).

Regardless, depending on your needs, all great tools. Highly recommended!

Categories: technology
Posted by diego on April 17, 2007 at 1:41 PM

mowser!!

mowser_pc.png

So, Russ has taken the wraps off his project Mowser!

It's a "mobile browser" plus a mini mobile portal, including useful links, feeds and commands (which he calls keywords) all rolled into one.

He's still polishing some details, so you should expect some minor things to be weird for a while, but it's all there!

He's been working on this for months -- so congratulations, Russ! It's awesome.

Categories: technology
Posted by diego on April 16, 2007 at 2:59 PM

and the prize for most misleading headline of the day goes to...

... News.com, with their article "Yahoo to give away email code". From the article:

The move to open up the underlying code of Yahoo Mail--used by 257 million people--is designed to spark development of thousands of new e-mail applications built not only by Yahoo engineers but by outside companies and individuals.

Hmm... now, let's see what Jeremy has to say about this:
Our Browser Based Authentication (BBAuth) is a generic mechanism that will allow users to grant 3rd party web-based applications access to their Yahoo! data. There's already a similar mechanism in place on Flickr and used by services like MOO. BBAuth is the protocol that's going to open the door to doing the same thing for many Yahoo! branded services in the coming months. Stay tuned for those announcements. :-)

The first two Yahoo! services supporting BBAuth are Yahoo! Photos (API) and Yahoo! Mail (API only available to Hack Day attendees at the moment).


Okay. So, Yahoo is opening up its authentication API, and probably other APIs, so that you can, well, do something useful with the Yahoo auth API aside from signing on. This isn't peanuts -- it's definitely interesting -- but... "giving away email code"? Please. Someone needs to talk to the News.com guys and explain the difference between "giving away code" and "exposing an API", pronto.

Categories: technology
Posted by diego on September 30, 2006 at 12:31 PM

did you know...?

... that Ning is hiring? But of course you did! Well, here's a reminder then. :) If you're looking for something, go check out our list of current openings at http://jobs.ning.com/. From Java developers/architects to QA engineers and product management, there's something for everyone!

(Did I say Java? Wasn't Ning about PHP? Well, the apps are written in PHP. But there's a ton of Java in there --some really cool stuff-- even if it's not obvious... but that's a topic for another post).

And, hey, if you don't find what you want in there, but you think you want to work with us, send us an email anyway. :)

Categories: ning, soft.dev, technology
Posted by diego on April 4, 2006 at 9:08 PM

microsoft gets it

I think it's about time we dropped the "Microsoft doesn't get web 2.0" meme. Scoble today has a post responding to Kaliya, in which she says that to Microsoft we're just "customers". Please. Everyone tries to do what is best for both their customers and their company. MySpace may "invite dudes to contribute", but trust me, News Corp. doesn't give a damn whether they are called "customers" or "enablers" or "contributors". Microsoft still has some vestiges of its past predatory behavior (particularly on pricing, which hangs at stratospheric monopoly heights for some products -- ever notice that Office and Windows are the only two Microsoft products whose prices haven't come down? Look at the stuff where MS has competition). But on the other hand they'd probably be sued by shareholders if they lowered earnings "for no reason". Anyway, that's not really at issue here.

Look at Bill Gates at Mix06, talking to Tim O'Reilly and Mike Arrington. Look at Windows Live, and the stuff they're doing with gadgets. Look at the integration in Vista. Look at Channel 9. Look at On10.net -- whatever that's about, it's certainly not the Microsoft of old. Look at what Microsofties are discussing in blogs, from Ray Ozzie, to Scoble (of course :)), to Dare, to Mini-Microsoft, to hundreds of others (Quiz: Assuming you don't work for Google, how many high-profile Google bloggers can you name vs. how many from MS? I'm not saying it's good or bad, btw, really, to each his own, I'm just pointing out the difference in, um, "engagement"). Look at Ray Ozzie's LiveClipboard stuff, which, from what I've seen, underwhelmed many, but it just blew me away. This didn't look like a Microsoft demo at all! It looked like the demo of some dingy startup, three guys just kicking cool stuff around! Is it small, perhaps a bit of a trifle given Microsoft's resources? Maybe. But damn! There it is: a screencast, done in Flash, running entirely in FireFox, using standard Internet formats. Three years ago, you'd probably have had to endure some insane ActiveX plugin and a demo of how IE and Office could do stuff together using COM or some such.

If anything, that's what tells me that Microsoft as an entity gets that it must adapt, and it gets where it should go, and it's trying, really, really hard. They've opened up the floodgates to some degree -- Microsofties are doing a lot of stuff that may not be necessarily "sanctioned" or perfectly aligned with the different BU requirements of Windows or Office. This was done by necessity rather than out of some high-minded pursuit of "innovation," but that's ok, that's how these things work.

Microsoft may not be out of the woods yet with respect to the threat that web 2.0 (and, let's not forget Google) represent to their "traditional" business. But let's give credit where it's due. They're moving.

And they didn't need an "Internet Tidal Wave" memo to get them going. So hats off to them!

PS: Btw, let's not get hung up on whether web 2.0 is hype or not. Is there some hype in there? Sure. Is there something real behind it? Yep. Does it accurately describe a market space? At this point, yes. Ok. Good. :)

Categories: technology
Posted by diego on March 20, 2006 at 6:27 PM

plugging the dns recursion hole

Via this Slashdot article I was reminded of a vulnerability in DNS configs that allow recursion, which lets the server act as an open resolver that could be used in a DDoS attack. I verified my DNS using DNS Report, and this matched what I saw in my config files -- my DNS server was open. Rogers had a post last week on the topic which outlined the steps he took and served as a quick guide, and along with this page of the BIND9 manual I had the hole plugged in a few minutes, confirmed by the DNS Report tool. Phew!
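For the record, the BIND9 fix amounts to restricting who is allowed to recurse. A sketch of the relevant named.conf options block (the address list here is hypothetical -- use whatever matches your own network):

```
options {
    // Answer recursive queries only for localhost and the LAN;
    // everyone else gets authoritative answers only.
    recursion yes;
    allow-recursion { 127.0.0.1; 192.168.1.0/24; };
};
```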

Categories: soft.dev, technology
Posted by diego on March 19, 2006 at 11:47 AM

evernote

evernote.png

One app I've been trying out for a few days now is EverNote. It's truly, really well done. It does what it's supposed to do, and it does it well. Great UI, including some cool UI concepts like their accelerating scrollbar. I've also tried OneNote, but I've been underwhelmed. Too much complexity really, for a simple task like taking notes. EverNote also has good integration with browsers, including a plugin for FireFox 1.5. If you need a good piece of note-taking software, look no further. Highly recommended.

Update: Russ tries out EverNote but is confused by the non-standard UI. Good point -- I forgot to mention my take on this. EverNote does use a different UI than we're used to, and it definitely has a bit of a learning curve. My opinion, however, is that our current UI paradigms are broken. They just don't work. So we will need to change to new ones, and pretty much anything new and more efficient will involve some sort of shift and relearning. I think EverNote pushes the envelope enough to make things better, but not so much that you have to spend hours learning what to do. That's my 2c at least. :)

Categories: technology
Posted by diego on March 17, 2006 at 9:40 PM

filebox: a quick way to share files

fileboxitbutton.png

Many many times I want to quickly send a file to someone for them to look at, and I can never remember the names of the services that let you do this. But there's Ning! :)

So my 1-hour hack for tonight was to create filebox.ning.com, which allows you to upload files and then share the link with others, and it's deleted after a few days. Basically I cloned Brian's filedrop, modified some things in the code, made the uploads private, added messaging, and made it a little easier on the eye. The power of Ning at work. :)

Categories: ning, soft.dev, technology
Posted by diego on March 11, 2006 at 12:20 AM

windows vista: me likey!

vista.jpg
For about a week now I've been using Windows Vista (CTP, Build 5308) on my machine at the office -- at home I am still running XP -- which we got through MSDN. The CTP is of the "Windows Ultimate" flavor, which includes Pro + Media Center + Tablet PC functionality. Nice. I dared to install it at the office because if that machine couldn't handle it, I wasn't sure which one could (Dual Core P4, GeForce 6800, 3 GB of RAM... you get the idea). It's an early beta (they're probably still six to nine months away from release) but it's pretty stable, save for a few wrinkles here and there and the occasional slowdown. But I'm getting ahead of myself.

The installation process is really, really good. Pop in the DVD, click "install", and wait for a couple of hours. In my case, I did that as I left the office one night (yep, perhaps I'm a bit kamikaze that way) and when I got back the next morning, the machine was booted into Vista. That was it. No clicks, no questions, it just worked.

Once I booted, the first thing I loaded was IE 7. I wanted to see how well it worked. Boy, was I impressed. Network transfers were super-fast. We huddled around it and wondered if IE was doing something funky, but whatever was going on also extended to FireFox, which loaded webpages at least 50% faster than before. I use FasterFox, so I know I wasn't dreaming. We compared the speed of loading the same pages on similarly configured Linux, Mac, and Windows machines, and Vista blew them all away -- sometimes twice as fast. WOW. I guess that the rewritten TCP/IP stack does make a difference after all.

There were a couple of problems with the install. First, I had Norton Antivirus running, and it just can't be uninstalled for some reason. I can't stop it from getting in the middle of other processes, such as downloads, either, and I haven't had the time to look for a solution. Probably a registry setting. At any rate, it's very annoying.

The really annoying feature is the new user security they put in, essentially downgrading user privileges so you have to "sudo" for some operations. What's frustrating is that my user is an Administrator, yet there is apparently some hidden Administrator user whose password I don't have, and there are folders that I can't delete. Period. I can't even look into them. That is just plainly idiotic. It's MY machine. Give me all the warnings you want, but let me access my own files, damn it! I'd bet NAV is getting confused by that too, since it plugs in at a pretty low level.

Other minor problems: the video driver Windows installed on its own was old, but getting the Vista update from nVidia fixed that. Then PowerArchiver wouldn't work at all, but WinZip did just fine. Oh, another tiny thing: when I shut the PC down, it bluescreens for just a second. I'd bet it's Norton again. But it doesn't affect anything. So it's fine. :)

Those wrinkles aside, the experience has been pretty good. Vista is fast and stable. Search has suddenly become useful. I can now search Outlook messages from the OS (and open them) faster than from within Outlook, and I didn't have to install or configure anything. Good one.

The hardware requirements are pretty steep at this point to run it properly, but I think those will come down as it gets optimized. As I understand it, build 5308 is the first one that is close to being "feature complete", so there must still be a ways to go on the optimization front.

Let's see, other cool things. The new sidebar (a copy of Konfabulator) is cool, and there's a fairly web2.0-ish site that deals with "gadgets" that apply (the site, not the gadgets) to both Windows Vista and Windows Live. The famous "Flip 3D" view is pretty cool and even (gasp!) useful. But, I searched aaaall over the place to see how to use it, and everyone talked about how cool it was but NO SITE said how. So here it goes: Windows Key + Tab. Alt-Tab is the regular switch, but with "live views" of the windows scaled down. WinKey+Tab is the Flip 3D thing. There. I said it. The secret's out (hold on for a million comments telling me where the obvious place to look was). :) You'll need a spiffy video card to use it, but it will work well if your system can handle it.

So far, then? Aside from the ludicrously bad file permissions thing, I am very impressed by Vista -- in fact, I'm sold. No WinFS, true, and it took a hell of a lot longer than it should have, but Vista is definitely going to be a good upgrade from XP, I think (damn, I sound like Scoble!). Anyway.

As the Firefly characters would say: "Shiny." :)

Categories: technology
Posted by diego on March 8, 2006 at 9:36 PM

A free VMWare?

Now that would be cool. VMWare, for its reliability, features, and simplicity, is one of my favorite software products of all time. News.com says the free server edition is just around the corner. Now what about the workstation? I'd bet that half the development world would rush to download it and start tinkering with it. Speaking of tinkering: VMWare APIs, anyone? :)

Categories: technology
Posted by diego on February 2, 2006 at 6:48 PM

opera mini: awesome

On Tuesday Russ sent me an SMS with a link to get Opera Mini as soon as it came out and I have to say I was massively impressed. This is not really a review but just a small comment -- Russ has more here. Even considering the install was a bit wonky due to the typical hoops we must jump through to install non-carrier-sanctioned stuff on a phone, and that afterwards it ends up being shoved into the "games" section (at least on my RAZR) it's still worth it. Fast and usable, finally a browser for phones that doesn't suck. If you have high-speed data services for your phone and were wondering what to do with all that paid-for bandwidth, check it out.

Categories: technology
Posted by diego on January 26, 2006 at 12:35 PM

blogs and discourse in software and politics

Salon's Scott Rosenberg muses on What journalists can learn from software developers and points to my recent exchange with Mike Arrington as an example. Thanks!

Which reminds me of what I wrote a while back in rethoric, semantics, and Microsoft and some of the stuff (such as the "rules to posting") in my introduction to weblogs.

Political discourse definitely feels a bit shrill these days. Then again, when reading the history of other times of relative upheaval (say, the late 1700s/early 1800s in US history, the Civil War, or World War II) it's striking to see that they weren't all that "nice" back then either; in fact, the early days of the republic were pretty vicious in terms of political rhetoric and even backstabbing (witness the falling out among some of the Founding Fathers). I think it's just that it was much harder for tempers to flare out of range, since the time involved in sending messages back and forth allowed for a cooling-off period -- and debates weren't watched live by millions of people at once; responses, too, were asynchronous. I wonder if it's just a matter of slowing down a bit then... :)

Categories: technology
Posted by diego on January 24, 2006 at 1:47 AM

movable type 3.2: best MT yet

I've been meaning to post this for some time now, but I keep forgetting: MT 3.2 rocks. The comment/trackback spam management is excellent, and pretty effective at stopping 99% of spam coming in. The new template management was a bit confusing at first, but all's well now (I still haven't converted everything, or even all that much, and only this morning I fixed search, which was broken due to misconfiguration). I remember the conversion to 3.2 being mostly uneventful except for a few minor hiccups. All in all, a must-have upgrade. :)

Categories: technology
Posted by diego on January 23, 2006 at 3:23 PM

quote of the day

10:40 Outside, a guy talking into his phone: "Well Steve Jobs is a fucking Jedi Master of this shit compared to these other clowns." from Engadget's coverage of Yahoo's CES Keynote this morning.

Categories: technology
Posted by diego on January 6, 2006 at 3:01 PM

movabletype 3.2

I recently switched over to MT 3.2 (which came out in August, I believe) and it's been an improvement on a number of fronts. In particular, spam management for both comments and trackbacks has become way easier -- not painless, but less of a pain. There are some new features in it that I haven't had time to explore (the template setup is different), and, for example, search within the blog is now broken and I'm not sure why (an "alternate template" is missing). I guess I'll have to do some digging this weekend.

Categories: technology
Posted by diego on November 25, 2005 at 10:38 AM

xbox 360: a review

360logo.PNG

While I am indeed pretty busy, that doesn't prevent me from, say, sleeping even less and every once in a while using that "extra" time to play games on an original xbox. Initially I wasn't going to get an x360, but it so happened that a nearby Best Buy decided to allow preorders, and off we went last Friday.

booting up
By now I've got a fairly good handle on the dozen or so different ways to connect A/V systems (composite, component, S-Video, DVI, optical or non-optical sound input, and all the variants they create), and more assorted lengths of wire than Professor Farnsworth. So once I got home, connecting the system was fairly painless. On first boot things looked pretty good, although the image quality was a little off, even with HD settings (my TV supports 480p and 1080i) -- but more on that later.
xbox360-small.JPG
Before that: oh, the power brick. It feels like a brick, both in size and weight -- probably a third of the volume of the xbox itself (on the left in the picture). It is completely anachronistic next to the sleek design of the rest of the system, but so be it. Power has to come from somewhere, no?

The controllers themselves include a few crucial improvements over the original xbox controllers, most notably that the option buttons that used to sit to the left and right of the joysticks are now better distributed: Start is in the center, and the white/black option buttons are now on the front, above the triggers. Much better. The remote control (included as a bonus right now in the pro configuration) is also pretty good, although for some strange, unfathomable reason the remote is infrared whereas the controller is wireless.

Simple things, like being able to turn on the xbox from the controllers or the remote, are actually big improvements. The dashboard can be accessed easily from within a game, navigation is easier, etc. The network setup was buried behind several screens, but I was glad to see that it supports both WEP and WPA, WPA-PSK in particular (although they call it "WPA password" for some reason).

Overall, the initial setup experience was fairly painless, but I think non-techies may find it confusing, particularly the wireless setup options, and especially if their secure wireless was set up by someone else.

that media center thing

Configuring the 360 to stream my PC's music collection took about 10 minutes, although a couple of false starts were involved (the PC not finding the xbox, then the xbox not finding the PC). I think people without PC experience will probably find the process fairly confusing, but in my case it worked out well. The XP-based application is easy to install and configure (at least by MS standards -- again, having to dive into the control panel to change settings is something that will confuse a lot of people, methinks). So suddenly I have a single repository for my music and photos -- not bad!

Video support seems to be limited to Windows Media Center (I have XP Pro), but I don't have high hopes for that; I assume they support MPEG-2 and WMV, which are not the formats I usually store video in anyway. Strangely enough, there's a section for "movies", but it contains a few built-in trailers that I can't seem to be able to get rid of. Given that the x360 comes from the factory with only 10 gigs of free disk space, that's pretty bad. If they hadn't included PC integration, no one would use it for media playback -- there's just not enough space there to do anything useful.

But I digress. The integration, once set up, is pretty seamless, although I'm so used to the ipod's waterfall menu mechanism/navigation that the linear behavior on the xbox felt somewhat clunky. Regardless, it was really cool to get more juice out of the music on the PC as a bonus (you could do that with the original xbox, but you needed another CD and I never took the time to set it up--or you needed a modded box).

the games

Holy moly, the difference in game experience with HD and surround is amazing, particularly with a large screen. Project Gotham Racing 3 is fantastic ... once you actually get HD set up. I mentioned above that I configured the box for HD/widescreen within the dashboard, but there was another (annoying) step to get through: flipping a tiny switch on the component video output cable from "TV" to "HDTV". If you set things up while naturally not reading the manual, you miss this crucial step, and things look good -- but not that good. Anyway, with the switch in its proper position, the image was just gorgeous. For once, the game trailers really do reflect the astonishing quality of the graphics. The game's menu navigation is actually more confusing than PGR2's, but they've made some small improvements that make gameplay better.

As for first-person shooters: Call of Duty 2 is excellent, but it suffers from lack of a cooperative multiplayer mode. After having tried Ghost Recon 2: Summit Strike on the original xbox, I'm much more interested now in those kinds of games than in standard deathmatch (although that's definitely fun :)). Quake 4 is good, but nothing shockingly new: think Doom to the nth power. Then there's Perfect Dark Zero, which has some interesting new takes on the FPS genre, and looks great as well.

The surprise for me was the "Arcade" game category in the xbox live section: you can access lists of games and download trials or buy full versions of classics with "microsoft points" (on this topic, btw, I agree with Russ: I don't like them for multiple reasons, but I don't think they'll get rid of it anytime soon).

backwards compatibility

The x360 runs a good number of original xbox games, but not nearly enough. When they do run, however, it's pretty amazing. Halo 2, for instance, looks better and runs perfectly well. This is definitely an achievement considering that the original xbox ran on a P3/Celeron while the x360 runs a multicore PowerPC. There definitely seems to be some fairly complex emulation code in there, assuming that what they're doing is indeed pure or mixed emulation.

final thoughts

Overall, the x360 is really, really well done and the games are excellent. Microsoft has definitely done a good job with it, wrinkles notwithstanding. There's been some discussion, for example on this slashdot story (and then echoed all over the place), on how the 360 is "very unstable." If shreds of anecdotal evidence are all we're going with, I know of 5 xbox 360s, 3 pro and 2 core systems, all of which run without a glitch. It may very well be a heating problem. That would not be surprising considering that this little box packs as much processing power as a modern PC (and look at the fans and heat exchangers on those things).

One thing that does bother me a bit is that the system seems to broadcast a lot of information about what you're doing and what you've done to others (just entering an xbox live game once seems to grab that list of people forever, and then you can see their status), with no obvious way of turning that off. I assume privacy settings are in there somewhere; I'll have to look for them later.

Anyway, given the price and supply issues it's certainly not a device for the masses yet, but it's definitely something worth trying if you can. The next big game to be released will be IMO Ghost Recon: Advanced Warfighter. But that's in February... in the meantime, there's a bunch of cool stuff to try out :).

Categories: technology
Posted by diego on November 23, 2005 at 10:37 PM

grokster shuts down

Grokster has shut down as a result of the Supreme Court decision a few months ago. The website now reads:

" There are legal services for downloading music and movies. This service is not one of them.

Grokster hopes to have a safe and legal service available soon."

Right. Because that strategy worked great with Napster, didn't it? Even though the new, "safe and legal" Napster continued limping along, its user base was completely gone, and so it will be with Grokster. I honestly cannot understand how the RIAA and others think this plays out in the end. You can't uninvent technology, and all the lawsuits accomplish is to push it further underground... and through these aggressive moves they only stifle investment and research on the topic, which is really the only way a happy balance can be found.

There's no way out, there's only a way through.

Categories: technology
Posted by diego on November 7, 2005 at 12:25 PM

synergy: wow

Here's something really cool: synergy. A GPL'ed tool to share mouse and keyboard across different systems with Windows, Linux, or OS X. A virtual KVM, if you will. You need two monitors, and two machines. But if you've got that, this tool is a must-have. Most excellent.
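As a rough sketch of how little configuration is involved: synergy reads a plain-text config file describing which screen sits where. Assuming two machines named "desktop" and "laptop" (the names here are made up for illustration), a minimal synergy.conf might look something like this:

```
section: screens
    desktop:
    laptop:
end

section: links
    desktop:
        right = laptop
    laptop:
        left = desktop
end
```

You'd then run the server (synergys) on the machine that owns the keyboard and mouse and the client (synergyc) on the other, and the pointer slides between screens at the edge you declared.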

Categories: technology
Posted by diego on October 13, 2005 at 9:25 AM

@ foo!

Got to Foo Camp relatively late, around 8:30 pm, after picking up Drew Endy in SF (he was at a conference there, and since I was on my way north, I gave him a ride up to Foo Camp). Drew and those in his lab at MIT are doing mind-blowing stuff in trying to figure out how to code DNA to make it do what they want. "DNA on Rails". So even though we got here late, that unique part of Foo had already started, at least for me (maybe not so much for Drew, since I basically pestered him with questions the whole time). I was thinking that what I'd like to do from the top down (make software behave more like organisms) is what they're working on from the bottom up (figure out how to code to the software that organisms already have). From one end, you take the black box and try to figure out how to make it do what you want. From the other, you try to emulate the external behavior of the black box and make other things do what the black box does. Both are way too interesting to do them justice at midnight after an exhausting day. :)

I did miss Tim's kickoff presentation, which was incredibly interesting when I saw it at EuroFoo last year.

All rooms have been taken and I couldn't get a hotel, so I'll be sleeping in the car. Since I won't be sleeping much, I don't anticipate it will be too bad. :)

Categories: technology
Posted by diego on August 19, 2005 at 11:35 PM

how to install comcast high speed internet: a quick guide

I would generally not post about something as specific as this, but after seeing the vast amounts of confusion and misinformation out there on the topic, I felt that my 2c would be worthwhile in this case.

Today I got cable along with Internet service from Comcast, and one of the potential problems I wondered about was the setup. I've got a Linksys WRT54G for both wired and wireless routing. Most of the comments out there described a process that usually involved first connecting the computer directly to the cable modem, then registering it, then disconnecting it, then connecting the router configured to spoof the MAC address of the computer.

However, this didn't seem to make a lot of sense. While the MAC address of the cable modem does need to be registered with Comcast, it's less obvious why they would need the MAC address of the PC itself. Maybe at some point it was necessary, but as I discovered today, that's no longer the case. (As a sidenote, the cable guy who did the install of the line also said that the "PC first, router later" process was a requirement... so maybe that's how this got started.)

To make sure this remains a quick guide, here's what I did (as far as I can tell the procedure would be the same anywhere in the US for a Comcast high speed Internet install):

  • First, I got my own cable modem, a Linksys BEFCMU10; since the price was pretty much equivalent to a year of renting whatever Comcast was going to give me, it seemed like a good investment. I don't think this makes a difference, though.
  • Once the line was installed and running, I connected the coaxial cable to the modem, connected the modem via ethernet to the WRT54G's WAN port and the PC to one of the WRT54G's ethernet ports, then powered up both the cable modem and the router. (The router must be set to DHCP mode.)
  • After a bit of waiting, the router obtained a dynamic IP and all seemed well. Loading any page in a browser on the computer redirected me to a Comcast page with a link to download the software that comes with the Comcast self-install kit. I didn't even need to insert a CD!
  • Downloaded the software, then ran it. It turns out it's basically a specialized IE window making HTTPS requests to Comcast in the background. This is where they actually activate routing for your cable modem, but at the same time they assume the PC is connected directly to it, so they seem to reconfigure the modem to talk to the PC's network card (which isn't connected directly to it). When the registration and configuration are done, they reboot the modem automatically, at which point nothing works.
  • Aha! The reboot seemed to be key.
  • So, I did a hard restart on both the cable modem and the router (first disconnected the router, then reset the modem, then reset the router, then reconnected the router), at which point the cable modem seemed to find the router agreeable once more.
  • And that's it!
From what I gather, the modem does attach itself to one MAC address, but that association is reset when it reboots, so spoofing the MAC address (which sounded like a weird requirement in the first place) doesn't appear to be necessary. The MAC address of the cable modem itself is registered with Comcast, though, so you'll need to let them know if you get it replaced.
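A quick way to tell from code whether the modem has been activated yet is the same signal the browser gives you: ask for an ordinary page and see whether you end up on a different host (the activation walled garden). A minimal sketch -- the portal hostname below is invented for illustration, not Comcast's actual redirect target:

```python
from urllib.parse import urlparse

def looks_like_walled_garden(requested_url, final_url):
    """True if the request ended up on a different host than the one
    asked for -- during self-install that usually means the provider's
    activation portal intercepted it."""
    return urlparse(requested_url).hostname != urlparse(final_url).hostname

# Asked for example.com, landed on a (hypothetical) activation page:
print(looks_like_walled_garden("http://example.com/",
                               "https://activate.example-isp.net/setup"))  # -> True
```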

After all that, speeds are pretty great -- 3 Mbps down, 768 Kbps up (or more) -- and it's working well so far, the keyword being so far, since there were some signal strength issues that made installation more tedious than it should have been, and those gremlins have a way of showing up again...

PS: I also got HDTV with Dolby Digital. Not that many channels, but one word: Wow.

Categories: technology
Posted by diego on August 16, 2005 at 11:47 PM

back to PST!

I just realized that the weblog was still "on" GMT (where I have spent most of the last 4 years) but now it's time to switch that, too. And done!

Sure, that was easy. But now I wonder. MT seemed intent on rebuilding all entries based on the new target timezone, but that was just appearances. It offered a rebuild but did nothing of the sort. No changes. All the dates that were wrong are still wrong. I wonder if there's a button to push somewhere. Has the timezone info been lost?

And what's the right etiquette? Plus, the timezone itself contains some interesting information (for me at least), but only if it's accurate. Accuracy there would be hard to guarantee, though; the browser can obtain some of that information from the OS, but even that isn't always accurate... Hm. Now if a GPS was permanently connected, feeding data into the system seamlessly...

Categories: technology
Posted by diego on August 15, 2005 at 11:42 PM

quote of the day

"What I really need is minions." -- Russ.

Categories: technology
Posted by diego on August 14, 2005 at 7:39 PM

geography, software, and context

While I've been sort of away from blogging (and keeping up with blogs as well, I'm now beginning to catch up) as always I've spent time running around looking at the new apps and ideas that are thrown out there all the time. Of these, I've spent a fair amount of time thinking about mapping applications, for one reason: they puzzled me.

All that follows is probably obvious to many people, but hey, I won't mind arriving late at the party all that much, and will just be happy I showed up at all. :)

The puzzle was the sudden explosion of use, and why it had blindsided me -- blindsided in the sense that I didn't expect these apps to be so popular. I've been looking at geolocation apps for a while now (the wireless ad hoc research community, where I spent some time during my thesis research, has been babbling about geoloc for quite a while). But it had never caught my attention all that much. The blinders that come from focusing on one topic are partially responsible, but on the other hand I found it hard to see them as more than niche applications. GPS: useful if you're traveling, but most of us don't spend our lives in the car or train, and we usually develop enough knowledge of our transportation paths that a GPS quickly becomes redundant once a route is coupled with habit. This is less true in the US, given the higher mobility and greater distances between places, but you still can't see geolocation as more than something you'd use occasionally at best. And so on.

Geolocation is cool, I thought, but it's not massive, in the way an RJ-11 plug is massive or the Internet is massive. True, there is value in niches, and we could start yet another socratic discourse on the long tail, but...

When I started thinking about this, I began by questioning whether these apps were actually niche apps. In the end I realized that they are niche apps, but not just any kind: they connect meatspace with cyberspace.

That is, they provide context. Many people (me included) enjoy the geography of the virtual. We can see the towering structures, or as Gibson put it in Neuromancer, "lines of light ranged in the non-space of the mind, clusters and constellations of data. Like city lights, receding..." But that's only one half of the story.

If there's anything truly unique to pervasive computing, sensor networks, clothing with embedded circuitry, and a million other things that are going on right now, it is that they seek to provide a seamless bridge between the real and the virtual. Not a one-way connection mind you, but bidirectional. The ability to interlock devices with what's around us in a way that creates something that neither alone can provide. Example: object geotagging. Leave a marker using a webservice, automatically labeled with your Long/Lat, and others that walk by can find it. Have your clothes keep track of your movements and of who you meet, and let your shirt buzz softly against your skin when it realizes that you have an appointment in 30 minutes across town and you are still at a cafe, sitting still, with someone you know nearby ("someone you know" being defined in quick and dirty fashion as the same marker showing up in your travel history for the last 6 months, which makes it likely you won't just get up and leave that easily).
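The geotagging example could be sketched as a tiny marker store: drop a label at a long/lat, and let anyone who walks by query for it. Everything below is invented for illustration -- a real service would want a spatial index and proper great-circle distances rather than this flat-earth approximation:

```python
import math

class MarkerStore:
    """Toy geotagging service: drop labeled markers, query by proximity."""
    def __init__(self):
        self.markers = []  # list of (label, lat, lon)

    def drop(self, label, lat, lon):
        self.markers.append((label, lat, lon))

    def nearby(self, lat, lon, radius_km):
        found = []
        for label, mlat, mlon in self.markers:
            # equirectangular approximation; fine at city scale
            dx = math.radians(mlon - lon) * math.cos(math.radians(lat))
            dy = math.radians(mlat - lat)
            dist_km = 6371 * math.hypot(dx, dy)
            if dist_km <= radius_km:
                found.append(label)
        return found

store = MarkerStore()
store.drop("good coffee here", 37.7749, -122.4194)       # downtown SF
store.drop("sebastopol fairgrounds", 38.4021, -122.8239)  # ~70 km north
print(store.nearby(37.7755, -122.4190, radius_km=1))  # -> ['good coffee here']
```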

It's all about context, and it's not just geography -- geography is just what we've suddenly got a critical mass of data on, and the APIs to go with it. Geolocation is the tip of the iceberg. In itself it's interesting, yes, and useful to varying degrees depending on your habits. But it's far more interesting for what it portends: finally, the emergence of real-world applications of context to software, and of software to the real world.

Categories: technology
Posted by diego on August 7, 2005 at 5:09 PM

the age of the mix

William Gibson, writing for Wired: God's Little Toys:

[...] I already knew that word processing was another of God's little toys, and that the scissors and paste pot were always there for me, on the desktop of my Apple IIc. Burroughs' methods, which had also worked for Picasso, Duchamp, and Godard, were built into the technology through which I now composed my own narratives. Everything I wrote, I believed instinctively, was to some extent collage. Meaning, ultimately, seemed a matter of adjacent data.
Yep. Technology is freeing creativity in part because it makes the collage technique much easier. What we create (write, compose, paint, even code) obviously has some precedent in our lives--one way or another. Woody Allen does collage with his own life, and what we usually call a reference in literature walks a thin multidimensional line between collage, homage and inspiration.

My favorite paragraph from the article, however, is this one:

Today, an endless, recombinant, and fundamentally social process generates countless hours of creative product (another antique term?). To say that this poses a threat to the record industry is simply comic. The record industry, though it may not know it yet, has gone the way of the record. Instead, the recombinant (the bootleg, the remix, the mash-up) has become the characteristic pivot at the turn of our two centuries.
Concise, yet encompassing. Pure Gibson.

The age of the mix is here.

Categories: technology
Posted by diego on July 7, 2005 at 10:38 AM

nvu

nvu.png

An open source website editor: nvu now in 1.0. Pretty good CSS integration (a bit clunky, but useful) and straightforward UI. Plus, it's available on Linux, OS X, and Windows. Nice.

Categories: technology
Posted by diego on July 5, 2005 at 10:07 AM

lemmings!

[via Dare] DHTML Lemmings. Perhaps predictably, there are legal questions surrounding it (even though no one has bought that game in years -- you'd think the owner of the copyright would be rushing to buy this thing and put it online...). It's really, really cool. Go check it out.

Categories: technology
Posted by diego on June 24, 2005 at 3:09 PM

the network is the disk drive

Sean McGrath: "XML is not - repeat NOT - a 'file format' in the sense that most people use the phrase 'file format'."

Here's his article. Of course he's right. XML is a specification for formats, not a file format itself. It has high-level semantics for defining specific semantics.

But, in any case, when so much of the data we consume flows across the network without ever settling in a well-defined filesystem location, the idea of a "file" stops being so important methinks. Even on the desktop side we are thinking more and more in terms of pieces of information, emails, IMs, webpages--not files.

I am reminded of a short-term backup system someone once described to me: every few hours a backup would be created, then sent out through SMTP to the company's subsidiary in Australia, where another program would pick it up and, through direct streaming, send it back. At any point in time there were a number of copies flowing through the system -- and if something happened and you needed a certain copy for a certain date, all you had to do was wait for the right piece of data to circle back to you, and you were done (there's a case where a super-fast network would actually be a liability). The same could easily be done today with a couple of daemons and webservices running on top of apache. Rather than Sun's "The network is the computer", it would be "The network is the disk drive".
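That circulating-backup scheme is easy to caricature in code: model each hop as a slot in a ring, keep the copies moving, and make "restore" mean waiting for the right copy to come around again. A toy simulation (no SMTP or Australia involved; all names are made up):

```python
from collections import deque

class CirculatingBackup:
    """Toy model of 'the network is the disk drive': backups circle a
    ring of hops, and retrieval means waiting for a copy to come around."""
    def __init__(self, hops):
        self.ring = deque([None] * hops)  # one slot per hop in transit

    def send(self, payload):
        self.ring[0] = payload  # inject a new backup at our hop

    def tick(self):
        self.ring.rotate(1)  # every hop forwards its copy to the next

    def retrieve(self):
        # wait (tick) until a copy arrives back at our hop
        for _ in range(len(self.ring)):
            if self.ring[0] is not None:
                return self.ring[0]
            self.tick()
        return None

net = CirculatingBackup(hops=4)
net.send("backup-2005-06-23.tgz")
net.tick(); net.tick()       # the copy is somewhere out on the network
print(net.retrieve())        # -> backup-2005-06-23.tgz
```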

I forget what my point was.

Ah yes: "A keyboard! How quaint!" (That's Scotty speaking for all out there not fully up to date on your Star Trek arcana).

Let us all put the quaint notion of files and formats behind us and think about where that may take us...

Categories: technology
Posted by diego on June 23, 2005 at 1:09 PM

msr's "bittorrent killer"

Kevin Schofield, on the recent brouhaha surrounding Avalanche. Quote:

Um, let me get this straight. In six days, a research project went from some algorithms in a paper to Microsoft's competitive answer to BitTorrent, to "vaporware" to an evil conspiracy.
MS/MSR is getting a bad rap on this one. They've been working on p2p for some time now; it's just that they've never gotten attention for it. Pastry, for example, is a well-known overlay network project (at least within the p2p research clique) that has been around for more than two years. And their stuff is pretty good, too.

Now that I'm getting an education on certain effects of sensationalist press coverage, I can't say I'm that surprised. At least weblogs help in getting the other side of the story out there...

Categories: technology
Posted by diego on June 23, 2005 at 12:26 PM

anne's consulting practice

Anne now has a new website for her consulting practice. Ethnographer, anthropologist, ui-social-network-wireless-pervasive-computing-thinker-designer (yeah, buzzwordy, but true), all in one. When she's done with your project, she can also explain the social structure of the Incas, who apparently didn't have computers or cellphones, or wear Nikes. My theory is that this is why they couldn't stop the Conquistadores, but she says I'm wrong for some reason I can't quite follow.

Anyway, if you need someone to analyze and improve whatever cutting-edge product you're cooking, she's your woman!

Categories: technology
Posted by diego on June 23, 2005 at 11:52 AM

the AlwaysOn/Technorati 100

Congrats to Russ on his appearance in the AlwaysOn/Technorati 100 list in the 'practitioners' category! Also to "the Scotts" (Feedster), Om, Jon, Rael, Jeremy, Robert, Matt and... well, everyone else on the list -- I'll stop before I just duplicate a list that seems to strangely match my aggregator subscription.

Russ comes first, yes. He's a good friend. Plus I want to be in his good graces for the next Halo match. :)

Categories: technology
Posted by diego on June 22, 2005 at 5:31 PM

24 hour laundry: the view from inside

Well, well, well. :)

There's been a lot of discussion recently about a certain new startup called 24 Hour Laundry. It pretty much got started with this CNET article, then as highlights we've got Om, Mark. Even (perhaps predictably) Slashdot.

24HL, as it happens, is where I work. Remember this?

Yep. It's true. Aaaaaall this time and I didn't say anything. Outrageous! How could I?

Well, that's kinda the point.

You see, we didn't want to make any noise. CNET decided that they wanted to "scoop" a story that didn't exist (and is still not all that exciting at this point). We didn't have anything to do with that article.

Then, in the process of not asking for any press and minding our own business, we get branded a certain way, and told we are doing something wrong by focusing on our product.

What is confusing to me is that some of the comments out there begin with "Well, I don't know what they're doing but [insert your thought about why it's wrong here]".

It is one thing to speculate (which we all do a lot of, don't we) and draw tentative conclusions based on that, but it's another to take those assumptions and then categorically "paint a picture". I know: to a certain degree, these are the rules of the game. But there is a difference between saying "If X is doing W, then here are the problems I see" and saying "X appears to be doing W. They're crazy!" This was partially Russ's point with his great post yesterday. (Update 6/23: Jeff Clavier also makes good points on the topic).

For example, Mark Fletcher said:

[...] But creating a new web service is not rocket science and does not take a lot of time or money. My rule of thumb is that it should take no more than 3 months to go from conception to launch of a new web service. And that's being generous. I'm speaking from experience here. I developed the first version of ONEList over a period of 3 months, and that was while working a full-time job. I developed the first version of Bloglines in 3 months.

In other words: "whatever it is you're doing, you should be able to do it in three months."

Ah, those pesky generalizations--but this is actually an interesting point to bring up. Last year, it took me about 3 months to write the first version of clevercactus share, which didn't just include a website/webservice, but also an identity server, a relay server (to circumvent firewalls) as well as a peer to peer client app that ran on Windows, Mac and Linux.

One person, three months. Webservice, servers, clients, deployment systems, UI/design, architecture, code, even support.

Which proves... absolutely nothing.

You have to fit the strategy to the company and not the other way around. In our case, we're doing something a little different (not better, just different) than the next web service, so we're just trying to keep our heads down until we have something that makes sense.

Of course we want to release as quickly as we can. Of course we know that when we launch there will be dozens of features we wanted to add but didn't have time for. Of course we keep in mind that we can't release a "perfect" product.

We absolutely want to involve users in the product's eventual evolution. We just want to make sure that we have a few things figured out before we start sending out press releases to announce our video-blogging social scooter company.

We appreciate the patience, and the interest (even if in some cases it's a bit misguided!). We are working as hard as we can, as fast as we can, to come up with a good product.

Sounds reasonable? :-)

PS: this may be a good time to add "This is my personal website and blog. The views expressed here are mine alone and do not necessarily reflect those of my employer."

Categories: personal, soft.dev, technology
Posted by diego on June 20, 2005 at 11:17 PM

the apple switch

Now that the details are out, it's a good time to really see what's what. There was some feverish speculation over the weekend regarding the switch, what it meant, and whether Apple was really ditching PowerPC, merely going to ask Intel to produce PowerPC chips, or something else entirely. The PPC-by-Intel idea never had legs, in my opinion, because even if Apple could take the IP away from IBM/Freescale (Freescale is the Motorola chip spinoff), Intel wouldn't have been able to get up to speed on improving those chips better than IBM could without, well, buying IBM Microelectronics. The problem for Apple with PowerPCs had less to do with production (although that was a factor too) than with power consumption and the PowerPC roadmap going forward. And given that Apple represents an estimated 3% of revenues and 2% of profits for IBM Micro, it was hard to see how they could get IBM to do what Apple needed, particularly since the aggregate yearly market for consoles is going to be some ten times bigger than Apple's.

The news "leak" was clearly done in a professional manner, to "soften up the ground" shall we say. It got everyone in the computer world to focus on Apple for days, before and after the event. Masterstroke, really.

This so-called "secret life" of OSX is a bunch of baloney; I've heard for years that OSX ran on Intel, and it absolutely made sense from a technical perspective, because of its Mach microkernel and BSD core. Keep in mind that OSX had already been ported from Moto's 680x0 to PowerPC (in its previous incarnation as Nextstep).

That, and the universal binary idea (i.e., a single binary for both PPC and Intel), is an example of why Apple can pull this off. Not only are they the only computer company to have survived major transitions in the past, they have time and again delivered environments that executed previous code with fairly good accuracy. And these days, processors aren't what they used to be in terms of lock-in. Linux is proof enough of the underlying shift that has made this possible: higher performance, which leads to most app code being written in high-level languages (C/C++/Objective-C and up), not to mention the rise of Java and scripting languages.
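Incidentally, the format behind universal binaries is the old Mach-O "fat binary" trick, and it's simple enough to parse by hand: a big-endian header with magic 0xcafebabe, a slice count, then cputype/cpusubtype/offset/size/align for each architecture (per Apple's mach-o/fat.h). A sketch that lists the slices, using a hand-built header for illustration:

```python
import struct

FAT_MAGIC = 0xcafebabe
CPU_NAMES = {7: "i386", 18: "ppc"}  # CPU_TYPE_I386, CPU_TYPE_POWERPC

def list_architectures(data):
    """Return the architecture names found in a Mach-O fat header."""
    magic, nfat = struct.unpack_from(">II", data, 0)
    if magic != FAT_MAGIC:
        raise ValueError("not a fat binary")
    archs = []
    for i in range(nfat):
        # each fat_arch entry is five big-endian 32-bit fields
        cputype, cpusubtype, offset, size, align = struct.unpack_from(
            ">5I", data, 8 + i * 20)
        archs.append(CPU_NAMES.get(cputype, hex(cputype)))
    return archs

# A hand-built two-slice header: ppc + i386 (offsets/sizes are dummies)
header = struct.pack(">II", FAT_MAGIC, 2)
header += struct.pack(">5I", 18, 0, 4096, 1000, 12)  # ppc slice
header += struct.pack(">5I", 7, 3, 8192, 1000, 12)   # i386 slice
print(list_architectures(header))  # -> ['ppc', 'i386']
```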

Additionally, the majority of end users really care about a few things: web, email, document processing, and maybe games. Then there are specialty apps, like Photoshop, but those will get ported no matter what -- and many of them already have Windows+Intel versions, so it's not as if the in-house knowledge doesn't exist. Everything else can run in the emulation environment.

And it is the web, particularly, that has really enabled this transition. Even if data isn't going into the web, it is at least going through the web to a large degree. Local datastores can get converted these days between different formats, which are way more open than they were 10 or even 5 years ago.

The web is, to me, the embodiment of the real shift that's happened in the last five years. We have gone from caring about applications to caring about data. And it's a good shift. We aren't done with this transition, but things are definitely going in that direction. Data rules, and the device with which you access it matters less. All of which creates more competition, lower prices, and, one can hope, better quality. Hear, hear!

Categories: technology
Posted by diego on June 7, 2005 at 5:02 PM

report: apple to release OSX for abacus, paper

from the fake-as-news dept.

CUPERTINO, CA -- On the heels of rumors of Apple's intention to switch from PowerPC to x86 processors, comes the stunning revelation that the company is looking further ahead to make an even bolder move: a change to what Apple "iCEO" Steve Jobs described as "the shift to neural computing."

Apple enthusiasts, who on the Internet had alternatively described the port to x86 as "undisputably moronic" and "the greatest thing since the last Finder update", were stunned to the point of silence for several seconds upon hearing the latest news.

In its most recent quarterly investor newsletter, Apple had cryptically disclosed it had acquired a controlling share of The Friendly Woodchuck Paper Mills, a Tuskegee, OK-based paper production concern. This move had investors and analysts scratching their heads, until this week's disclosure of Apple's skunkworks project dubbed "iWritePad".

"The iWritePad is going to change the way we think about computing," said an Apple official who wished to remain anonymous. "You get a blank slate on which to scribble free-form text, maintain ToDo lists, record your thoughts, even create architectural drawings, using the iPen. It will also come with an iAbacus built into the package for quickly performing mathematical operations. Unlimited storage, and all-natural product. Acid free, too."

But is this a brilliant move, or folly?

"The next logical step after moving from PowerPC to x86 would appear to be a switch to 680x0," said an Apple spokesman. "But Apple is all about innovation, so we're skipping steps. And those that say that these are moves in the wrong direction just don't know the meaning of the words 'wrong' or 'direction'. Or of 'word' for that matter."

OSX for iWritePad will include an advanced "iInk-based" "icon" "system" to "navigate" between the different "pages" of the "device". While pricing has not been finalized, the plan is to release the product at $299 per pack, which would include one iPen and 50 "letter" sized "pages".

"300 bucks seems a little bit much to charge for pen and paper," a skeptical analyst remarked, to which the Apple official snickered, "Does paper come with shiny blue-white headings and a cute little Apple logo? I think not!"

The Apple official also hinted at something even bigger in the pipeline: an update to their fledgling digital music product line, one which has fueled the company's recent profits. Asked to describe this project, the official refused initially, but then added: "I have one word for you: Vinyl."

Categories: technology
Posted by diego on June 4, 2005 at 7:24 PM

x86, watch out: PowerPC is coming!

ppc440epmod_200px.jpg

In all the hoopla in the last couple of weeks about the new game consoles announced at E3 by Sony (PS3), Microsoft (XBOX 360) and Nintendo (Revolution), one item has received little attention: every one of them runs on PowerPC chips.

Am I the only one who finds this significant? Think about it: every console after 2006 will have a PowerPC chip in it, all of them multi-core. Cell, the processor in the PS3, will have nine processing cores. (Intel is only now releasing the Pentium Extreme Edition, with two cores.)

Compared to the PC market, the console market is small (about 15 million units a year, I think). But Apple is also running PowerPCs, and with consoles clearly positioning themselves as favorites for the "home media center" title (and Apple expected to do something along those lines with the Mac Mini), this seems like the beginning of an important shift, and I suspect it will become pretty significant over time. Intel in particular has been struggling to extend beyond its core market of PCs and is under pressure from AMD as well, so this is clearly another blow.

IBM! Think back to its position in 1995... the tech industry certainly has surprises in store for everyone. :)

PS: btw, the PS3's specs just blow the XBOX 360's out of the water (in terms of processing power alone, it's 2 TFlops against 1, and it has amazing compatibility, integration with the PSP, Bluetooth, Blu-ray, and more). Now the question is whether the early launch of the XBOX will make up for that or not...

Categories: technology
Posted by diego on May 25, 2005 at 9:23 PM

an unexpected upgrade

Something unexpected happened during the time I was away: Eircom has, astonishingly enough, upgraded my DSL connection. I'm now regularly getting download speeds of 1.5 Mbps and sometimes more. The upload speed appears to be stuck at 128 Kbps though, which is still a problem for video/voice communications.

I still have to check that they are not charging me more too (not likely, but it wouldn't surprise me one bit), but good news anyway. :)

Categories: technology
Posted by diego on May 24, 2005 at 10:06 AM

samurize

Samurize is similar to Konfabulator (but Windows-only). Probably well-known to many, but it was news to me when Martin showed it to me last week. Cool!

Categories: technology
Posted by diego on May 24, 2005 at 9:40 AM

so that's why I'm not getting notifications...

I was just looking at the release post for Movable Type 3.16 when I noticed this item:

Fixed email notifications for entries and moderated comments which were broken for some users in the last release
D'Oh! And I had happily concocted the theory that SpamAssassin or some other unknown gremlin was to blame. Funny. It never even crossed my mind that Movable Type itself might be the culprit. I wonder why. Not likely that I'll overlook that again though.

Anyway, I just upgraded and it all works again. Cool.

Categories: technology
Posted by diego on May 3, 2005 at 1:38 PM

search and stuff

As I'm "walking my way back" into the blogosphere, I've been reading some of the stuff I missed over the last few weeks. An interesting one was this post by Mike on the "Recent Innovations in Search and Other Ways of Finding Information" panel. Very detailed notes, and lots of cool info there.

Ah, search, search, search, search. It's the hot thing again. And I keep wondering what kind of context information we can get automagically, beyond what cool tools like Yahoo's My Web or Google's different take on personalized search provide. And I'm not talking about the semantic web either; rather, the extraction of semantics from what's already out there, no fancy new tags required.

Yep, that's nothing new. I'd bet everybody in search is chasing that elusive goal. Makes for interesting thinking sessions though. :)

Categories: technology
Posted by diego on May 2, 2005 at 11:25 AM

Sixfoo! 660

What social networks would say if they could talk: Sixfoo! 660.

Hilarious.

Categories: technology
Posted by diego on April 9, 2005 at 1:03 PM

my stereo system, circa 2005

sound.jpg

When I was in the US last I had a 5-disc Technics CD changer, and the main reason I had gotten that was that even then I resisted the idea of having to switch CDs every single time I wanted to listen to something different. So one of the things I got recently at the Apple Store here in Palo Alto is a set of JBL Creature II speakers. Connected semi-permanently to the iPod's dock, I drop the iPod on it when I get home at night and I can listen to any of my music. I know this sounds kind of obvious, but actually doing it was a pleasant surprise at first. All my music in a tiny package that can be easily moved around the house (if necessary), or taken on the road. On top of which, the Creature speakers have really excellent sound, and they're not that expensive at around $100. This idea of data portability is partially what I wanted for my dream portable of 2005.

This post reads like an ad. Oh well. Call me a satisfied customer.

PS: Also cool: at night, the green light under the satellites makes them appear to be tiny ghosts, as in old-fashioned, bed-linen-covered ghosts, hovering motionless over the carpet. :)

Categories: technology
Posted by diego on April 5, 2005 at 8:04 AM

testing y!q "search in context"

yq_hed.gif
I'm going to be testing y!q search for the next few weeks to see how it goes (thanks Jeremy for the invitation). It's an interesting concept---it's been out for a few weeks now, I think. You set a few words or a paragraph as context and you get a search related to that text. The more text you provide, the less relevant the results get, unless for some reason the text is very focused on a particular topic. In my case, I'm using weblog post titles as context, since I generally have titles that are fairly descriptive. I tried using post summaries but the results weren't as good--what with my tendency to start (and sometimes continue) off-topic and all that.

Another thing I'm doing this week, when I have some free time :), is looking at the Yahoo! search APIs and related stuff. Should be interesting.

PS: I've also taken a further look at A9's OpenSearch. When I first noted its release I thought that A9 was providing their own results in RSS as well as aggregating others, but that's not the case, or at least I haven't been able to find how to do it. Although the "search exchange standard format" idea is cool, it's a bit weird of A9 to not adhere to their own output standard. Maybe this is something that will be added in the future (one hopes...)

Categories: technology
Posted by diego on April 5, 2005 at 7:25 AM

the problem with scoble's linkblog

While I enjoy perusing Scoble's linkblog when I have time (there are pointers to a ton of interesting stuff in there), I have not been so thrilled about his full-republishing technique. In my opinion, the question of who exactly created the content is going to be confusing for someone arriving there from a search engine (particularly for people who don't yet know what blogs are, much less linkblogs).

Even if it were obvious, though, the fact that he is republishing articles/posts wholesale without explicit permission means that a reader who would otherwise end up on my blog suddenly has no reason to do so. I have avoided commenting publicly on this, waiting to see if it changed, but it hasn't.

For example, check out his reposting of my take on AJAX. It's a long post (something relatively common for me) and by the time you scroll down to the second paragraph, you have forgotten that URL at the top. Many people will just get to the end, and move on to the next linkblog post.

Republishing content wholesale without permission is a bad idea. And a linkblog is supposed to be made of links, not full posts.

Robert, I suggest you simply post links and titles, rather than full posts---at most, a 50-word snippet or comment would do (similar to what Kottke does for his linkblog posts). If you think that's unreasonable, I'd ask you to remove any posts of mine that you may have republished over there and to avoid republishing other posts in the future. Thanks. :)

Categories: soft.dev, technology
Posted by diego on March 20, 2005 at 6:09 AM

opml, icons, and the nooked rss directory

A couple of months ago I was looking for options for identifying OPML output while going through the design stage of the Nooked RSS Directory.

The question was: if the white-on-orange XML icon is a common way to identify RSS feeds, what's the equivalent for OPML?

One answer was to use the same icon, which is in fact generic, to represent OPML as well. This solution, however, only works when you have one type of output or the other, not both at the same time. Another possibility was to use one of the many different icons that turn up in a Google Image search for opml. Another option was, of course, to create an entirely new icon. The problem was to come up with something that a) wouldn't be unfamiliar to users, b) would be unobtrusive, and c) would maintain the value of the XML icon for the RSS feed.

On one hand, the line was crossed long ago when different applications overloaded the XML icon, keeping its look and feel while changing its contents (Yahoo!, for example, has modified the original icon here, and they also have a similar-looking icon with "RSS" instead of "XML" to subscribe to Yahoo! Groups, example -- there are many others; I'm not singling out Yahoo! in particular). On the other hand, there's no need to create even more confusion. Dave has advocated using the XML icon for the appropriate XML output of a page (be that RSS, OPML, etc.), maintaining the icon's value by leaving both its contents and its look and feel intact. After some thought, I concluded that in this case the second approach made more sense: barring a particular design or business need (of which there are many good examples, but which wasn't a factor in this case), simplicity was the best option. Additionally, avoiding the user confusion that would result from a non-standard icon is definitely a good thing.

So what I ended up doing was to maintain the common XML icon to point to an RSS feed (the most accepted use of it by far) and to link to the OPML using a simple text link, enough to be unobtrusive while remaining usable for those that know what they're looking for (after all, not everyone knows what "OPML" is). As it turned out this was a fourth option: have no icon at all.

For example, check out the directory's Arts & Humanities page. The same page can be viewed as OPML and as RSS. (The link to the OPML view is still there, to the right --- Nooked replaced the link that directs to the RSS view with a small ad after I delivered the app, so essentially imagine that the white-on-orange XML icon is where the Nooked ad is).

Interestingly, this solution was also pretty good in terms of matching the functionality needs that we had (unobtrusive and at the same time easy to identify for experienced users).

Related: there's some cool stuff that can be done by using the OPML output as a start, for example, reading the OPML of feeds listed in a category and presenting a full RSS data view of them (different from showing the entries of the directory in RSS, which the directory does) with little coding. Then the loop could go on by creating OPML views of the category feeds, and so on... interesting possibilities for meta-aggregation and possibly other kinds of data processing/analysis.
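As a quick illustration of the first step of that loop (the OPML snippet and function name below are made up for the example, not taken from the actual directory), pulling the feed URLs out of a category's OPML is a few lines of code:

```python
# Extract feed URLs from an OPML listing so they can be aggregated.
# The sample OPML is illustrative, not from the Nooked directory.
import xml.etree.ElementTree as ET

def feed_urls(opml_text):
    """Return the xmlUrl of every feed outline in an OPML document."""
    root = ET.fromstring(opml_text)
    return [node.attrib["xmlUrl"]
            for node in root.iter("outline")
            if "xmlUrl" in node.attrib]

sample = """<opml version="1.1">
  <body>
    <outline text="Arts &amp; Humanities">
      <outline text="Some Feed" type="rss"
               xmlUrl="http://example.com/feed.xml"/>
    </outline>
  </body>
</opml>"""
```

From there, fetching each URL and merging the entries gives you the meta-aggregated RSS view with very little extra code.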

Categories: technology
Posted by diego on March 18, 2005 at 8:45 AM

gumstix

I have absolutely no idea what I'd do with a bunch of Gumstix Waysmall systems, but, like Cringely, I keep thinking that there's something cool that one should be able to get them to do. A mini-cluster, connected via bluetooth! Right, but for what exactly? Anyway. Certainly something to keep in mind for when I need a swarm of tiny computers to do something. :)

Categories: technology
Posted by diego on March 15, 2005 at 9:50 PM

(un)structured

Matt points to a new wordpress plugin for structured blogging. Kewl. Reminds me of.

Then, completely unrelated to this, but way more fun is Ted's pointer (originally from Boing Boing) to LoTR in l33t:

**The white light fades, revealing Gandalf
Aragorn: "WTFOMGBSHAX!"
Gandalf: "Teh balrog was noob. He got pwned. I can dance all day, I can dance all day!"
Gandalf: "Boom! HEADSHOT!"
Gandalf: "I R GANDALF TEH GREY!"
Gandalf: "I R GANDALF TEH WHITE!"
Gimli: "FFS, make up your mind! ><"
To which one can only say: LOL!

Then we've got Andrew Sullivan who thinks that society is dead. Why? Because people in Manhattan listen to music on iPods rather than to the noise of traffic and such. Right. Lighten up, dude. Music players have sold, since inception, maybe, what, 100, 200 million units? There are about 6.3 billion people in the world without music players. Maybe society isn't dead quite yet? More than a billion of them don't even have running water, but they haven't "fallen under the spell of iPods" either. So it ain't all bad...

iPods and blogs. Now, that is something that should keep you up at night.

Woody Allen comes to mind: "What if everything is an illusion and nothing exists? In that case, I definitely overpaid for my carpet."

Also, Adam Curry: "Veni, Vidi, Velcro."

While working, I've spent half a day listening to U2, the other half to Sinatra. Both rock, even though Sinatra didn't. Language is funny that way. If I was really feeling recursive, I'd say funny, not ha-ha funny (like this Bono vs. Carly post is), which in itself is funny, but not...

Well, what do you know, I guess I am feeling recursive.

Categories: technology
Posted by diego on March 10, 2005 at 12:08 AM

trackety track

[via Mitch]: Tracking PCs everywhere on the Net (paper):

The technique works by "exploiting small, microscopic deviations in device hardware: clock skews." In practice, Kohno's paper says, his techniques "exploit the fact that most modern TCP stacks implement the TCP timestamps option from RFC 1323 whereby, for performance purposes, each party in a TCP flow includes information about its perception of time in each outgoing packet. A fingerprinter can use the information contained within the TCP headers to estimate a device's clock skew and thereby fingerprint a physical device."

Kohno goes on to say: "Our techniques report consistent measurements when the measurer is thousands of miles, multiple hops, and tens of milliseconds away from the fingerprinted device, and when the fingerprinted device is connected to the Internet from different locations and via different access technologies. Further, one can apply our passive and semi-passive techniques when the fingerprinted device is behind a NAT or firewall."

And the paper stresses that "For all our methods, we stress that the fingerprinter does not require any modification to or cooperation from the fingerprintee." Kohno and his team tested their techniques on many operating systems, including Windows XP and 2000, Mac OS X Panther, Red Hat and Debian Linux, FreeBSD, OpenBSD and even Windows for Pocket PCs 2002.

"In all cases," the paper says, "we found that we could use at least one of our techniques to estimate clock skews on the machines and that we required only a small amount of data, although the exact data requirements depended on the operating system in question."
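To make the idea concrete, here's my own rough sketch (not the paper's code; the function name, tick rate, and numbers are illustrative): with the TCP timestamp option ticking at a known rate, the skew is simply the slope, over time, of the offset between the device's clock and the measurer's:

```python
# Rough sketch of clock-skew estimation from observed TCP timestamps.
# samples: list of (receive_time_seconds, tcp_timestamp) pairs,
# ts_hz: the device's assumed timestamp tick rate (often 100 Hz).

def estimate_skew(samples, ts_hz=100):
    """Estimate clock skew in parts per million via a least-squares fit."""
    t0, ts0 = samples[0]
    # For each observation: elapsed time on our clock, and the offset
    # between the device's elapsed time and ours.
    xs = [t - t0 for t, _ in samples]
    ys = [(ts - ts0) / ts_hz - (t - t0) for t, ts in samples]
    n = len(samples)
    mx = sum(xs) / n
    my = sum(ys) / n
    # Slope of offset vs. time = drift rate = skew.
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den * 1e6  # parts per million
```

Feeding it synthetic samples from a clock running 1000 ppm fast recovers roughly that skew; the paper's actual techniques are of course far more careful about network noise and outliers.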

Wow.

Categories: technology
Posted by diego on March 8, 2005 at 11:33 PM

the nooked rss directory

logo-nooked.png

So, last week Nooked publicly released the application that I wrote for them during late December and January: The Nooked RSS Directory.

The directory is an interesting app: basically a resource of corporate RSS feeds (including podcasts). Data is still being added, so some categories don't yet contain many entries. One of the points of this particular app is that there is "editorial control" (similar to the Yahoo! directory), so a level of relevance should be maintained throughout the entries.

While the nature of blogs (and the web itself) makes centralization difficult to maintain, directories are of course useful resources. For the many people who are only now approaching RSS from a corporate-communications point of view, the directory is a useful place to get started, find information on the companies or products they are interested in, and submit their own feeds.

It took a bit longer than originally planned, but it's great to finally see it deployed. The OPML view of the listings has some interesting consequences--but that's for later! :)

Categories: technology
Posted by diego on March 1, 2005 at 9:02 AM

outlets and high speed connections

One thing that I had forgotten (or probably repressed :)) about the US: there are so many outlets in apartments! Outlets everywhere, and lots of them. I may still need extension cords, but not so far. It's great! Such a difference from my outlet-challenged apartment in Ireland.

Another thing: happiness is a 4 Mbps connection at home. I can't say I miss my wimpy 512/128 kbps DSL in Ireland. At all.

Also, I had Internet connection in the apartment before I had phone service. An Ethernet port in the bedroom and another one in the living room.

If that's not a Silicon Valley experience, I don't know what is. :)

Categories: technology
Posted by diego on February 26, 2005 at 6:54 PM

how to reboot an iPod photo

One of the things that happened while traveling was that about an hour into the LHR-SFO flight my iPod photo went nuts and started skipping all songs without playing them. I came to the conclusion that it needed a reboot, but not having a notebook with me (I'm just carrying my data on a portable hard drive) I couldn't connect it and reboot it that way. I assumed (correctly, as it turns out) that there should be a way to reboot it with some key combination, but I didn't bring the iPod manual (such as it is) with me. I did have iPod: The Missing Manual, but that was in one of the suitcases and so out of reach.

I tried a lot of options, mostly involving key combinations, but couldn't make it happen. Once I got here, I was able to check the book and the instructions to reboot both regular iPods and iPod minis were there, but, alas, no instructions to reboot an iPod photo. I decided to try the iPod mini way first (since it's a newer product) and it worked.

This is probably common knowledge to most long-time iPodders, but it was news to me. :)

Here's how to do it: flip the Hold switch to ON, then back to OFF. Then press MENU and SELECT simultaneously and hold for a bit; on release the iPod should reboot. According to the missing manual, it is possible that the iPod is misbehaving due to battery-charge confusion (not sure how that would happen, but anyway...), and to rule that out you can plug it into the charger first.

Speaking of iPods: did you know that European iPods can't play as loudly as American ones? Yep, the EU enforces "stricter" standards for maximum volume in portable devices. Apparently, people aren't smart enough to know when they are blowing their eardrums off, or when they can't hear anything else except the music, and assuming they did want to do that, for some bizarre reason, then they shouldn't be allowed.

Now that is government regulation run amok.

Categories: technology
Posted by diego on February 25, 2005 at 6:58 PM

today's (re-)readings

Some articles I've been re-reading, in no particular order:

If you're wondering why these articles, it's simple: as I've been, in the last few days, trying to impose some order on the unremitting chaos that is my apartment, I've found articles, books, and references that I had read long ago and "shelved" somewhere. Some of these, as I find them, I set aside and read again. Hence the random nature of the list. :)

Categories: technology
Posted by diego on February 13, 2005 at 11:57 AM

a tool's influence

I keep seeing flashes of text, sentence-bursts that I have the impulse to blog and that then, inevitably, evaporate in a "nah, too short." I have no doubt that in my case not just the tool (Movable Type) but also my weblog's design have influenced me to go for longer, article-like posts rather than the shorter posts we see elsewhere. Similarly, weblogs based on other tools (most notably Radio) tend to be more of a mix of snippets and medium-length posts, with the occasional medium-length article and only every once in a while a really long post...

So what I've been wondering is: how to merge these two? Not just in terms of the tool (currently no tool seamlessly moves from one end of the spectrum to the other; at best, some provide plugins to deal with linkblogs, etc., and Radio has both "posts" and "stories", but the granularity isn't enough and stories sit outside the normal flow of the weblog), but also in terms of the UI, navigation, and content evolution. A newspaper view comes to mind.

I've been looking for something entertaining to do with PHP. Maybe this is it.

Categories: technology
Posted by diego on February 13, 2005 at 11:42 AM

tag surfin'!

Yesterday Russ "officially" announced a service that he and Anthony are developing, called tagsurf, a "tagging-enabled hyperforum." Russ showed me one of the very first revs when I was visiting last month. It is a very cool idea: create loosely coupled threads (or "forums"), where the coupling is determined by tags, the new new thing that's been making the rounds in the blogosphere for the last few weeks (this Salon article has a good summary on tags). Because you can attach any number of tags to a message, you are actually enabling multi-dimensional browsing of topics for very little extra effort. (MetaFilter started doing something similar in January.)

Anyway, go and give it a try. And congrats on the release guys!

Categories: technology
Posted by diego on February 10, 2005 at 2:58 PM

the new toy

ipodminisilver.jpg

Thanks Feedster!!!!

PS: Funnily enough, I got the iPod Photo exactly one month ago, on January 9th. I should probably decree the 9th to be "iPod day" on d2r or something. :)

Categories: technology
Posted by diego on February 9, 2005 at 12:58 PM

so THAT'S why!

I had a sequence of reactions to this article in the New York Times which deals with Google's recent approval as an Internet registrar.

First, the obligatory rolling of eyes at people "wondering" and "worrying" and "speculating" about every single move of Google. Let the guys do their work in peace ok? If it's world domination, then fine, let them try.

Second, was a raised eyebrow as I read the following:

Eileen Rodriguez, a Google spokeswoman, hardly quelled the speculation by explaining that the whole thing was really a learning opportunity for the company.
To this, I followed with an interested (muttered) "A-ha". But then the actual quote went on:
Google "has become a domain name registrar to learn more about the Internet's domain name system," she said recently in an e-mail message. "While we have no plans to register domains at this time, we believe this information can help us increase the quality of our search results."
To this, I just laughed out loud, quite literally, for several seconds. I still giggle at it every time I read it.

Come on boys! That's the line you feed to your spokespeople? "We became a domain name registrar to learn more about the Internet's domain name system"?!?!?? Yes!

LOL LOL ROFL!

Maybe this is practice, you know.

When Google announces they are releasing, say, a web browser, they will say "We are not interested in the browser business, this is just to learn more about web browsers and stuff."

Then, if they want to release an Operating System: "We are not interested in OSes. This is just to learn more about computers. And stuff."

And Larry or Sergey will show up and whisper something in the ear of whoever is giving the interview, and then they will add: "Right. To improve search results too. You know?"

Cue Mr. Burns: "And remember... a shiny new donkey for whoever brings me the head of Colonel Montoya."

And still laughing... :)

PS: Before anyone starts "explaining": yes, I know that some of the info is available only to registrars, and that it can help, etc., etc. I just find the spokesperson's choice of words quite hilarious (aside from imagining the innocent look in their eyes while they essentially say, straight-faced, "Thinking of a product that is not pure search engine? Us? Noooooooooooo!").

Don't you? :)

Categories: technology
Posted by diego on February 7, 2005 at 1:26 PM

microsoft natural keyboards: evolution or devolution?

msk.png

I am, like many of us, very particular about certain things in my work environment, and there's stuff that I have "requested" (read: demanded!) everywhere I've been in the last few years. One of those things is a Microsoft mouse. The other is a Microsoft Natural Keyboard.

Yep, that's two Microsoft products I actually like a lot (the other main one is MS Bookshelf, which sadly got discontinued in 2000 and got swallowed by Encarta, which is ok but too big for basic dictionary/thesaurus needs).

I got a Natural Keyboard when it was first released (it was one of the first USB keyboards) and then got a new one in '99 when MS released the Microsoft Natural Keyboard Pro. That very keyboard has been with me since then in every machine I have at home, crossed oceans back and forth and been in as many cities as I've been, and it's been a rock. Nice feel, good construction and key placement. I particularly like that the keyboard is huge, and since I have big hands I feel comfortable with it.

I have yet to meet another person who feels as comfortable as I do with this keyboard; I wonder if I'm the only one buying them. :) But my problem is that in recent years, as the new wireless versions have appeared, Microsoft has been "compressing" the width of the keyboard, with the result that even the Pro version compresses the Insert/Delete/Home/End/PgUp/PgDn key set from a 3x2 configuration to a 2/1/2 configuration (the Insert key is gone, the alignment is vertical instead of horizontal, and the Delete key is two keys tall).

What truly drives me crazy about this is that there is no option to get the old keyboard layout (and don't even get me started on the crosshairs cursor keys of the Natural Keyboard Elite! For a while it seemed that every new keyboard came with a different key layout). You either get the Natural Keyboard layout with vertical keys over the cursors (and compressed design) or a regular keyboard layout, with more space.

Also, every new version keeps adding weird function keys, integration and whatnot. It's starting to become a problem, like the million features in Microsoft Word that no one uses. Wake up MS! You had a great product with the basic Natural Keyboard and regular keyboards, along with Intellimouse. Stop adding buttons and lights and gizmos. Just make a good, simple keyboard. Notice how Apple keeps going for simplicity? Try that for a change!

So last night I "retired" my old keyboard/mouse set and now I've got a new one, the Wireless Optical Desktop Pro, which is as close as it gets to the old version. It's the Natural Keyboard plus the IntelliMouse Explorer; alas, Bluetooth versions don't yet come in the "Natural" split-keyboard format.

Here's hoping that when they do, Microsoft will restore the standard Insert/Delete/Home/End/PgUp/PgDn 3x2 key arrangement over the cursor keys. And enlarge it a little bit. And remove some (just a few) of the crazy functions they keep adding to the keys. That, and I'd be happy.

PS: Since I'm asking... also, there's a trend of designing Mice so that they adapt ergonomically to the hand. This is fine, but it becomes a problem for people like me, who frequently switch the mouse hand throughout the work day. I'll probably have to end up working with two mice at once. :)

Categories: technology
Posted by diego on February 6, 2005 at 1:13 PM

russ @ yahoo! v2

Russ is now full-time at Yahoo! Congrats!

Categories: technology
Posted by diego on February 5, 2005 at 4:29 PM

what I'll get... right after I get a powerbook

ilap_cushion.jpg

The iLap. Not expensive, and very cool (literally and figuratively). Even if I don't get a powerbook soon, I'd still want one of these for the Thinkpad, assuming that it works fine with it that is. [via Tom Yager].

Categories: technology
Posted by diego on February 5, 2005 at 11:20 AM

funny stuff

SF Gate: Why does Windows still suck? or "how I learned to stop worrying and appreciate the 4 minutes my windows computer works without being infected". (I found this through slashdot, but I'm too lazy to look for the link).

The Onion: Google in 2005, top secret plans from the googleplex for this year, including "Enter beta testing of Google Apartment, which will let users search for shoes, wallets and keys", "Patent the idea of looking for something" and "Occasionally shut down so people stop taking them for granted."

Hee-larious. :)

Categories: technology
Posted by diego on February 5, 2005 at 11:13 AM

firefox weirdness

I've been using Firefox for quite a while now, but suddenly something strange started happening: if I'm editing something in the browser (say, a post) and I do a "preview", I used to be able to press the back button and continue editing. Now, though, when I press the back button the form contents are wiped out. I can press "forward" and the contents are re-posted, so they're still in there somewhere. Strange. I'm pretty sure I didn't change any settings in the browser. I wonder why this started happening...

Categories: technology
Posted by diego on February 2, 2005 at 11:19 AM

the arrival of trackback spam

At least in this weblog. This morning there were some 50 spam trackbacks to different entries.

I've been waiting for this to happen--until today trackback had never been abused massively here. But it was clearly just a matter of time, particularly since trackback allows senders to include snippets, and in many weblogs those are rolled in with the rest of the comments.

Conclusion: I'll do what I did for comments: change the 'allow trackbacks' flag. Luckily I switched to MySQL not long ago, making it easier to access the raw data, since Movable Type still doesn't support a "close all comments & trackbacks in entries after this date" feature (and for me using SQL is easier than using a plug-in).
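For anyone curious, the batch-close boils down to a single UPDATE. A sketch (the mt_entry table and column names are from MT 3.x's MySQL schema as best I recall, so double-check against your own install; SQLite stands in for MySQL just to make the example self-contained):

```python
# Close comments and trackbacks on all entries older than a cutoff.
# mt_entry and its columns are from MT 3.x's schema as I recall them --
# verify against your install. SQLite is used here only for the demo.
import sqlite3

def close_old_entries(conn, cutoff):
    """Disable comments and pings on entries created before cutoff."""
    conn.execute(
        """UPDATE mt_entry
              SET entry_allow_comments = 0,
                  entry_allow_pings = 0
            WHERE entry_created_on < ?""",
        (cutoff,),
    )
    conn.commit()

# Tiny mock of the relevant columns, just to show the effect.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE mt_entry (
    entry_id INTEGER PRIMARY KEY,
    entry_created_on TEXT,
    entry_allow_comments INTEGER,
    entry_allow_pings INTEGER)""")
conn.executemany(
    "INSERT INTO mt_entry VALUES (?, ?, 1, 1)",
    [(1, "2004-06-01 12:00:00"), (2, "2005-01-15 09:30:00")],
)
close_old_entries(conn, "2005-01-01 00:00:00")
```

Running the MySQL equivalent directly against the MT database, followed by a rebuild, should have the same effect.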

Anyway. Another line crossed...

Update: Very strange. A couple of hours after closing the trackbacks and rebuilding the weblog, spam restarted. I verified that the entries had trackbacks closed, and yet spammers were able to post trackbacks anyway. I tested sending a trackback myself to an entry that was closed, and correctly got the message "Ping 'ENTRYID' failed: This TrackBack item is disabled." The trackback was not received. I have to assume that they have found a way to post trackbacks even if they are closed... (some unknown hole in MT's trackback implementation? Or maybe the additions were in a queue somewhere and got in anyway, since there were so many?). As a temporary measure, I've changed the name of the trackback script, so they shouldn't be able to post to the URL they have crawled.

Another update: Definitely some form of queueing was at play. I have done some more experiments and looked at my logs; enabling/disabling the trackback script returns the spammer (which is still going at it) a 404 and a 500 HTTP error alternately, so the check is working. Leaving the old script disabled is better, obviously, since it doesn't hit the MT db for checks.

Categories: technology
Posted by diego on February 1, 2005 at 10:19 AM

another microsoft gem

I read this first over at the Wall Street Journal but then found a CNET article on it.

Last year Microsoft actually lost a case against the EU for predatory practices (sounds like something from the Discovery Channel, doesn't it?) in the market for media players/formats. Essentially, the EU forced Microsoft to distribute a version of Windows without Windows Media Player bundled in, allowing customers to choose.

So what did MS do? It said "fine". Then it went and called this product "Windows XP Reduced Media Edition". Then it also removed the ability to do some basic things, such as playing music CDs (something that even Windows 95 could do).

If you are a consumer, and are offered, for the same price, a machine with "Windows XP Home Edition" and another with "Windows XP Reduced Media Edition", which one are you gonna get? I can hear the Shadow-like laugh in the background when the MS representative says (in the article) "We believe this name complies with the commission's orders".

You know, it's stuff like this that pisses people off about Microsoft. Maybe you could argue that it's MS's "Darwinian" attitude that allows it to post record profits and market gains, even when most in the tech industry consider it passé.

But isn't there a point when MS just has to say, "okay guys, we've misbehaved, how do we make things better?" instead of fighting tooth and nail for every single tiny scrap of whatever? I have stated more than once here that Microsoft's desktop monopoly is not necessarily illegal in itself; it's the predatory practices and the illegal actions taken to maintain and defend that monopoly that are the problem (sadly, it's also in question whether the monopoly would have gotten this far without those illegal actions). Maybe if MS started to differentiate between the monopoly itself and what they do to maintain it, things would get better, don't you think?

No, I'm not holding my breath. But hey, it's Saturday morning. I thought it was Friday. Let's give the lad some leeway.

Bonus: a post from November 2003, where I wonder if WinFS will be delayed again ("never say never" eh?), some more day-dreaming about MS embracing, or at least not actively attacking, the Web (sure), and some Descartes thrown in for good measure.

Categories: technology
Posted by diego on January 29, 2005 at 12:55 PM

spamassassin and comment spam

I have the blog set up (as I suspect most others do) to email me when a new comment is pending moderation. The main reason for this being, of course, comment spam. I get lots of comment spam, but it's easy to ignore.

But I also have SpamAssassin set up to dump any email that scores 5 or higher (I think). And SA learns. Put two and two together, and it turns out that after a while SA apparently came to the conclusion that any email generated by comments on the weblog was spam. Now I don't get notified of any comments at all. I'll have to dig into the SA config and figure out which rule is triggering this, or how to override it. I'm not even sure how it's separating them, since the trackback notifications are still arriving, and they have a similar message pattern.
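The blunt override is probably a whitelist entry in SpamAssassin's `user_prefs`. The address below is a placeholder for whatever From: address the MT notification mails actually carry; the scored-rule variant's Subject pattern is a guess too, and per-user scored rules need `allow_user_rules` enabled in some setups:

```
# ~/.spamassassin/user_prefs -- placeholder address, adjust to match
# the From: header on MT's notification mails
whitelist_from mt-notify@example.org

# Or, less drastically, subtract points instead of whitelisting outright
# (the Subject pattern here is a guess at MT's notification subject line):
header MT_COMMENT_NOTIFY Subject =~ /new comment/i
score  MT_COMMENT_NOTIFY -4.0
```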

You know, I'd be pissed at SA if it weren't pretty much on the mark in its decision. Hopefully we'll eventually have SA connected to something like Cyc so that it knows a bit more about the world. Now wouldn't that be a treat.

Categories: technology
Posted by diego on January 29, 2005 at 12:41 PM

old shoe, new shoe

For some reason I just remembered an exchange between Denis Leary and Willie Nelson in "Wag the Dog" where Leary's character utters those words. That's how random my brain is.

Dylan, a good friend whom I stayed with in California (the first few days I was there at the beginning of January), has retired his weblog. He has a number of excuses (I mean, reasons!), which I want to ignore and hope he'll get back to blogging at some point, even if haphazardly. Dylan's foray into blogging is one of the reasons why I'm blogging, so at the very least, here's a digital Guinness raised in Warmbrain's honor. :)

On the new shoe front, Martin, another friend, has finally acquiesced to my constant babbling about blogs (and my usual "start a weblog! now!") and started a weblog :). He's one of the best developers I know, so he's sure to post some interesting stuff. Check out his wide range of interests (and knowledge) in some of his initial posts, from Keystroke emulation in Win32, to Java Swing Tips to Stack trace annotations. If my brain wasn't fried (18 hour day so far and all) I'd make some interesting comments on those, but for the moment I'll just link to them. Welcome Martin! :)

Categories: technology
Posted by diego on January 26, 2005 at 12:05 AM

ipod photo: first impressions

ipod-dock.jpgOkay, so I was going to do this all at once but I realized that it would end up being an impossibly long post. Better to split things up. This is stuff that I wrote down as I started using it, about a week ago.

but why oh why?

While you'd think this question has an obvious answer (e.g., hype, the "cool" factor, "just because"...), in my case it's not so obvious. You see, I generally prefer solid-state electronics over anything with moving parts, and that meant that for a while I was skeptical of an iPod for myself (even as I knew the value they had for most others). It's one of those techno-snobbish things.

In any case, what happened is that I realized that over time I started to view my digital music collection as one "set" or "unit" rather than as disjointed playlists. No doubt iTunes has a big role to play in that, but it's a natural evolution IMO. Once you have 20 gig of music on your machine you get used to selecting anything, anytime. And once you're used to that, it's pretty hard to use puny 1/2 gig flash players or whatever: you never know what to choose "for the road". My first MP3 player was the original Rio 300, then a Rio 600, then a Rio 800. I knew the flash-player use case, and in the last couple of months it became obvious that it just wouldn't cut it anymore.

In fact, not even the iPod mini (or similar) would cut it. I wanted a player that would let me take all my music with me. So last week as I walked around the Apple Store in Palo Alto, I kept drifting back to the little pile of iPod boxes in one corner. I knew that I was getting an iPod mini from Feedster soon, but I thought that they would be useful in different cases.

Eventually, I gave in. :)

Package, installation

Everyone gives points to Apple for design, but not many comment on the packaging, which I always think is an integral part of the "experience". The iPod is no exception. The little cubic box opens in half like a book, and for some reason by the time you get to pulling cables out of it, you're already smiling. It's design genius.

The package includes pretty much everything you need to get started, and the included carrying case is good, but not great, since you can't use the controls with the iPod in it. It's still useful once the playlist has been set, or if you get a remote. As far as accessories go, the iTrip (FM radio broadcast direct from the iPod) looks pretty sweet, but I didn't get it. :) Same for the docks with built-in speakers.

Setup is pretty easy--the only wrinkle is that it piles on a couple of installations of different things (iTunes, QuickTime, iPod updater), which is less simple than it should be, but still manageable. The fact that iTunes still has to be installed separately is kind of a pain; even afterwards, updating iTunes means a full download and reinstall. Maybe it's time for Apple to come up with a Windows version of OS X's Software Update feature?

Taking it for a spin

Anyway, you're up and running fairly quickly. Copying songs over USB isn't bad (and USB 2.0 is great; I haven't been able to try FireWire, but I have no doubt it will also work well). In the case of plain old USB, you just have to wait a couple of hours until all those GB are transferred.

One slight issue with the dock (bundled with the iPod photo; I think other models don't include it, but I might be wrong) is that while the iPod is docked, the screen shows a "do not disconnect" message or some such. Which is a problem, because you don't really think to look at the iPod's screen before pulling it out of the dock. It probably doesn't create problems most of the time, but it might when a transfer hasn't finished... better sync in the software (so that the device is released as soon as possible), plus some OS-level notification for the moments when you absolutely, positively can't unplug it, would do the trick.

The headphones are fantastic. I am not a fan of earbuds, but the iPod's are really comfortable and sound incredibly good. Apparently the drivers are built with neodymium, as opposed to the more common aluminum, cobalt, or ceramic. I definitely think there's a difference. In any case, difference or not, they sound great, and they're the first earbud earphones in years that I've used for more than a few hours without them bothering me.

Battery life is pretty good: I've definitely gotten over 10 hours, but I'm not sure if I reached the advertised 15 for this model. This is also hard to gauge, since battery usage varies wildly with use (i.e., how often the hard drive has to spin up, backlight on, etc.). But I definitely had no problems with it this Monday/Tuesday on a nearly 20-hour trip. The battery was almost exhausted at the end, but it still had some juice.

So far so good

So, quick conclusion: it's a pretty great product. Already I've listened to stuff that I hadn't listened to in a while, just because the opportunity doesn't present itself (or rather, when it does, you're nowhere near the music you'd like to hear). I gave the output-to-TV feature (with the included AV cable) a quick try and it's great, but that, and the photos (along with the extras: calendar, notes, games, etc. :)), is something for next time.

Categories: technology
Posted by diego on January 20, 2005 at 8:56 PM

happy birthday russ!

Today is Russ's birthday! Go send good wishes to his weblog. :)

Categories: technology
Posted by diego on January 20, 2005 at 5:11 PM

iCaved

ipod.png
Yes. Yes I did. Review upcoming. :)
Categories: technology
Posted by diego on January 19, 2005 at 8:41 AM

new design

I have succumbed to the temptation! Yes! A few weeks ago I spent some time playing around with new designs, but none of them really convinced me. I was looking at them again this morning, though, and I realized that one of them wasn't too bad, and it could be a good starting point, considering also that the previous design was nearly a year old. So here it is... let's see how it works out. I still have some pages to update, particularly for the individual entries.

The new design is not that different really, mostly it reduces clutter and organizes things a bit (and gets rid of time-dependent elements in individual entries). One "feature" I like about it is that while it uses CSS, there is some structure maintained through tables (which the CSS kinda overrides). Purists will probably scoff at this, but I did it to maintain the design parameters when looking at the site with the venerable Lynx and older browsers. Lynx-compatibility is a crucial feature! Heh.

PS: you might need to do a hard-refresh in your browser to reload the CSS. Let me know if you see problems in a particular platform. Thanks!

Later: Holy Cow! Now that each individual page doesn't include links to archives and such, the rebuild process is about two orders of magnitude faster. It's now rebuilding some 20 pages per second, whereas before it used to take 5 seconds per page (Rebuild of a single entry on its own takes longer because it also rebuilds all associated indexes, but it's also much faster!). I've just rebuilt the entire site in like 4 minutes! That was all I had to do to get it to run faster? This makes me think that maybe the default entry/daily/archive templates for MT should not include so many dynamically generated links. Another lesson there somewhere...

Categories: technology
Posted by diego on January 3, 2005 at 2:35 PM

my dream portable for 2005

frontview.png

Okay, since I'm tired of waiting for someone at some PC manufacturer to come up with what I really want, and since they don't appear to have developed mind-reading yet, I thought I'd spec it out here, including some quick sketches :) (click on the images to see a larger version).

I often think about what I'd really like in a portable and surprisingly my thoughts have been pretty consistent of late. Before you start saying that this already exists, read through the specs--the devil is in the details!

The basics

To start, I don't want anything that doesn't exist today. It's all a matter of packaging and connectivity. Fairly obvious things that my dream portable should have are:

  • Decent processor, 1 Gig of RAM.
  • (Relatively) small built-in hard disk. A 40 Gig 1.8 inch drive (like the one in the iPod) will do.
  • High-res TFT (Widescreen would be nice but not necessary).
  • WiFi, Bluetooth, Ethernet, a couple of USB2 connectors, and a couple of FireWire connectors, although I'd be happy with at least one of each of those. Throwing in GSM and CDMA chips for cell connectivity would be a plus. :)
  • Touch-sensitive screen (i.e., Tablet mode, including display rotation).
  • 2.5 pound basic package without keyboard, 3 pounds with keyboard, and 4 pounds max with the storage & adapter (I'll get to what I mean by "storage" in a bit). The keyboard should be separate from the machine, connected via Bluetooth, with a built-in trackpoint or similar.
One issue with a machine like this is that as it gets lighter, you need more weight at the base, which is why in my design the base flips and locks with the hinge to provide support for the screen.

So far I imagine few people would disagree with me, except maybe on the size of the built-in disk. Ah, but here's where what I want differs from what's out there. I separate between base storage and personal storage. Let me explain.

The Key: Storage

Base Storage is what's required to run the OS and applications. It's not your data, it's something that is machine-dependent (mostly) and relatively stable. For this, 40 gigabytes is pretty good for today's needs.

Personal Storage is for my data. These days, when we're running around with tens of gigabytes of MP3s, videos, and such, not to mention the rest of the stuff (my personal data store, not including media, is about 4 gigs), we need a lot more than what portables provide. Additionally, synchronization between our devices is a nightmare; data ends up duplicated for no reason. So what I think is that we should decouple our storage from everything else, and that's where things get interesting.

This personal storage device would be reasonably big, 250 gigabytes minimum, maybe reaching into a terabyte by the end of the year on single platter drives (on multiple 3.5 inch platters we're probably there by now). Yes, I know that the idea of "Brick PCs" has been floated in the past, but I don't want a PC on a brick. I just want a drive on a brick.

More on the 'Storage Brick'

So where does the personal storage go in my dream laptop? Look at the back view:


The personal storage unit "docks" into the back. That is, instead of docking the machine into an expansion unit, just dock the drive into the machine!

This approach has several advantages. First, I should be able to selectively synchronize some important data into the permanent storage of the portable, for when portability matters more and I don't want to carry the storage brick around. Second, I should be able to dock the storage into my desktop PC, making it easy to move between machines. Third, by using the PC's larger internal storage to automatically sync (i.e., back up) the personal storage brick against the local drive, we'd get automatic backups! Yay! :)

Also, by splitting up the machine into more parts, you get more choice in portability. Need to minimize weight? Just take the screen and base unit and use it in tablet mode. Need more space? Plug in the brick. Easy.

What's crucial here is not the idea of portable drives, but the simplicity of "docking the drive" into the portable and into the desktop PC. Alternatively, instead of docking you could do a USB2 or FireWire connection, and handle the integration in software.

If we lived in a perfect world, we'd be able to just buy a storage brick from one manufacturer, the base from another, the keyboard from someone else... but that's for later. All the technology required for this already exists. All that's required is the integration work; yes, the sync software for the drive dock would be a sticking point, but certainly nothing insurmountable.

Some day...

Categories: technology
Posted by diego on January 3, 2005 at 9:23 AM

how microsoft will take over the world (according to Cringely)

A couple of weeks ago Cringely was more loquacious than usual with his article "Between an xBox and a Hard Place." He is generally entertaining and well-informed, but sometimes his machinations show an excessive fear of Microsoft. Example:

Take a long look at xBox development, the evolving PC and consumer electronics markets, and Microsoft's own need for revenue growth, and figure what that means for the xBox 3, which should appear around the end of this decade. My analysis suggests that xBox 3 will be a game system that's also a media receiver and recorder and a desktop workstation. Not that you'd use one box for all three things, but that you'd buy three essentially identical boxes and use them for all three functions. And of course you'd buy extra units for kids and spare TVs, etc. In short, xBox 3 will be Microsoft's effort to extend its dominance of the PC software industry into dominance of the PC hardware, game, and electronic entertainment industries. At that point, even mighty Dell goes down.
While there is no question that xBox 3 might be that and more, and that the IBM/Lenovo deal elements he discusses are interesting, the problem with his logic is that it assumes that nobody else does anything. "At that point, even mighty Dell goes down," because, you know, Dell will stand still for five years as Microsoft supposedly prepares to annihilate its business. So will everyone else, btw, including HP.

Right.

Never mind as well that the industry is becoming much more complex, with different products and price points and multiple devices (smartphones, anyone?), so the battle between the xBox and the PlayStation might be more of a sideshow than anyone anticipated. After all, my phone is with me 24 hours a day, while I might be in my living room only ten percent or less of that time. A place in which, I might add, the TiVos of the world will be comfortably settled by 2010. The problem with multi-function devices like that is that they have "multi-competition" too.

Or maybe someone will come out with a WiFi transmitter that can connect directly to the TV from the PC and bingo, you don't need to move the PC below the TV and paint it black, just leave the TV where it is, the PC where it is, and connect them, and everything else, wirelessly.

Oh, but wait, we're already doing that! And I think that Microsoft does understand it, certainly at some levels. So how much of the possible xBox 3 grand plans is just keeping up? Especially since we may not yet have WinFS by then...

Nevertheless: entertaining. :)

Categories: technology
Posted by diego on January 2, 2005 at 11:02 AM

ipod!

Btw, I forgot to mention this (or rather, I've been sort of 'off the grid' for a couple of days) but I won one of the iPods in the Feedster developer contest!

Many thanks to Feedster, and Scott. Very cool!

Categories: technology
Posted by diego on December 31, 2004 at 12:02 PM

a manifold kind of day

I'm about to go out for a bit, but before that, here's what I've been talking about for days now: the manifold weblog. (About my thesis work). The PDF of the dissertation is posted there (finally!), as well as two new posts that explain some more things:

as well as the older posts on the topic, plus a presentation and some code in the last one. Keep in mind that this is research work. Hope it's interesting! And that it doesn't instantaneously induce heavy sleep. :)

a subtle problem of frontpage

As an aside, I spent some time in the last couple of days doing a favor for someone who had created a website but wanted to make it look reasonably good.

Since the site had been created with Frontpage, I had to go through the usual rigamarole of removing the extraordinary amounts of garbage that Frontpage inserts into the HTML. This was problem number one.

The 'subtle' problem though, was the UI. In the process of changing the site I of course redesigned the navigation, but I realized that Frontpage was actually doing something pretty terrible: creating a bad UI.

Frontpage automatically manages the creation and maintenance of navigation on a site. You can create the site hierarchy and FP will maintain links, etc. The problem is that the UI that FP generates is hierarchical, and it doesn't really do justice to the multidimensional nature of hypertext. It is, pretty much, a directory in HTML form. In many default FP templates, sub-pages are generated with "Up" navigation links along with the rest, which is not only ridiculous with HTML but also bad UI practice because the navigation bar changes content for every page you're in.

So my question is: can't Microsoft fix Frontpage so that it a) generates simple, CSS-based HTML and b) that the default templates include well-designed hypertextual UIs, rather than what it does today?

Or does Microsoft need a Firefox-style HTML editing app that will wake up the Frontpage team, just as Firefox itself has resuscitated the IE team?

PS: Frontpage is actually a product that Microsoft acquired in 1996 when they bought a company called Vermeer Technologies. The founder of Vermeer, Charles Ferguson, wrote a book about his experience, from founding to acquisition, called High Stakes, No Prisoners, which is fantastic. Recommended.

Categories: technology
Posted by diego on December 20, 2004 at 5:58 PM

under attack

Over the last week the clevercactus site has been sporadically unavailable, and it's down right now. This means no web, no service, no emails getting through.

If you're trying to get through to clevercactus and can't please let me know through a comment or email to my personal address.

What happened is that we were attacked (I'm not sure when) and someone left a number of scripts there that are flooding the system (they do other things too, but at least one of them is clearly written simply to flood the network and disable it). This is something obviously intended to bring down clevercactus, not just a simple hacking. Why? What do they gain by bringing down the service of a small company that is going through hard times?

This kind of thing makes me sad, and is really discouraging.

I had this whole thing planned for today, getting the manifold site up and so on but now I'm going to spend time trying to see how to route around the problem for now until we can determine the extent of the hack. I don't even know how they got in yet--we constantly update our software with the latest patches. Needless to say, I'm seriously reconsidering the whole of the software I use and how to set it up so that this doesn't happen again.

Anyway. We'll see how it goes.

Categories: clevercactus, soft.dev, technology
Posted by diego on December 16, 2004 at 2:31 PM

manifold, the 30,000 ft. view

As a follow-up to my thesis abstract, I wanted to add a sort of introduction-ish post to explain a couple of things in more detail. People have asked for the PDF of the thesis, which I haven't published yet, for a simple reason: everything is ready, everything's approved, and I have four copies nicely bound (two to submit to TCD) but... there's a signature missing somewhere in one of the documents, and they're trying to fix that. Bureaucracy. Yikes. Hopefully that will be fixed by next week. When that is done, right after I've submitted it, I'll post it here (or, more likely, I'll create a site for it... I want to maintain some coherency on the posts and here it gets mixed up with everything else).

Anyway, I was saying. Here's a short intro.

Resource Location, Resource Discovery

In essence, Resource Location creates a level of indirection, and therefore a decoupling, between a resource (which can be a person, a machine, a software service or agent, etc.) and its location. This decoupling can then be used for various things: mapping human-readable names to machine names, obtaining related information, autoconfiguration, supporting mobility, load balancing, etc.

Resource Discovery, on the other hand, facilitates searching for resources that match certain characteristics, which then allows one either to perform a location request or to use the resulting data set directly.

The canonical example of Resource Location is DNS, while Resource Discovery is what we do with search engines. Sometimes, Resource Discovery will involve a Location step afterwards. Web search is an example of this as well. Other times, discovery on its own will give you what you need, particularly if the result of the query contains enough metadata and what you're looking for is related information.

RLD always involves search, but the lines seemed a bit blurry. When was something one and not the other? What defines it? My answer was to look at usage patterns.

It's all about the user

It's the user's needs that determine what will be used, and how. The user isn't necessarily a person: more often than not, RLD happens between systems, at the lower levels of applications. So, I settled on usage patterns along two main axes: the locality of the search (local/global), and whether the search is exact or inexact. I use the term "search" as an abstract action, the action of locating something. "Finding a book I might like to read," "finding my copy of Neuromancer among my books," and "finding reviews of a book on the web" are all examples of search as I'm using it here.

Local/Global, defining at a high level the "depth" that the search will have. This means, for the current search action, the context of the user in relation to what they are trying to find.

Exact/Inexact, defining the "fuzziness" of the search. Inexact searches will generally return one or more matches; Exact searches identify a single, unique item or set.

These categories combined define four main types of RLD.

Examples: DNS is Global/Exact. Google is Global/Inexact. Looking up my own printer on the network is Local/Exact. Looking up any available printer on the network is Local/Inexact.
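Restated compactly (this little mapping is purely illustrative, just the examples above in table form):

```python
# The four RLD usage patterns, keyed by (scope, match), with the
# example systems named in the text.
EXAMPLES = {
    ("global", "exact"):   "DNS",
    ("global", "inexact"): "Google",
    ("local",  "exact"):   "my own printer on the network",
    ("local",  "inexact"): "any available printer on the network",
}

def classify(scope, match):
    """Return the canonical example for a given RLD usage pattern."""
    return EXAMPLES[(scope, match)]

print(classify("global", "exact"))  # DNS
```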

Now, none of these concepts will come as a shock to anybody. But writing them down, clearly identifying them, was useful to define what I was after, served as a way to categorize when a system did one but not the other, and to know the limits of what I was trying to achieve.

The Manifold Algorithms

With the usage patterns in hand, I looked at how to solve one or more of the problems, considering that my goal was to have something where absolutely no servers of any kind would be involved.

Local RLD is comparatively simple, since the size of the search space is going to be limited, and I had already looked at that part of the problem with my Nom system for ad hoc wireless networks. Looking at the state of the art, one thing that was clear was that every one of the systems currently existing or proposed for global RLD depends on infrastructure of some kind. In some of them, the infrastructure is self-organizing to a large degree, one of the best examples being the Internet Indirection Infrastructure (i3). So I set about designing an algorithm that would work at global scales with guaranteed upper time bounds; it later turned out to be an overlay network algorithm (one that ended up based on a hypercube virtual topology), as opposed to the broadcast type that Nom was. For a bit more on overlays vs. broadcast networks, check out my IEEE article on the topic.

Then the question was whether to use one or the other, and it occurred to me that there was no reason I couldn't use both. It is possible to embed a multicast tree in an overlay and thus use a single network, but there are other advantages to the broadcast algorithm that were pretty important in completely "disconnected" environments such as wireless ad hoc networks.

So Nom became the local component, Manifold-b, and the second algorithm became Manifold-g.

So that's about it for the intro. I know that the algorithms are pretty crucial but I want to take some time to explain them properly, and their implications, so I'll leave that for later.

As usual, comments welcome!

Categories: science, soft.dev, technology
Posted by diego on December 3, 2004 at 12:41 PM

IBM puts the PC business on the block

It's all over the wire.

Aside from the historical significance of this, my first feeling about it was a bit of sadness, and thinking "No!". My second was surprise at feeling anything about a corporate acquisition! My third :) was the realization that if this happened, Apple would be alone, as far as I'm concerned, in moving the ball forward on laptops. IBM was the king of laptop evolution on the PC side, and even though Dell and HP are respectable, they've never shown huge amounts of initiative there (HP has been much better in terms of its PocketPC work, or rather, the iPaq team from Compaq was, and HP has kept the ball rolling).

So my thought had a lot less to do with the PC business itself than with the Thinkpad business. IBM desktops were always a bit clunky IMO, and, excellent keyboards aside, they didn't do much for me. No doubt the failed PS/2-OS/2 experiment in the late 80s (remember Microchannel?) had a huge effect on the vitality of the IBM desktop PC line for a long time.

But the Thinkpads! Almost all of my laptops have been Thinkpads. Best keyboards of any laptop. Simple, good design. Amazing reliability. I got a 560e in 1998 and in 2000 I gave it to my parents so they could keep using it. It still works fine; the battery only started failing two years ago, after four years of use. No doubt we take good care of the machines, but still, that's quite a long time for these things.

More than anything, what will be missed most is the innovation that Thinkpads championed. While a bit dry in terms of design (certainly Apple has been far ahead of everyone else on that count, as usual), they've always moved forward in terms of functionality. Thinkpads were the first to include 10.4" color TFTs, introduced the trackpoint device (invented by IBM; the trackpad was invented by Apple), were the first notebook to include a CD reader, pioneered ultraportables (the 560), were the first to integrate DVD, and added small but useful things like the "ThinkLight", the little light at the top of the LCD that illuminates the keyboard. That, plus what didn't make it in the long run, such as the amazing Butterfly keyboard, or LCD projection (done by removing the back cover on the display and then resting the now see-through LCD on top of an overhead projector), and other things like the Thinkpad Transnote.

Anyway. Maybe IBM will keep part of research focused on that (I hope so!). But it seems more likely that from the moment this transaction happens (assuming it does), Apple will be the main flag-bearer for innovation in notebooks.

Categories: technology
Posted by diego on December 3, 2004 at 10:11 AM

ads in rss - not as easy as it sounds?

Last week Jeremy was talking about ads in RSS and how it seems a foregone conclusion that they will, eventually, become the norm. I agree that this is more likely than not, but I doubt that today's web ad infrastructure (as understood by what Yahoo!, Google, et al. do) will be used directly.

The reason why I say this was actually mentioned by Jeremy, but not explored. While talking about the options (full text with ads, summaries without), he said:

I don't want to have to choose between ad-laden full-content feeds and the pain in the ass summary only feeds. Anyone whose ever tried to catch up on their reading while on an airplane or train gets this.
The problem with ads in RSS lies in the second sentence: "Anyone whose ever tried to catch up on their reading while on an airplane or train gets this."

Many RSS readers are web-based, and those would always work for web ads (unless a plugin is added to stop them, see below). But many, many RSS readers are rich clients, and clients will sometimes be working in disconnected mode.

"Disconnected mode" throws a wrench in the ad-serving business model, by either preventing the download of the ad, or preventing clickthrough.

If that's the case, then how do you serve the ads? You could embed them into the content, sure, but then you'd have the problems of a) showing relevant/up-to-date ads, b) measuring ad views, and c) allowing click-throughs--all of which are impossible while disconnected.

Someone might say that most people are wired most of the time, and so this problem is minimal. But I have no doubt that, were ads in RSS to become pervasive, rich clients would include a simple way of working in "disconnected mode" (and those that didn't would fall behind those that did), not to speak of the plugins that would surely be developed, both for clients and browsers, just like Adblock exists for Mozilla.

If the readers were to be integrated into the ad serving-viewing-clicking cycle (keeping stats, allowing clickthroughs, etc), then maybe things would be closer to web ads, but who is to say that users will not flock to RSS readers that will support the "ad-free" mode? Or modify their ad-friendly readers?
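Just to make the integration idea concrete: a reader in that cycle would have to do its own bookkeeping, recording impressions and clicks locally and reporting them in a batch once a connection returns. A toy sketch of that idea, with all names hypothetical (this isn't any real ad-serving API):

```python
import json
import queue

class OfflineAdLog:
    """Queues ad events while disconnected, flushes them on reconnect."""

    def __init__(self):
        self.pending = queue.Queue()

    def record(self, event_type, ad_id):
        # The reader calls this on every impression or clickthrough,
        # whether or not the network is up.
        self.pending.put({"type": event_type, "ad": ad_id})

    def flush(self, send):
        # 'send' posts one batch to the ad server; returns events sent.
        batch = []
        while not self.pending.empty():
            batch.append(self.pending.get())
        if batch:
            send(json.dumps(batch))
        return len(batch)

log = OfflineAdLog()
log.record("impression", "ad-123")   # shown while on a plane, say
log.record("click", "ad-123")        # clicked; resolved later
sent = []                            # stand-in for the ad server
print(log.flush(sent.append))        # prints 2 once "reconnected"
```

Of course, nothing stops a user from running a reader that simply never calls flush--which is exactly why the business model would have to change at least a bit.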

So even though ads in RSS might be just around the corner, I'd bet that they (and the business model behind them) will have to change at least a bit--the current way in which web ads work probably won't be enough.

Categories: technology
Posted by diego on November 30, 2004 at 6:57 AM

looking for the next big thing

So. A week has gone by with no posting. Lots has happened, but more than anything it's been a time of consolidation of what had been happening in the previous weeks. First, the short version (if you have a couple of minutes, I recommend you read the extended version below): tomorrow is my last day working for clevercactus. And that means I'm looking for the next thing to do. So if you know of anything you think I could be interested in, please let me know.

Now for the extended version.

For the last couple of months (and according to our plan) we have been looking for funding. Sadly, we haven't been able to get it. This hasn't just been a matter of what we were doing or how (although that is surely part of the problem) but also a combination of factors: the funding "market" in Europe and more specifically in Ireland (what people put money into, etc.), our target market (consumer), and other things. Suffice it to say that we really tried, and, well, clearly it was always a possibility that we wouldn't find it.

On top of this, I haven't been quite myself in the last few weeks, maybe even going back to September (my erratic blogging is probably a measure of that). By then I was quite burned out. Last year was crazy in terms of work, and this one was no different: between January and the end of July I only took two days off work (yes, literally, a couple of Sundays), and the stress obviously got to be too much. I see signs of recovery, but clearly this affected how much I could do to move the technology forward in recent weeks. Since there are only two of us, and I'm the only one coding (my partner deals with the business side of things), this wasn't the most appropriate time to have a burnout like that. I screwed up in not pacing myself better. Definitely a lesson learned there.

At this point, the company is running out of its seed funding and we don't have many options left. Even though it's possible that something would happen (e.g., acquisition), what we'll be doing now is to stop full time work on the company, which after all won't be able to pay for our salaries much longer, and look for alternatives since of course we need to, you know, buy food and such things. The service will remain up for the time being, and I'll try to gather my strength to make one last upgrade (long-planned) to the site and the app, if only just for the symmetry of the thing. Plus, you can't just make a service with thousands of users disappear overnight. Or rather, you can, but it wouldn't be a nice thing to do.

Now I have a few weeks before things get tight, and I'll use that time to get in the groove again and hopefully find something new to do that will not only help pay the bills but be cool as well. Who knows? I might even end up in a different country! As I said at the beginning, if you know of something that I might find interesting, please send it my way. Both email and comments are fine (my email address can be found in my about page).

In the meantime, I'm going to start blogging more. No, really. I have some ideas I want to talk about, and maybe I can get back into shape by coding (or thinking about) something fun and harmless.

Or, as the amended H2G2 reads: Mostly harmless. :)

russ@yahoo!

Russ will be consulting for Yahoo! starting Monday. Congratz!

Between Russ in mobile stuff and Jeremy in search (another cool move) you've got two big areas of Y! covered by great bloggers. (I wonder if someone is blogging from the services side...Y!Mail, etc.). Most excellent.

Categories: technology
Posted by diego on November 20, 2004 at 3:55 PM

the market of one

Tangentially related to my previous post, in terms of usage patterns, context, and so on, I was thinking of this notion I call "the market of one".

The market of one is... yourself. You (in theory at least :)) have the best insight into what drives you and what doesn't, what you like about something, your usage patterns, etc. It is the "eat your own dog food" concept, but with some insight applied, and for just one person.

The market of one seems crucial to me when either a) your organization is small (large companies being able to create focus groups, commission marketing studies, etc, and then being able to survive massive failures of product lines) or b) when you're doing something completely new (when focus groups aren't much help--people generally react badly to the unknown).

It's not just using what you're creating, but also asking yourself: how much am I using it? Does it satisfy my needs? Why, or why not? And so on.

When the product doesn't exist, it is, I think, the "standard way of thinking." We project ourselves and our own needs and based on that we evaluate whether we think something's good or not. For example, a lot of the disagreement over web-on-mobiles usage mentioned in the previous post comes from people transparently applying their own use-cases to what they think the product is, and then extrapolating from there.

I use this idea all the time when thinking about a product, or when designing it. But I think I haven't been as consistent in applying it during and after development, when reality takes over the grand designs that are in my head.

Somehow, I think that I keep seeing the product that I know it will eventually be, rather than what it is today. That future-vision can become harmful if it blinds you to the problems that exist today. Something to pay more attention to in the future.

Categories: technology
Posted by diego on November 18, 2004 at 11:30 AM

the web is not the browser, a year later

About a year ago (a year and three days, actually) I wrote a post titled the web is not the browser. At that point the discussion was whether "RSS usage" was "web usage" or not. Russ posted his thoughts on the Mobile Web yesterday, and there's a bit of deja-vu there, but going in some new directions. So here's my thoughts, updated. :)

The Web isn't just HTML+HTTP

Russ starts:

Now, when I say "web", you know what I saying right? I'm generally thinking HTML over HTTP and though you could probably say there's a lot of "dark content" out there on the internet - like in email, etc. it's generally not publicly accessible. The web in 2004 is the lingua franca of internet based information, I don't think there's much argument on this...
Actually, I think there is some argument on this, and Russ and I basically discussed it in the comments of that entry of mine a year ago. Russ made exactly the same argument in his comment and I replied with this in another comment:
Russ: you say the web is HTTP + HTML. Okay. However, what you type in your browser is a URL. The URL is post-web (RFC 1738 is dated Dec. 1994), and as it says in the RFC: "The specification is derived from concepts introduced by the World-Wide Web global information initiative" [Just to clarify, I mean "post-web" in that the concepts in general use today were developed after the initial web was launched]. Yes, originally "The Web" was the world wide web, HTML+HTTP. But very quickly things became intermixed. Today, you'll click on a link that is "ftp://" to download a file. FTP dates back to the pre-WWW days. Is clicking on ftp:// *not* the web too? If people click on a text file obtained through FTP and read it on their browser, is that not the web? Or on a quicktime video?

Today we often use HTTP to download files, not to see HTML, or to stream video, or audio, or Flash animations, or, yes, RSS, and it's an integral part of the "web experience". Some clients take HTML and present it in different ways "syndicating" it. So what I'm saying is that, even though at the beginning the web was indeed just HTTP+HTML, it went beyond that very quickly. Even if you only consider what you see within the browser the "web experience" includes a multitude of formats and protocols.

Purely in terms of content "weight" (storage capacity + bandwidth) I'd bet that web protocols are carrying as much content that is not text/html as content that is, if not more. As an example of what I mean, consider that downloading the full page for Russ's post on the mobile web clocks in at 35,703 bytes--34 KB or so, and it's a typical post: an image, a decent amount of text, some comments. Now, about a month ago Russ was talking about the 25 GB of downloads he saw for an MP3 he posted (size 5,649,826 bytes, or about 5.5 MB).

That single MP3 amounts to about 160 posts.

Russ generally posts once a day. So that MP3 equaled half a year of his production in terms of size (the bandwidth ratio is probably a bit lower than 1:1, though, since the MP3 doesn't get hits from search). Since Russ has also been doing audioblogging, there are several MP3s posted in his blog, which I'd wager account for quite a lot more than all the text/html content he's ever produced (even if you count images, CSS, etc). Then, elsewhere, there's Flash, Java, video, and everything else.
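The back-of-the-envelope numbers work out like this (byte counts are the ones quoted above; the 30-posts-a-month figure is just my rounding of one post a day):

```python
page_bytes = 35_703      # full page for the mobile-web post, ~34 KB
mp3_bytes = 5_649_826    # the posted MP3, ~5.5 MB

posts_per_mp3 = mp3_bytes / page_bytes
print(round(posts_per_mp3))           # 158 -- "about 160 posts"

# At roughly 30 posts a month, that's a bit over five months:
print(round(posts_per_mp3 / 30, 1))   # 5.3
```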

So, I think the assumption that most content is text/html is wrong. I don't see this as a big problem for the discussion that follows though, because mobile browsers will eventually support most if not all of the advanced features.

To mobile or not to mobile

Russ's next point is that delivering a "dumbed down" version of the web (or using some other kind of content delivery system) is wrong because a) devices are appropriate and b) people want "the whole web". I agree with the second, but not the first (not for any use-case+context at least, more on that in a bit).

Paradoxically, the title of Russ's post undermines his argument a bit. If you have to use an adjective for something, then it's different, right? Russ's vision is not a "mobile web" but "the web on mobiles". After all, we didn't start calling the web "the video web" or "the audio web" when streaming arrived--it was just the web. This is just semantics though. :)

Russ then goes through the common arguments against browsing on mobiles. These arguments generally fall into one of two categories:

  • Capabilities. These include: "The screen is too small," "Mobile phone browsers are too limited (no Javascript, no frames, etc.)," "Other platforms (Flash, Java, Brew) provide more richer experience (with less latency)," "Mobile data connectivity - even using 3G networks - has massive latency between pages."
  • Usage. "People use their mobile devices "differently" - thus need snippets of data." "Why use a mobile phone when you're not more than 10 minutes away from a PC ever," "Prefer laptops and WiFi, rather than struggling with a mobile."
To this list I'd add "Navigation" in the Capabilities section. I think the small screen is less of a factor than the fact that with most phones you have to use the keypad/tiny-joystick for input, while the Web has been designed with keyboard/mouse in mind. I think solving the navigation issue is more important (and more difficult) than worrying about latency, for example. People browsed the web with 14.4 kbps modems for quite a while, after all.

The capabilities problems might be a factor at the moment but clearly will not go on forever. If there's a need, they'll be fixed (eventually). Moore's Law and all that. So I don't see those things as a major problem.

Usage is to me the most important category, but it's generally overlooked. Capabilities matter, but assuming you fix most of them (it will be hard to get around the screen-size problem for a few years though--but even so I don't think it's a major roadblock), usage is really where the crux of the matter is, and it can easily get muddled. When Russ mentions "People use their mobile devices "differently" - thus need snippets of data," he's no doubt writing down something he's heard many times. And while the first part is hard to argue with (people definitely use their mobile devices differently), the second part doesn't follow from it. At all. Blackberry users handle quite a lot of email. Many users enjoy streaming on phones, or online games. Those hardly count as "snippets" of data. Whoever makes that statement is pushing forward their own assumptions about phones: "People use mobiles differently, and mobiles are small and puny, hence you need snippets of data."

But the use cases are different and it's worthwhile to spend a bit of time on that topic, because I think that's a big part of what's actually being discussed, though not directly, when someone says "I don't want the web on a mobile."

It's all about the use case and the context

Use-cases and context are what drive usage, not product features (the "capabilities" from above). Product features can enable new use cases, but given a certain base of devices (i.e., given today's technology, maybe looking a few months into the future at most), it's the use-cases+context that matter most IMO.

Consider the following use-cases+context:

  1. If I'm sitting at a PC with a broadband connection, it's almost unthinkable that I'd use a phone for browsing the web. In fact, if I'm sitting at a PC, it's almost unthinkable that I'd use any other device. And why would I? Everything is right there.
  2. I'm on the road, and I have a laptop that is turned on with WiFi running, and a smartphone. I'd use the laptop. Cheaper, faster, better UI, etc.
  3. Same case as before: laptop and smartphone. Only this time the laptop is turned off. The phone is on standby. How long does it take to get everything up and running on the laptop? Maybe 5 minutes? The phone, though, being always-on, is nearly instantaneous. So if I just need to check something (say, the weather) the phone is probably a better choice. But if I'm going to be online for a while, I'll probably use the laptop anyway.
  4. If I have nothing but a phone, that's clearly what I'd use. But would I use it just like a PC? Doubtful. A phone doesn't go well with "multitasking", e.g., I can't easily browse the web and talk, and check my calendar, and check a relevant email at the same time, and I've often found myself doing all of those things while in the middle of a conversation (real-world, phone, or VoIP). Phones are more of an immersive experience (speaking of phones being immersive, check this out :)).

My point is that what we think "people" will do or won't do is heavily influenced by our own experience and usage. I suspect that phones will find entrenched niches at first, things where the availability, mobility, and form factor takes precedence (similar to how game consoles have their place against ever-more-powerful PCs). If I'm carrying a laptop around most of the time, then it's unlikely that I'll see the phone as more than a stopgap measure. If, however, I only carry a phone around most of the time, the phone will gain importance.

Do I want the whole web on a phone? Absolutely.

Will it eventually become a much smoother experience? Certainly.

Does that mean that I'll (that is, me personally) stop using PC and PC-like devices, and use phones as my primary method of browsing? Not a chance.

But does that mean that nobody is going to use the phone as their primary interface to the web? Of course not.

For some people though, the use-case, or the context, or both, will be there, and maybe what makes the discussion more complex is that "some people" here means tens of millions of users. As I understand it, in Japan, for example, people use their phones to access online services more than their PCs, or at least nearly as much. Phones will grow in importance, and take their rightful place in the continuum of capabilities we have today--browsing the whole web, yes, but not necessarily pushing PCs out of the way because of that.

Categories: technology
Posted by diego on November 17, 2004 at 9:12 AM

the true story of audion

[via Frank]: The True Story of Audion. Quote:

As the kids say, upon seeing some awesome frags and/or gibs: OMFG.
Must read.

Categories: technology
Posted by diego on November 16, 2004 at 1:58 PM

the new sony ultralight pc

tinyvaio.jpg
News.com wonders if the new Sony ultralight PC (6x4 inches, 1.2 lb, 512 MB RAM, 20 gig drive) will take on the iPod. This is a ridiculous idea for a number of reasons, starting with the $2000 price tag for the same storage capacity you get on an iPod for about one-tenth of the price (or less, considering that probably 25% of the drive goes to Windows and various apps).

What I'd be wondering, instead, is whether this will finally be the ultraportable that cracks the US market, or the first of many "webtop" devices people use around the house, a kind of portable display. Ebook reading, web browsing, quick note-taking, tasks, and email would all be good uses for this machine.

Regardless, it looks fantastic, doesn't it?

Categories: technology
Posted by diego on November 16, 2004 at 12:56 PM

today's reading

The Internet as a Complex System (PDF) by Kihong Park, Chapter 1 of The Internet as a Large-Scale Complex System from Oxford University Press, and Anda's Game, a short story by Cory Doctorow. Both highly recommended :).

Categories: technology
Posted by diego on November 15, 2004 at 3:59 PM

new design time?

A few days ago Dylan changed the design of his blog, and now Russ has changed his as well. For the last couple of weeks, whenever I want to relax (or use CSS infuriation as a distraction, depending on how you look at it), I've been playing with a new design based on a newspaper-like view, but it hasn't convinced me. I guess I'll add it as an alternate stylesheet so Firefox-enlightened users can switch to it using the little gizmo that appears at the bottom-right of the window when a page has alternate stylesheets (have you noticed that one?). Maybe this weekend...
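For anyone who hasn't run into alternate stylesheets: they're plain HTML, a link element with rel="alternate stylesheet" and a title, which is what makes Firefox show the switcher. A sketch, with made-up filenames:

```html
<!-- The default, persistent stylesheet (the title makes it switchable) -->
<link rel="stylesheet" type="text/css" href="main.css" title="Default">
<!-- Offered as an option in Firefox's View > Page Style menu -->
<link rel="alternate stylesheet" type="text/css" href="newspaper.css" title="Newspaper">
```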

One thing though: new blog designs are always reinvigorating for some reason, even if many readers don't see them, courtesy of syndication. :)

Categories: technology
Posted by diego on November 11, 2004 at 6:36 PM

content, sharing, and user interfaces

A couple of days ago Russ posted an interesting entry (long, but worth the time) on what he dubbed 'communicontent':

Communicontent to me, is a byproduct of communication where traditional content is magically created. As a corollary, the forms of communication that can best be expressed as content almost naturally become communicontent. See this weblog? This is communicontent. I used to drive my friends on mailing lists crazy by writing all these long, in-depth emails. Now I just write all the same thoughts in my weblog instead. The only difference is that the viewers aren't restricted. I'm still just communicating my personal thoughts. It's communication, but because it's been captured in a fixed state to be found later, it's also content.

This is more than just the famous "user generated content." If I take a picture (content I've generated) it doesn't really matter until I decide I want to send that picture to someone. Then it becomes something different. The act of communicating that piece of content makes it more special. In practical terms, it simply adds more meta-data at the very minimum: a title, a description, a place, etc. But it also gives it an inherent value as well: I think this is important enough to send, therefore you may want to think it's important enough to take time to look at.

In general I agree that content that is communicated becomes a different sort of beast (the Google-Gmail analogy he mentions at one point is stretching it a bit, IMO). There are a couple of things I'd add, particularly about what I think contributes to the success of this type of shared content.

First, content relevance (and quality) matters, a lot. Most content people generate is relevant only to themselves and a small group; even when we blog, we sometimes (or maybe most of the time :)) post about things a lot of people simply do not find interesting. Quality has a lot to do with the kind of information you're sharing, and with the kind of device/interface you use to create it. For example, there is no way someone can write a well-thought-out argument on anything using T9 on, say, a Nokia 3650. Why? Because the interface gets in the way. Similarly, you might be able to post high-resolution pictures from your PC, but not from most phones (camera quality... network speed... ability to crop/edit if necessary).

Second, as Russ notes:

In order to create communicontent, pure content needs meta-data, and pure communication needs organization.
Considering this along with what I said in the previous paragraph brings back my recent thoughts on metadata. That is, the ability to create metadata or organization is worthless if there aren't also good ways of navigating that metadata, and vice versa. Both ends have to be covered. FOAF, in my view, has suffered from this: non-geeks have no way to make use of all that metadata and, conversely, no easy-as-pie ways to create it, which results in their limited appreciation of it.

Putting these two thoughts together, what I'd add to Russ's ideas is that the process (which includes generation and access) by which this shared content is created matters a great deal, as does the follow-up access. Both ends of the equation have to be covered, that is:

  • The content (and if possible its accompanying metadata) has to be extremely easy to create and share
  • Once content is created, the content access interface has to be adequate for its purpose
I think moblogs work because it's easy to take a picture, then (relatively) easy to post it, and then the software on the server does the rest for you (organizing photos by time, creating a slideshow, etc.), which covers both ends--and it's when both of these conditions are met that apps cross the boundary from cool to useful.

Categories: technology
Posted by diego on November 10, 2004 at 3:43 PM

the switch from berkeley db over to mysql

One of the objectives of my recent server switch was to move from Berkeley DB to MySQL (aside from an expected performance improvement, I was tired of seeing plug-ins for MT that I couldn't run--example). Plus I feel more comfortable with MySQL. I followed the instructions in the Movable Type documentation, and aside from some glitches during conversion (weird messages such as "WARNING: Use of uninitialized value in string eq at lib/MT/Util.pm line 754.") and one timeout (which forced me to restart the process after deleting the old tables), everything went fine.

One thing to note, though, which the documentation doesn't make completely clear: when you run the mt-db2sql.cgi script, you must leave in both the previous BerkeleyDB pointer and the configuration for MySQL. Once the conversion is done, you can comment out the BerkeleyDB location line. During the process I also renamed the trackback and comment scripts, to avoid overlaps.
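For reference, here's roughly what the relevant part of mt.cfg looks like during the conversion--a sketch only (database name and user are made up, directive names as I recall them from the MT docs; in MT 3.1 the MySQL password lives in mt-db-pass.cgi, not here):

```
# BerkeleyDB location: leave this in until mt-db2sql.cgi finishes,
# then comment it out.
DataSource ./db

# MySQL settings the conversion script needs:
ObjectDriver DBI::mysql
Database mtdb
DBUser mtuser
DBHost localhost
```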

Now all seems to be running fine, and simple tests seem to show that some things are a bit faster (example: posting takes about 20 seconds, as opposed to 30 seconds before).

One more thing I can cross off the list. :)

Categories: technology
Posted by diego on November 6, 2004 at 3:01 PM

it's all about simplicity

The latest of The Economist's quarterly surveys on IT is more high-level than usual, but quite good anyway. Quote: "The Next Big Thing is not a thing at all: it is simplicity." Also: an article on Complexity (subscription possibly required for the links).

Categories: technology
Posted by diego on October 30, 2004 at 2:28 PM

the return of the spammer

So today, as a background task, I did a full rebuild of the site, so that the new MT scripts were properly referenced in the archive pages and so on. And lo and behold, it only took about an hour for the comment spammers to show up again. Except this time it took only three or four clicks to select all the spam they had posted, delete it, and ban their IP addresses. I'm glad I switched over to MT 3.1, or I'd be pulling my hair out again already.

Now to figure out how much bandwidth those invalid posts are costing and how to stop them if possible...

Categories: technology
Posted by diego on October 28, 2004 at 3:14 PM

the only way to beat an ipod... is to be an ipod

Reading the news of Samsung's announcement of the first 5-megapixel cameraphone brought back some questions I had over media in current smartphones. I've had a SonyEricsson p900 for more than two months now, and this I can say with certainty: media center, this phone ain't (yet).

Considering, then, that the P900 is most definitely at the high-end of smartphones, it would be reasonable to say (even accounting for the differences) that in terms of media storage, display, and management, smartphones in general aren't there yet either.

Why, oh, why, do I make a statement that will surely enrage smartphone fans? :-)

Okay, I'll qualify the statement a little bit. Against dedicated devices, smartphones still have a ways to go. As occasional-use devices they are excellent, and hopefully they'll grow to more than that in some areas, but for day-to-day use they still lag behind dedicated devices.

Some of my reasons:

  • Poor data sync: Most smartphones today have poor data synchronization mechanisms: cumbersome, slow, erratic. Until switching, say, one playlist for another on a phone is a one-click affair (or until phones get an iPod's memory) they won't be able to out-play the pure music/media players. P900 synchronization, for example, runs at serial-bus speeds--you don't want to know how long it takes to upload a single MP3, let alone an entire album. Bluetooth connectivity in particular is quite the random affair, particularly phone-to-PC.
  • Speed/Navigation/Resolution: Scrolling through pictures on smartphones is slow. Scrolling through songs is slow. Switching functions is slow. Camera resolution is generally crappy. Audio/video quality suffers. Result: they are good for occasional use, rather than "first-device" use. Speed and resolution, however, are easier (and upcoming) fixes than navigation (i.e., usability), as the release of the above-mentioned Samsung phone demonstrates.
  • Poor built-in media players: this really applies to music more than anything else, although I still enjoy flipping through pictures on my digital camera more than on my P900. And, for example, switching through songs on the P900's media player without looking at the screen is an invitation to disaster (although I really like the scroll-wheel), while any respectable media player can be used "blind" without a problem.
  • Stability: The final straw. I can't tell you how many times my P900 has crashed while switching between the camera and other apps (for example, when a phone call arrived while it was in camera mode). While other phones might be more stable in terms of crashes, many current phones have memory issues that require restarting the phone, or getting a memory-management application, or both.
So. I think that big strides have been taken recently, and no doubt more will be taken in the future. For the moment, the only way to beat an ipod... is to be an ipod. :)

Categories: technology
Posted by diego on October 26, 2004 at 7:24 PM

(possibly) made-up proverbs for interesting times

Reading this post over at Jeremy's weblog I found myself nodding profusely, and then I noted he was pointing to this page that talks about the famous (for me and many others at least) Chinese curse (or proverb) "may you live in interesting times". It is possible that it's not a Chinese proverb at all. (Here is some more information).

As it happens, yesterday I was thinking about the various tidbits of "knowledge" that I have stored up in my head and can't trace to a source (example: "Whales can use an ultra-low frequency signal that has world-wide reach, but by now it's likely that the interference caused by ships all around the world precludes them from doing so"). First I wondered about the social mechanisms that allow these memes (whether true, partially true, or false) to spread and take hold (e.g., do they fit widely shared preconceptions?). Then I tried to assign them to some sort of "unchecked" mental category that would imply a proper reference lookup before using them (or whenever I felt like it).

Of course, right after thinking about that, I remembered the following piece of dialogue from Annie Hall between Alvy Singer (Woody Allen) and Annie Hall (Diane Keaton):

Alvy: I've got to see a picture exactly from the start to the finish, 'cause ... 'cause I'm anal.
Annie: Ha! That's a polite word for what you are.
Anyway.

Now it seems I should add "may you live in interesting times" to that list as well.

Ain't that something.

Categories: technology
Posted by diego on October 25, 2004 at 3:17 PM

what the bubble got right

[via Sam] Another link that has, quite possibly made its rounds: Paul Graham, What the Bubble got right. A good read.

Categories: technology
Posted by diego on October 25, 2004 at 2:17 PM

rfc 822 dates in movable type

Yesterday, when I migrated to MT 3.12, I remembered that I was using a plugin to generate RFC 822 dates, which MT didn't seem to be able to do. I had arrived at this solution a year ago (damn! a year ago? What the... anyway) because the basic $MTBlogTimezone$ in MT generated timezones in the format +HH:MM, while RFC 822 requires either timezone abbreviations (e.g., GMT, PST, etc.) or a numeric offset without the colon. But yesterday, while thinking about reinstalling the plugin, I rechecked the MT docs for tags and noticed that since MT 2.5 there has been a no_colon attribute for $MTBlogTimezone$ which removes the colon and thus matches one of the possible ways of specifying a timezone in RFC 822 dates, as +HHMM. The result is:

<$MTEntryDate format="%a, %d %b %Y %H:%M:%S"$> <$MTBlogTimezone no_colon="1"$>
which gets you RFC 822-compliant dates in your RSS 2.0 feeds without requiring any plugins. Now that I know what to search for, I can see that this is a well-known solution, but maybe it's not as visible as it should be--it certainly wasn't obvious to me when I looked a year ago.

Now, this solution applies to MT from 2.51 and up. MT 3.x introduced a specific RFC 822 format (basically a shorter way of specifying the tags shown above), used as follows:

<$MTEntryDate format_name="rfc822"$>
Which makes the whole thing a lot easier, and which is included by default in the MT 3 templates. Cool.
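For anyone generating feed dates outside MT, the same no-colon observation applies elsewhere too; here's a quick sketch in Python (the timezone offset is just an example, not anything from MT) showing that strftime's %z already emits the +HHMM form RFC 822 accepts:

```python
from datetime import datetime, timezone, timedelta

# Example offset only; use your blog's actual timezone.
tz = timezone(timedelta(hours=-8))
dt = datetime(2004, 10, 24, 21, 0, 0, tzinfo=tz)

# %z emits the offset as +HHMM, with no colon -- the same form that
# MT's no_colon="1" attribute produces for $MTBlogTimezone$.
print(dt.strftime("%a, %d %b %Y %H:%M:%S %z"))  # Sun, 24 Oct 2004 21:00:00 -0800
```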

Categories: technology
Posted by diego on October 24, 2004 at 9:00 PM

all systems go

Phew!

I'd say that the process is basically finished. I'm sure that things will keep popping up over the next few days, but most things seem to be working.

The last thing I did today was upgrade to movabletype 3.12. The new comment moderation, along with other comment management features and dynamic pages, made it a good option for me. There was a little snafu when upgrading the database, but it was a combination of me reading the docs wrong and the mt-upgrade script telling me that everything was fine when it clearly wasn't (the DB was getting trashed). A short (and surprisingly quick) exchange with MT support made me realize my mistake (I had to do two format upgrades in a row, first to 30 and then to 31), but aside from the obvious thought that the upgrade script should be a bit smarter about this (it has to be a common occurrence) it was a pretty smooth experience.

I made some measurements on the old machine: posting an entry (i.e., individual rebuild, plus related indexes) was taking 3:30 minutes (yes, that's three and a half minutes). Unbelievable how we get used to things and we stop noticing them. With the new machine it's taking about 30 seconds. I would expect (hope?) that after the MT 3.12 upgrade this will be even faster. We'll see.

So--to test the new new thing I'm reopening comments in this entry and re-enabling them on the weblog, with comment moderation turned on. Let's see how long this lasts! :)

Categories: technology
Posted by diego on October 23, 2004 at 10:51 PM

If you're reading this...

...then it means that you're accessing my weblog on my new server. The DNS switch will take a couple of days to propagate fully, but it's my experience that almost everybody sees the changes within 12-18 hours.

Finally! I started a few hours ago and by now it's mostly done. Had a few interesting experiences which I'll talk about more later. I still haven't finished moving everything over, since Eircom (my home's ISP) still hasn't updated its DNS tables (I'm posting this through an alias, and not everything works).

After the transfer is done, it will take me a bit to verify that all is well and backups are in place (just in case...) but everything should be back to normal by early next week.

Finally: if I haven't replied to an email you sent recently, wait a bit more (or send it again). The migration implies changing email systems, and that will take a bit to stabilize as well.

ps: I'm still a bit sick: my left ear feels clogged occasionally, which is extremely disorienting and quite a pain, and I'm generally pretty congested, but definitely feeling better. Funny how easily we forget how good it is to feel normal.

Categories: technology
Posted by diego on October 22, 2004 at 7:00 PM

google desktop search: not yet for me

This is one of those things that has already burned its way through the blogosphere, but anyway...

One of the few things I did (was able to do) yesterday was install the recently-released Google Desktop Search.

Why? Two reasons.

Number one, I wanted to try the latest new new thing.

Number two, I wanted to find a particular document based on a particular set of keywords, and I was hoping GDS would make that easier. I could wait for a few hours for it to index my hard drive.

The download was quick. The installation was a snap. It apparently integrated with Firefox automatically (it wanted to close it before installing), but that wasn't mentioned anywhere. Whatever. Fine. I could see that it was a personal web server. Good solution, nice and seamless integration with Google web.

But I couldn't use it yet, because then there was the wait for Google to index some 60,000 "documents" it eventually identified in my machine. I honestly don't know how long it took--I left it running all day, and when coming back in the evening it was done, so 6-7 hours max.

Before searching, I tried closing it to see what would happen. It complained that if I closed it, any new files created or viewed (web) during that time would not be indexed at all. "Really?" I thought. "That seems kind of harsh. Anyway...". I didn't close it.

Finally I was ready. Double click on the taskbar icon. Search comes up. Type in keywords.

Scan the results.

Garbage. Images, mixed in with documents. Some things given precedence over others because, apparently, the keywords were in the directory name (I'm not sure, though). A wave of disappointment.

Hm. Of course that makes sense: without hyperlinks to provide rank, it becomes more difficult to find what you want.

Okay, so I started trying more accurate keyword sequences. Different combinations.

Time after time, garbage. After a while, I started getting confused about data I thought I knew. Hm.

I had to reply to email, etc. so I kept working on other things. I had to do web searches, which now, suddenly showed up with a first result that pointed to my own hard drive search results for that term. I mistakenly clicked on them. I did this a couple of times without looking--it took me a while to figure out what the hell was happening, and every time I ended up being left in the mess of my local result list.

It didn't take long before I simply changed my default Firefox search engine to A9.

Early this morning I uninstalled GDS. But guess what. I just realized that A9 is still my default search engine, and I'm starting to get used to the features I was mentioning the other day. Plus, I was browsing through Amazon and I discovered that using A9 gives you a small discount when ordering. Hm. Suddenly I might switch over to A9. I'm still not sure. I've switched back and forth before, but A9 doesn't seem to be as annoying as Yahoo! was with its ads.

So I started off yesterday as a Google user trying Desktop Search, and a day later I'm neither a user of GDS nor of the web search? I thought: What the hell...?

I ordered the neuron to do some thinking on the subject and after a short argument, this is what I got to.

Clearly, this is, in part, because of the data I've got. I have many "subtopics" that I work on which contain references, ideas, texts, drafts... it becomes difficult to separate the tree from the forest (or whatever). But other search tools I've used on my data, such as Enfish, have never presented me with the seemingly chaotic results that GDS was showing.

So my neuron came to the conclusion that what was happening was that GDS was too adept at finding information, but, unable to discern proper ordering, it was actually making things worse. Ok, Google isn't God. We all make mistakes. No problem.

But why did I leave Google Web so quickly, and worse, almost without realizing it?

It seems to me that the answer is simple: integration. Google has gone to great lengths to make the local search experience a seamless continuum with the web search experience, and vice versa. In principle, it's a great idea. However, I trust Google Web to deliver good results. When GDS provides the same experience, I expect the same results. But that doesn't happen (and it's doubtful that it ever can). So suddenly I don't trust Google Search, the experience in general. The web, which is now integrated with desktop results, gets dragged down into the mud by the crappiness of the desktop search results.

Consequence: Google loses a user for GDS. And then, because of the seamless integration, for all its properties. Even if that's not the case, the web results are now tarnished by being equated with the local search results.

It is important to note that this is how I interpret my own actions since yesterday, things I did more or less "subconsciously". I'm not saying this would be a conscious thought process...

I also note that the GDS design integrates with the web design in a way the previous web design might not have been able to do--not cleanly, at least. (Am I wrong?). This would suggest that GDS has been in the works for some time--my point being that those journalists who say that GDS was "rushed" are wrong.

Okay.

Keep in mind my only functional neuron is heavily congested, so it's possible I'm missing something, but I think it may be a mistake on Google's part to take the fight so obviously and directly to the desktop. That said, everything Google does is so scrutinized that it's probably impossible to make this a "low key" release.

The desktop is Microsoft's turf. The Web is Google's. I was kind of hoping that Google would behave differently than others in the past and just keep on moving into other markets, rather than retreat (at least partially) to fight into Microsoft's. GDS, together with Picasa (and the ever-present rumors of a Google Browser), seem to indicate that it's going to be a mixed thing at best, with Google leaning on its web side to pull in desktop functionality and users with integrated "seamless navigation" features (e.g., Picasa is to Google Image search what GDS is to Google Web Search, and so on).

Anyway. It was an interesting experience. Given that it was a smooth install/uninstall cycle, I'm certainly willing to give it another try when a new release is out.

PS: My experience was similar in some respects to Don's, but it never got that bad. He noted: "I don't enjoy writing code inside a jet engine". LOL. Also, Jon discusses Firefox integration and points to alternatives. Dave makes a good point about user formats (since one would expect that other major formats, such as OpenOffice, would eventually be supported), and about exposing an API (which Jon points out could be done by "reflecting" content in a format the indexer understands, in this case XHTML). Kevin ponders a Lucene-based answer, and Om compares it to Blinkx as well as pointing to other reviews.

Categories: technology
Posted by diego on October 20, 2004 at 2:30 PM

GooglME

Another cool piece of software released while I was away is Erik's GooglME, which he updated to 0.2 beta last week: a J2ME app that provides a front end to Google's SMS service.

The MIDP 2.0 version installs and launches successfully on my P900, but then it throws an error--to be expected, since the P900 isn't yet on the list of tested devices. (btw, if you have tried the app on another MIDP-enabled phone, let Erik know the results, as he notes here).

Categories: technology
Posted by diego on October 20, 2004 at 11:57 AM

adblock (and other Firefox extensions)

A must-have Firefox extension: adblock. Not only does it improve reading content online -- it also reduces bandwidth usage. I use it with sites that I pay a subscription for and that still insist on bothering me with ads, or when the ads have become so large that they cover a quarter of the browser's window (otherwise, I allow ads, since it's clearly how a site pays its bills, something we all need to do).

Other Firefox extensions I use regularly: EditCSS, JSLib, Venkman and WebDeveloper.

Update: Luke notes, via email, that adblock does not save bandwidth--it only hides or removes the elements from the display layer. Good point, and thanks for the correction! I wonder how hard it would be to add that feature... it would probably still require downloading in some cases, to be able to calculate the width and/or height of what you're not including (if the size is not available in the tag) to maintain the layout...

Categories: technology
Posted by diego on October 20, 2004 at 9:30 AM

just. blog. it.

A couple of days ago Dylan released JustBlogIt, a Firefox extension to allow right-click posting to a weblog. It's excellent (yep, using it right now). The only thing missing is the swoosh!

ps: it's also missing the ability to show the "preview" button in movabletype. But that's a minor detail and probably a movabletype issue more than anything.

Categories: technology
Posted by diego on October 19, 2004 at 8:33 PM

more than just words and hyperlinks

Sometimes a post, succinct or not, leads in so many interesting directions that it deserves a category of its own for thought-provoking ability alone--the post, and the info to which it links, set off a storm of ideas in my head that takes a while to get under control. Today's examples are Dare's Why I love XML-DEV and Scott's the spirit of startups past. Most excellent.

Categories: technology
Posted by diego on October 2, 2004 at 10:27 PM

a9

Amazon's A9 search engine has been adding some intriguing new features. "I like, I like!" :)

I've been actually musing (in my head) about personalization and search during the last few weeks. This adds some more datapoints. When the ideas are organized to a minimal degree, I'll write something up.

Categories: technology
Posted by diego on October 2, 2004 at 10:19 PM

craigslist

[via Karlin]: craigslist arrives in Dublin. Cool!

Categories: technology
Posted by diego on October 2, 2004 at 10:15 PM

per-coffeemug is better

Don is thinking about filing "per-chair, per-desk, and per-floormat software pricing methods" in response to Sun's per-employee software pricing patent.

Don, patents are serious stuff, and you're foolish in not taking them seriously. I wish you would. The growth of patents such as these should be not just accepted but applauded. I am thus compelled to reply. If my language appears overly circuitous, it is only because I avoid vitriol through an overextended misuse of formalisms. Which also excuses this redundant explanation.

I was saying.

Patents provide "glory" as the News.com article says, and this is for a good and simple reason: patents are the work of Heroes. Paraphrasing attorney-at-law Lionel Hutz: "I don't use the word Hero very often. But patent applicants are among some of the greatest American heroes of all time."

Let me, Don, also point out some of the problems in your childish proposal.

First, floormats are not common in software development environments.

Carpets, maybe. But not floormats.

And something else: what is up with static electricity and carpets? It hurts when I touch metallic stuff and the spark or whatever goes off. I wish that would stop. Can't we invent a system to make electricity only come from some kind of socket on walls or something?

But I digress.

Second point: many programmers share desks. How are you, Don, going to get around that? How about when a chair breaks?

Third, I am already filing per-coffeemug patents which will box in your per-chair and per-floormat patents in no time.

Per-coffeemug is also superior to per-employee, which is login based. Users are very protective of their coffee mugs, more than they are of their login/password combinations.

I will soon have the world-wide market on software pricing patents cornered. I will then sell the license at a fair price (reference for that can be the price of a box of Windows XP for example) and use the proceeds to purchase a coconut farm on an island in the Caribbean (preferably one that comes with monkey-butlers).

PS: I am also patenting this blog entry. Just in case.

Categories: technology
Posted by diego on October 1, 2004 at 2:58 PM

comments?

Related to my comment spam problems (and the drastic solution I "implemented" of simply disabling the script), Toni recently pointed specifically to scode. I had looked at something like this before (and it led me to consider something similar, but with a random text string in a random hidden text field included in the form, instead of an image). The problem with an image-based solution is that it increases load on the server (every static page calls a script), and I don't want to do that until I switch servers (any day now! :)) since this server is at its limit already.

Then again I haven't looked for other solutions since late August. I need to get this done eventually -- at this point I am probably just being lazy. Or "spring feverish". :)
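As a sketch of that hidden-field idea (purely my speculation, not how scode works; every name here is invented for illustration): embed a keyed token in a hidden form field when the page is generated, and have the comment script reject posts whose token is missing, forged, or stale. No image, and no extra script call per page view.

```python
import hashlib
import hmac
import time

SECRET = b"change-me"  # hypothetical per-blog secret


def make_token(ts: int) -> str:
    """Token to embed in a hidden form field when the page is built."""
    return hmac.new(SECRET, str(ts).encode(), hashlib.sha1).hexdigest()


def comment_allowed(ts: int, token: str, max_age: int = 3600) -> bool:
    """Reject comments whose hidden token is missing, forged, or stale."""
    if time.time() - ts > max_age:
        return False
    # Constant-time comparison, so the token can't be guessed byte by byte.
    return hmac.compare_digest(make_token(ts), token)
```

A bot that only knows MT's default form layout never sees the token, while a human posting through the actual page submits it automatically.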

Categories: technology
Posted by diego on October 1, 2004 at 1:45 PM

airborne madness, two years later.

This post I wrote in September 2002 could have been written two days rather than two years ago. I can't easily describe how ridiculous I find the restrictions airlines place on electronic devices because they might "interfere with navigational systems" and such. Every time I travel there seems to be a new device or system that is "dangerous." An example from my recent trip to the Netherlands: you couldn't use any "laser-powered device, such as a CD player." Never mind that a CD player is not "laser-powered"--and yet you could use laptops, which generally have CD or DVD drives. But laptops are OK.

Right.

Cellphones have long been a no-no at any time during flight (only recently have some airlines started to let you use them with the doors open at the gate; others still don't, even though they fly the same airplanes). Now, in a couple of years there will probably be cellphone service on airplanes. Aha. Having cell service on the airplane, with everyone using their phones, doesn't interfere. Giving you WiFi in the plane is also A-OK. Turning on a phone on its own, however, is quite dangerous. I get it.

Anyway. I wonder how they will spin the fact that WiFi & cell service on planes creates way more interference than a single device ever would.

Categories: technology
Posted by diego on September 30, 2004 at 11:10 AM

firefox's live bookmarks

Don delves into a feature in Firefox 1.0: Live Bookmarks. I only installed Firefox 1.0PR a couple of days ago, in the process of doing a clean install on my notebook (by now my various machines have various versions of Firefox, the oldest of which is 0.7), and I noticed the Live Bookmarks but didn't pay much attention to them: I didn't find the UI very inviting (I read too many blogs to make it practical to see headlines there, and linear menus were NOT designed to hold that many items and be usable in any way, shape or form). From what Don describes, the implementation would limit their usefulness as well. But I've noticed other "sudden" UI changes too, which made me think that maybe the Firefox team was coming down with a case of featuritis. Hopefully that won't happen.

Categories: technology
Posted by diego on September 17, 2004 at 9:54 PM

holy service pack batman!

Yeah, well, I still can't get that ridiculous image of Batman @ Buckingham out of my head, so I'm writing it in somehow.

A few days ago I installed Service Pack 2 on one of my PCs, the idea being that I could see whether it would interfere with share, etc. The actual installation went all right (it took a while to download & install, though).

On the first boot after the install finished, I was asked to configure the "security settings". Since I rely on an external firewall and have all the dangerous stuff disabled anyway, I left this disabled as well. After that, the login screen appeared.

I logged in. And waited. And waited. Icons drew themselves painstakingly across the screen, as if the millions of tiny drunken leprechauns that are responsible for the behavior of Windows were suddenly more inebriated than usual, or maybe were away at a security seminar and thus unavailable to paint icons or do their usual tasks.

Eventually the leprechauns decided to get to work, but slowly. So I rebooted, hoping that things would get fixed that way. No luck. The second time, login took even longer.

I was sort of resigned to things not really working anymore, and thinking how I was going to uninstall this piece of crap, when a dialog box distracted me.

It was from one of the McAfee applications.

Now, backtracking for a moment. I purchased McAfee VirusScan a few weeks ago when my Norton license expired. I had fond memories of VirusScan from many years ago, when it was a simple program that included nice command line tools. Norton AV has become a monster in recent years, installing all sorts of background services that do NOTHING useful except take up resources and interrupt whenever it's most inconvenient. (Note: these services usually are somewhat useful when you operate in an all-MS environment; i.e., Office, Outlook, Messenger, etc., and you have no idea what you're doing). Be that as it may, I had gotten tired of Norton's heavy-handed approach and decided to try something else, so I remembered VirusScan.

To be honest, I was a bit wary. McAfee had been through a number of corporate acquisitions and mergers and "refocused" on the corporate market (which is usually code for "we will now be able to push garbage down our customers' throats, only now 20,000 seats at a time"). But I really had good memories of VirusScan and thought, "really, how much could they have screwed it up?".

As it turns out, the answer is "quite a lot". VirusScan 8 is one of the worst pieces of software I have ever seen. Difficult to register for. Difficult to install. Difficult to set up. Slow. In short, do not buy VirusScan. Norton is much better (if I find a simple product that just scans for viruses and doesn't try to set up my PC as if it was NORAD, I'll let you know :)).

And, most of all, VirusScan is intrusive as hell. You wouldn't believe how many times I've been interrupted in the middle of typing by the stupid VirusScan notification window telling me that "it has downloaded an update" and asking if I wanted to "continue with what I was doing."

I know that my topic was supposed to be XP SP2, not VirusScan, and that this appears to be too hyperbolic even for me, but I am getting somewhere.

About a week before I installed SP2, one of the VirusScan updates installed something called the "McAfee Security Center", which is basically a fancy control window (probably ActiveX-based) that tells me that the security of my computer sucks because I don't have any McAfee products installed. I am completely serious about this: McAfee is telling me that my computer is not secure simply because I don't have their software (i.e., their firewall, antispam, etc.) installed--irrespective of whether I have other software or other solutions in place. When this appeared I disabled all the automatic services except the virus definition update and promptly forgot about it (naturally, the thing kept updating itself, but it wasn't too bad).

Okay, so after SP2 is installed, and the second time I reboot, while I am wondering how to get rid of SP2, the following dialog box pops up:

mcafee1.png

Since I am surprised by the dialog box, I read it carefully. And even though I read it carefully, I am still not sure of the implications of this action. Keep in mind, I have just installed SP2. I haven't had time to see any of these features. As far as I know, there's nothing in Windows called "Security Center". And yet McAfee wants to replace the default with its own (which I am barely aware exists as it is).

Reasonable person that I am, I decide that no, I will not let McAfee take over the Windows Security center, since I want to see what this is, and maybe later I'll set it up like that. So I press "No" in the dialog above.

Naturally, selecting "No" then triggers the following dialog:

mcafee2.png

Look at the text of the second dialog carefully.

Notice that, again, there are the options "Yes" and "No". However, because of text that can only be described as designed to deceive, the meaning of the buttons is inverted from what it was in the previous dialog. Normally, if you require double confirmation, you'd say something like "are you really sure? Y/N". The McAfee guys, counting on the fact that you've made up your mind and will probably click "No" again, simply invert the behavior of the buttons on the second dialog, so that you do what they want, no matter what.

Let's recap for a moment.

There are new security settings in SP2. Claims from Microsoft about "improved security" notwithstanding, as far as I can see the main improvement in this pack is that Microsoft is bundling all sorts of apps that until now have been third-party apps, such as adaptive personal firewalls. Additionally, they have disabled a bunch of stuff that is "dangerous", thus taking the path of "if something has a security hole, disable the feature, instead of fixing the hole." (Granted, when the problem is the design, "fixing the hole" is much harder, but that's not an excuse in my book).

Anyway, it is clear that to appease third-party vendors (such as Symantec and McAfee) Microsoft has included an API of some sort for this Security Center stuff. And obviously the poor third-party companies do not want users to use the Windows defaults.

So they resort to terrible UIs and behavior like what I described here. They have to be both devious and in-your-face, so that you are not silently taken away to the MS-bundled elements over time, and you remember that it's McAfee (or whoever) that's protecting you, rather than Microsoft.

I think that most non-technical users will find most of this stuff completely confusing, and they will either a) end up with a machine where nothing works, without knowing why, or b) end up disabling all the security.

These are several bad effects that are all clearly tied to the implementation of SP2 (and dependent to some degree on the different products that people already had). Bad UI. Confusing features. Software that's difficult to use, and crippled so that it's "secure".

I am sure that Microsoft can do better.

PS: In the end I did leave SP2 installed. After a few more reboots, the drunken leprechauns magically started to work properly again, and that was that. The only other strange thing was that Windows Messenger refused to start on reboot, and has been behaving erratically since I upgraded (when I say "erratically" I mean exactly that. Yesterday, for example, it kept showing its right-click menu no matter what I did anywhere on the screen. I had to kill it. Today, nothing's wrong. Two days ago, there were two taskbar icons for the same window).

And yes, I have scanned for viruses, just to be sure that the source of the weirdness is not something else. :)

Categories: technology
Posted by diego on September 13, 2004 at 4:58 PM

dasher

dasher.pngLast Saturday, during a conversation with Santiago he showed me dasher running on his Linux laptop (Here's an article on it from The Economist).

It is amazing. At first it is relatively weird to use, but that feeling goes away quite quickly. And then you start "typing" away without problems (I haven't tried adding a training file to it though, the basic package worked well enough for simple testing). But you won't really know until you try it. Go check it out--they have binaries for most platforms.

If this isn't squarely part of the future of typing on mobile and input-restricted devices (aside from Speech/Handwriting recognition and gestures), then I don't know what is.

Categories: technology
Posted by diego on August 24, 2004 at 6:36 PM

live from EuroFoo

I just realized that in the madness of the last few weeks I neglected to mention that I had been invited to EuroFoo. I got here yesterday evening after a ten-hour trip (there were no delays, but since it was bus+plane+train it was nearly inevitable that it would take a bit).

I spent some 24 hours in electronic isolation, between the trip and subsequent rest. It was good.

Now there are about 50 of us gathered in what I'd say is the restaurant of the hotel that (I heard this morning) we've basically taken over. The hardware/human ratio is something to behold. Powerbooks are the majority, or nearly so.

We should "officially" start any minute now. More in a bit.

Categories: technology
Posted by diego on August 20, 2004 at 3:41 PM

'comments off for now' cont'd.

Email I got (thanks all!) regarding my post yesterday comments off for now pointed to a number of solutions, most of which I knew. I neglected to mention those yesterday, so here they go:

  • Adrian pointed me to this entry on his blog in which he describes a solution similar to what I was discussing yesterday--adding a field to the form, while what I was suggesting was to change the names of the fields that already exist. Combining extra fields with field name changes and a script name change should be a pretty good deterrent, I think.
  • To close comments after a period of time, Tima's mt-closure is good but as far as I can see it rebuilds every entry it changes the values on, which is a problem with my slow machine. Ideally, it would save the values and then perform a complete rebuild, similar to what Tima's own mt-rebuild does.
  • Jeremy's mysql-compatible autoclose comments script--there are a couple of others with similar functionality, and in any case I haven't been able to try them since I use Berkeley DB.
  • Some suggestions centered around mt-blacklist, but I think that "blind" (i.e., depending on blacklists that are widely distributed) blacklisting of any kind (IPs, terms, etc) is a bad, bad, bad idea, inelegant to the extreme, and not very effective. If any proof is necessary, it should be enough to note that blacklisting has been done for years on email. And we can all see how well it has worked...
  • Bayesian filters, such as mt-bayesian are another alternative but again they are error-prone.
When I read all this I realize it seems like a long list of complaints. Clearly these solutions all have something to offer, but the reason I haven't started using the ones that have been around for a while is simply time. While I might not be sure of which solution is best, I am sure that I don't want to install a solution that not only doesn't work all the time but then demands more work to solve the edge cases, however few they may be, or that mixes legitimate and illegitimate comments. That is, in my view, any solution should never mark a legitimate comment as spam, while it may mark a spam comment as legitimate. Closing comments is sort of an orthogonal approach to spam detection or prevention, and I haven't used it for reasons unrelated to the script (i.e., machine speed).
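The "save the flags first, rebuild once at the end" behavior I'd want from an auto-closer can be sketched like this (a hypothetical entry structure and a 30-day cutoff of my choosing; this is not mt-closure's actual code):

```python
from datetime import datetime, timedelta

CLOSE_AFTER = timedelta(days=30)  # assumed cutoff; tune to taste


def entries_to_close(entries, now):
    """Return the entries whose comments should be closed, so all the flags
    can be saved in one pass and a single full rebuild run afterwards
    (instead of rebuilding per entry, which is what hurts on a slow machine)."""
    return [e for e in entries
            if e["allow_comments"] and now - e["posted"] > CLOSE_AFTER]
```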

Okay, enough with the rant. This is all useful information for that moment at some point in the (near?) future when I will have time to deal with this properly.

Categories: technology
Posted by diego on August 17, 2004 at 5:06 PM

Internet != internet

Wired has announced that they are ditching the capital 'I' in Internet [via Dave] as well as the capital W in Web and the capital N in Net. While I don't have an opinion about the Web and Net cases, I think that the case for Internet is different.

An internet (lowercase 'i') was initially defined to be a connected set of (potentially different) networks, with the Internet (uppercase 'i') being the internet of all internets. So Internet is a particular name for a particular abstraction. True, this might be a definition of historical interest more than anything, one that points specifically at the Internet as a construct, whereas Wired is looking at the Internet as a medium. A medium implies a certain homogeneity, which, while true in practice in most of the Internet today, is not what the initial use of the term "internet" implied, is not true for research edge networks (and some commercial networks) connected to it, and was not true at the beginning, when no single protocol had yet won over others as a standard.

In any case, I assume that in technical terms we will continue to make the distinction, since, strictly speaking, an internet is a subset of the Internet. :)

PS: Check out the various references in the History page at the Internet Society for more detail on the early terminology and naming.

Categories: technology
Posted by diego on August 17, 2004 at 4:13 PM

comments off for now

So I had planned to blog a bit 'for real' today but maybe that time is past: I spent most of my "alloted" blogging time (and then some) deleting spam comments. Twice in the last week d2r has been under what has to be described as a massive spam attack. A bot systematically going through every page (loading information with a Mozilla client ID so that it looks as if there was a read before posting a comment, and probably scanning the page for the comment ID) and then posting garbage to up the ranking of whatever crap of the day they're selling.

The way to stop it has been to simply remove the comment script for now while I look for a solution. I've found several, mostly directed towards MySQL backends (which I don't use--maybe I should), but whenever I try them something doesn't quite work. Also, I don't want to spend much time looking for a solution (probably switching to a faster server is part of that).

One thing I was thinking is that these scripts obviously have to rely on standard MT configurations to be effective. This means field IDs, form names, things of that nature. Before, I could stop them by changing the name of the comment script, but they have (predictably, I might add) adapted to that by scanning the page before posting. But if the comment form used non-standard names for the fields, as well as pointing to a different URL, then I think the only way to post a comment would be from the webpage itself, by a person who can recognize the form, since the elements that let scripts recognize it automatically wouldn't be there. I recoil at the idea of digging through the MT sources to find that, but maybe I'll do it. Certainly MT could come with a screen to configure the names for your setup; that way every blog would have its own form names and format, and it would be, I think, quite difficult to post comments automatically.

A simple concept, but I think it should work, no?
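To make the idea concrete, here's a minimal sketch of deriving per-blog form field names from a site secret. All the names here are invented for illustration--MT has no such feature, which is exactly the point:

```python
import hashlib

def field_name(secret: str, base: str) -> str:
    """Derive a per-blog form field name from a site secret.

    A bot hard-coded against the stock field names ("author",
    "email", "text", ...) would no longer find them; only a
    scraper that parses each page's actual form would succeed.
    """
    digest = hashlib.sha1((secret + ":" + base).encode()).hexdigest()
    return "f_" + digest[:8]

# Each blog picks its own secret, so every installation ends up
# with different field names in its comment form.
secret = "my-blog-secret"
form_fields = {base: field_name(secret, base)
               for base in ("author", "email", "url", "text")}
```

A bot hard-coded against the standard names fails outright; one that scrapes the page first still gets through, so this raises the bar rather than eliminating the problem.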

Categories: technology
Posted by diego on August 16, 2004 at 8:57 PM

the new phone

p900-1.pngSo about a week ago we got a couple of smartphones to start doing some work on them and to understand their capabilities better: a Nokia 6600, and a Sony Ericsson P900, which I got.

While I understand the technologies, capabilities, etc, and have played with them in emulators, I haven't actually owned a smartphone until now. I'll write a more detailed review later (I wanted to play with it for a reasonable amount of time first), but here are some first impressions.

As far as phone functionality goes, the P900 is quite good. The transition from other apps to the phone is a little awkward but manageable. It comes out of the box with miniheadphones and a microphone, which is crucial considering the phone is at first a little too big to use as a normal phone (I got used to it later, though).

Then there's the connectivity: Internet and Bluetooth (why oh why doesn't it come with WiFi!). Opera comes on the tools CD, which is nice, but you actually have to install it. Once everything is set up (and I'll dwell on that particular point in a later entry) it works fairly well, usable in many situations, but I don't even want to think how much money it costs to view a simple webpage. Using mobile versions of things like the Google for Palm site helps a lot. Too bad it's not so easy to find sites that support that (Opera, btw, is amazing at fitting regular websites into such a tiny screen without completely destroying the design). Bluetooth works fine, but it's much too slow for file transfers (this is probably due to the phone rather than Bluetooth's intrinsic speed). The phone also comes with a USB cradle that doubles as a charging station, which is also good, but its transfer speed is bad too, which confirms that the phone is the bottleneck.

Storage: this phone uses the braindead Memory Stick, and Sony in all its wisdom has released a rash of incompatible versions (Regular, Duo, Pro, Duo Pro...). The P900 supports Duo, which is limited to 128 MB--OK for a phone but not for storing media. This is quite a limitation, and an unnecessary one IMO.

Software: a decent list of basic apps, and of course access to a lot of Symbian apps. Only once, after I'd been playing with it for a while, did it complain of "low memory" even though I had very few files open. Had to be a memory leak: cue Purify for the Symbian guys. A restart cleared the problem. I also got Task Manager, which has turned out to be a critical tool for looking at running apps, closing them, etc. I don't know if there's a better solution than this.

Finally, media: built-in camera for video and stills. Quality is decent, but I won't be leaving my digital camera behind any time soon. The phone can play MP3s, but the player software that comes with it is a joke (I haven't even found a way of creating playlists), and I haven't yet looked for better software--I don't even know if there is better software. This, coupled with the slow transfers makes it hard to use the phone as your regular MP3 player, but barring any other option it does the job.

Overall, I'm pleasantly surprised on many levels, and a little disappointed at its media handling capabilities (considering this is one of the highest-end devices running Symbian--if not the highest-end--I imagine the situation is similar for most other smartphones). I am probably being unfair in comparing it with devices that only perform a certain function, though. Then again, I don't think it's meant to replace those devices yet.

I got all the necessary tools to write code for it, but I haven't done anything with them--too busy. That'll have to wait for the next few days, or even next week. :)

Categories: technology
Posted by diego on August 15, 2004 at 12:36 PM

feedster v2

Scott announced yesterday Feedster version 2. Excellent. It's now much faster and with several new features. Congrats to Scott & the Feedster team!

PS: by the way, I have a question: what does the version="2.0/XSS-extensible" mean in the feedster blog RSS 2.0 feed? I was recently doing a small study of feed types and out of 20,000 this is the only feed that identifies itself as that. Bug? Feature? Just wondering :).

Categories: technology
Posted by diego on July 15, 2004 at 7:38 PM

comment spam, cont'd

On my previous post on comment spam Phil made a good point, that since I generally close comment threads once spam appears (because I can't afford to be removing comments and rebuilding too often) it makes for a Godwin's Law sort-of-thing, where someone could essentially "engineer" the closing of a thread by posting spam.

I hadn't thought of that! I can say though, that generally when I get comment spam posted I also ban the originating IP, so whoever did it will have problems posting in the future (and the IP should also let me see if a spam looks iffy by being the same as someone who just posted a comment). For people with dynamic IPs (most of us?) it won't necessarily work, and someone could re-login (to their DSL, etc) purposefully to try to change IPs... I guess it's not a perfect solution, but it's good enough under the circumstances...
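The ban-on-spam policy above is about this simple in sketch form (all names invented for illustration; MT's actual plumbing differs):

```python
class CommentGate:
    """Toy model of the ban-the-originating-IP policy."""

    def __init__(self):
        self.banned = set()

    def report_spam(self, ip: str) -> None:
        # Ban the originating IP of a spam comment.
        self.banned.add(ip)

    def allow(self, ip: str) -> bool:
        # Caveat from the post: dynamic IPs make this porous --
        # a spammer can reconnect for a fresh address, and an
        # innocent user can later inherit a banned one.
        return ip not in self.banned

gate = CommentGate()
gate.report_spam("203.0.113.7")  # documentation-range example IP
```

The dynamic-IP caveat is exactly why this is "good enough under the circumstances" rather than airtight.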

Categories: technology
Posted by diego on July 11, 2004 at 4:26 PM

comment spam

BTW, comment spam has been quite a problem recently, which accounts for the number of posts that have comments closed. Basically, whenever a spam comment is posted I close comments for the entry, which accounts for the randomness of the comments-on/comments-off posts. Normally I'd just delete them and be done with it, but Movable Type on my poor Celeron 700 server is dog slow, and rebuilding an entry to remove a comment takes about a minute (I'm not exaggerating). Part of the problem is probably that I'm inching toward 2,000 posts, which (maybe) makes MT slower, but in any case I can't be removing spam comments every five seconds, and that forces me to close the thread. Sorry about that.

Categories: technology
Posted by diego on July 9, 2004 at 1:02 PM

burnout? no, just busy

So yesterday, as I'm pondering why I haven't posted anything in a few days, I read this Wired article on blogger's burnout. Although I've experienced lack of blogflow before, this time it was something different: I was just too absorbed in what I was doing to do anything else.

So what was I doing? Simple: working on a new release of clevercactus pro. Not that share is taking a back-seat or anything, mind you, this is something that we had planned for a while and finally there was time to do it. (The release will be out sometime next week).

Anyway--I have had a couple of posts swirling in my head for a couple of days now, so I'll get to that now. :)

Categories: clevercactus, personal, technology
Posted by diego on July 9, 2004 at 12:54 PM

apology accepted

Jon Udell quoted me in a piece for InfoWorld but somehow my name ended up being "Diego Rivera". I sent him an email and within a few hours he replied apologizing (the change will be online at some point in the near future, but the ship has probably sailed on the print copies), and he posted a correction on his weblog.

At the risk of sounding hokey, I find it a small honor to be quoted by Jon, mistaken attribution and everything (and hey, there are worse things than being confused with Diego Rivera!)--as I've said before, I learned many things from his columns, going back to the days of Byte.

It's often been said how weblogs "push back" against media (big or small), but not a lot has been said about how media itself can use them to improve itself (or maybe I just haven't read a lot about that). Had this happened on a print-only medium, a correction would probably have taken a week or more, plus no one would see it because there's no context. Of course, Jon is ahead of the curve on this, but we can hope that feedback loops of this nature eventually become the norm rather than the exception.

So, thanks, Jon, and apology accepted. :)

Categories: technology
Posted by diego on July 3, 2004 at 8:58 PM

tracking a hoax

Wired has an article Copy This Article & Win Quick Cash! that tracks down the (by now almost ancient) "forward this email and get paid by Microsoft" hoax. The author manages to track down the originator of the hoax, who says it started out as a joke and quickly "got a little out of hand." So why hadn't he claimed 'authorship' before? He replied: "It's just a hoax. And if I admitted to it, why would anyone believe me?" --to which I can only add: Indeed.

Categories: technology
Posted by diego on June 29, 2004 at 7:15 PM

tiger--when?

No, not that Tiger. I'm talking about OS X v10.4. Reading the feature list makes me want to install it right now, but the page gives no release date (News.com has it at "early next year"). Maybe by the next WWDC?

Update: Russ and Erik have some good comments on Apple's inclusion of a concept originally developed for OS X by a small developer, Konfabulator. Am I wrong, or did this also happen with another tool around the release of Jaguar--a Watson add-in? Funnily enough, Dave notes this New York Times article in which Jobs is quoted as saying "They're copying our concepts, [...] I'd kind of like to get credit sometime." Hm.

Categories: technology
Posted by diego on June 29, 2004 at 8:57 AM

stone hackers

Sean McGrath: The ancient art of hacking in Ireland. Heh.

Categories: technology
Posted by diego on June 20, 2004 at 6:32 PM

((no single == many) && (no single != no)) point(s) of failure

So today an outage of some sort at Akamai's distributed DNS service brought down access to some major sites from various parts of the world, including Google, Yahoo, and Microsoft. Pretty quickly, as evidenced by this Slashdot thread, questions started to pop up over whether the days of "no single point of failure" are over.

The myth of the Internet being so resilient that it would never fail is an interesting one. More accurately, it's a set of layered myths, going back to the often-repeated idea that "the Internet was designed to survive a nuclear attack".

One of the crucial ideas of ARPAnet was that it would be packet-switched, rather than circuit switched. With packet-based communications, clearly the packets will attempt to reach their destination regardless of the circuit used, and there is no question that packet-based networks are much more resilient to failures than circuit-switched networks.

Let me be clear: part of my argument is semantic. The fact that packet switching means "no single point of failure" doesn't mean that there are no points of failure at all. The problem, however, is that we end up ignoring the word "point" and reading "no failure". The idea of "no single point of failure" eventually ends up implying "failure-proof". Which is why we are so surprised when a systemic failure does occur.

ARPAnet, however, never qualified as a failure-proof network, and the points of failure were few enough that "no single point of failure" had little meaning. In the early days you could literally take out most of the Internet by cutting a bunch of cables in certain areas of Boston and California. With time, yes, more lines of communication were available, reducing the probability of failure even further, but even today the amount of transcontinental and intercontinental bandwidth is certainly not infinite.

But, ok. Let's concede the point that a systemic failure at the packet-switching level is of very low probability in today's Internet. What about the services?

Because it is the services that create today's Internet. And many of the services that the Internet depends on are centralized.

Take DNS. Originally, name resolution occurred by matching names against the contents of the local hosts table (stored in /etc/hosts); when a new host was added, an updated hosts table was propagated to the participating hosts. Eventually this process became impossible, since hosts were being added too fast. This led, in the 80s, to the development of DNS, which eventually became the standard.
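That pre-DNS scheme really was just a flat file mapping names to addresses. A minimal sketch of hosts-table resolution (the sample entries below are invented for illustration):

```python
def parse_hosts(text: str) -> dict:
    """Parse an /etc/hosts-style table into a {name: ip} map."""
    table = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments
        if not line:
            continue
        ip, *names = line.split()  # IP followed by one or more names
        for name in names:
            table[name.lower()] = ip
    return table

# A tiny made-up table in the style of the old network-wide file.
hosts = parse_hosts("""
# one flat file for the whole network
10.0.0.51   sri-nic.arpa nic
10.2.0.52   mit-ai.arpa
""")
```

Every host on the network needed a fresh copy of the same file whenever any host changed--which is precisely what stopped scaling.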

DNS, however, is a highly centralized system, and it was designed for a network a couple of orders of magnitude smaller than what we have today. The fact that it works at all today is more a credit to sheer engineering prowess in implementation than to design--although the design was clearly excellent for its time.

Even today, if the root server clusters (those that serve the root domains) were to be seriously compromised, the Internet would last about a week, until most of the cached DNS mappings expired. And then we'd all be back to typing IP numbers.

And it doesn't stop with DNS. What if Yahoo! were to go offline? What if Google vanished for a week? What if someone devised a worm that flooded, say, 70% of the world's email servers?

For users, the Internet has now become its applications and services rather than its protocols. And the applications and services leave a lot to be desired.

What's missing is a shift at the service and application level in all fields--routing, security, and so on (spam is just the tip of the iceberg). Something that brings the higher levels of networking in line with the ideas of packet switching.

So, today, Akamai sneezes and the rest of the world gets a cold. Tomorrow, it will be someone else. This will keep happening until the high-level infrastructure we use every day becomes decentralized itself. Only then will the probability of systemic failure be low enough. Low enough, mind you, not non-existent: biomimetism and self-organization, after all, don't guarantee eternity. :)

Categories: soft.dev, technology
Posted by diego on June 15, 2004 at 7:41 PM

cringely on weblogs

[via Dave] The most recent Cringely article is on the topic of weblogs, and he makes a number of interesting points. The first is that

It takes society 30 years, more or less, to absorb a new information technology into daily life. It took about that long to turn movable type into books in the 15th century. Telephones were invented in the 1870s, but did not change our lives until the 1900s. Motion pictures were born in the 1890s, but became an important industry in the 1920s. Television, invented in the mid-1920s, took until the mid-1950s to bind us to our sofas. The PC and the Internet are both today about 30 years old, which means we are finally figuring what they are about.
While his numbers match, I have to say that I find the logic faulty. I don't think it takes 30 years for people to "figure out" what something is good for (although the lag between early adopters and the public at large is definitely there); I think the 30 years he points to is more a measure of the economic and cultural evolution of a particular technology and its acceptance than of whether people "figure out" what it is good for. Also, Cringely is measuring with a US-centric view; when you go outside the US it becomes easier to see that sometimes a technology (or technology/science mix) evolves through different timescales. Take, for example, Norman Borlaug's work on "distilling" and then "exporting" dwarf wheat, which provided one of the keys that allowed densely populated countries such as India to (quite literally) feed themselves. If you start counting with Mendel's work in the late 19th century through Watson & Crick's work on DNA, it becomes difficult to find 30 years that fit neatly in that timeline. Even for more "technological" achievements, such as air travel, it's difficult to find the 30 years anywhere. The generation and use of electricity is the same story. And even for Cringely's own examples, like TV: even though television became widely used in the 50s in the US, it wasn't until the 60s, and Kennedy's assassination, that its power was really apparent. On the other hand, it didn't take anyone 30 years to figure out that digital music had quite a number of advantages, as anyone within reach of a computer can attest.

While I do argue against his calendar-based absolutism, and contend that it's economic and cultural forces at work rather than people "figuring out" what something is or does, I think he has a point: technologies, or anything that affects our cultural behavior for that matter, do have an adoption curve measured in years and sometimes decades.

However, in using the "figuring out" imagery, he also implies that the technology necessarily is creating something new, and that, I think, is wrong. In many cases a technology is simply facilitating a process that already existed, and it's only after a while (sometimes quickly, sometimes slowly) that the facilitation of previous behavior leads to entirely new processes.

And, once in a while, in facilitating something that previously existed, a technology will simply bring that to the fore, help us rediscover something that might have been lost or pushed aside in the evolution of things.

That, I believe, is the case with weblogs.

For "proof" I refer you to another segment of Cringely's article:

Some people think this column is a web log, but it isn't. For one thing, it predates web logs and I'm hoping will post-date them, too. Google News classifies what I am doing here as a web log even though I predate Google, itself, by more than a decade and don't see my work that way at all. I use too darned many words to be a web log, for one thing, and too darned few links. If I write anything really newsworthy, which I like to think that I do from time to time, the only way Google News will show it is if one of their 4500 REAL news sites mentions me. Otherwise, I don't exist, or more properly I exist only in a blogosphere that I, in turn, refuse to acknowledge. I'm odd that way.
His oddity notwithstanding, the fact that he was doing whatever he was doing before weblogs came along doesn't mean he's not weblogging, or doing something that shares qualities with weblogging. As I said in my intro to weblogs, some people are "natural-born bloggers". Cringely's style of writing always had a foot in the world of weblogging: personal, opinionated, timely but not necessarily timeless. Even his book, Accidental Empires, which is excellent and which I've read more than once, tilts towards qualities we generally associate with weblogging--not that weblogs "invented" them; it's just that weblogs share some qualities with certain styles of (dare I say it?) journalism, or, more generally, writing.

But most of all, I'd point to the fact that his column includes a picture of his newborn baby.

Now, where have we seen that before? :)

Categories: technology
Posted by diego on June 11, 2004 at 8:41 PM

comPost

Dylan has released comPost, a bookmarklet to post to a variety of weblogging tools that integrates with Internet Explorer. He used it to post the announcement, so you know it works. Rockin'!

Now, if I could only get it to work with Firefox, that would rock tenfold! :)

ps: the name is... less than appealing. I get the pun, but I think a cool tool as this one is would spread faster with a cool name: "coolPost" sounds too obvious and probably too cheesy, "anyPost"... maybe. allPost? blogPost? lightPost? Anyway, you get the idea.

Categories: technology
Posted by diego on June 10, 2004 at 7:11 PM

airtunes

Sound over WiFi, simple, and useful, as well as other things. Read more here.

First reaction when I heard about this: oh yeah. :)

Categories: technology
Posted by diego on June 8, 2004 at 6:32 PM

quote of the day

"If any person can edit a webpage, so can any robot." [source].

Categories: technology
Posted by diego on June 8, 2004 at 12:39 PM

the PDA field shrinks

Sony is exiting the PDA market in both the US and Europe, it seems. Their original vision wasn't bad, it was simply too proprietary and couldn't keep up with non-proprietary solutions. On the handheld side, PDAs are not going to vanish anyway; they'll just morph into smartphones and devices of that sort (although I think there will always be a market, however small, for "pure" PDAs). Maybe Sony is planning on reviving some of the Clie stuff through its cellphone venture with Ericsson...

Categories: technology
Posted by diego on June 7, 2004 at 11:24 AM

solar! wind! power!

If only solar power were more commonplace, and less of a luxury. Another problem is that solar cells today still take too much energy to manufacture and don't produce much over their lifetimes, making the situation, energy-conservation-wise, something of a wash (or worse). Yes, they aren't practical in many situations. But they are still useful--and cool.

Here in Ireland it's much easier to use wind. Here's a map of some current wind farms, and a new wind farm is being built south of Dublin. When completed, it will be the largest in the world (200 turbines, three times the generating capacity of all current offshore wind farms worldwide).

Categories: technology
Posted by diego on June 6, 2004 at 3:36 PM

not aliens--just some wireless sensors

[via Slashdot] Area 51 'hackers' dig up trouble. Quote:

[...] it turns out the truth really was out there, and the government didn't appreciate Clark digging it up.

Clark didn't find the Roswell craft or an alien autopsy room -- in fact, while officially shrouded in secrecy, the 50-year-old base is generally believed to be dedicated to the terrestrial mission of testing classified aircraft. "The U2 spy plane, the SR-71, the F-117A stealth fighter, all were flight-tested out of the Groom Lake facility," says Steven Aftergood, director of the Federation of American Scientists' Project on Government Secrecy. The myth of Area 51 memorialized in films, T.V. shows and novels is a function of the secrecy that surrounds it. "It is a concrete manifestation of official secrecy at its most intense, and that invites a mixture of paranoia and speculative fantasy that has become ingrained in popular culture," says Aftergood.

Even without aliens, the facility has its secrets, and last year while roaming the desert outside the Groom Lake base Clark stumbled upon one of them: an electronic device packed in a rugged case and buried in the dirt. Marked "U.S. Government Property," the device turned out to be a wireless transmitter, connected by an underground cable to a sensor buried nearby next to one of the unpaved roads that vein the public land surrounding the base. Together, the units act as a surveillance system, warning someone -- somewhere -- whenever a vehicle drives down that stretch of road.

Makes sense of course. I think similar technology is used around other high-security facilities, like Cheyenne Mountain (NORAD's Operation Center). Speaking of NORAD, recently I read that NORAD is a binational military organization, and its commander is appointed by (and responsible to) both the US President and the Canadian Prime Minister. Although it's the North American Aerospace Defense Command, that surprised me for some reason...

Categories: technology
Posted by diego on May 26, 2004 at 4:25 PM

on spam

Lately I've been more irate than usual at the increasing volumes of spam my server (and inbox) has to deal with. A couple of interesting articles on News.com recently on this topic: Who owns your email address? and Attack of Comcast's Internet Zombies, which give their take on different parts of the problem.

Categories: technology
Posted by diego on May 26, 2004 at 11:46 AM

muglia on longhorn

Also interesting from last week: a CNET interview with Bob Muglia, Senior VP at Microsoft, on Longhorn client, server, and almost everything in between. A little more information on WinFS (to add to my comments last week): apparently WinFS will be in Longhorn Server, but it's unclear whether it will show up on the client. The more I see the progression of backtracked announcements, the more it seems to me that the problem is that the initial announcement was too vague. Maybe MS would do well not to talk so much about features that haven't coalesced yet.

Categories: technology
Posted by diego on May 25, 2004 at 6:49 PM

pearpc

PearPC, a PowerPC architecture simulator for PCs. (News.com article). Check out the screenshots as well. Speed is of course a problem, and it stands to reason that simulating a RISC architecture on a CISC processor would naturally be worse than simulating CISC over RISC (as Virtual PC does on the Mac). The News.com article notes that speed is about 2.2 percent of the machine it runs on. But it's still cool. :)

Categories: technology
Posted by diego on May 20, 2004 at 9:58 AM

code that kills

Tangentially related to my ethics and computer science rant from some time ago, Scott Rosenberg has an interesting article in Salon on the problems of software on military systems. Quote:

"When everyone decides for themselves what frequency to use, what protocols to use, what standards to use, then you get systems that don't talk to each other. And it's killing us."

That sort of lament is a staple at technology conferences, and its dire language is usually a matter of executive hyperbole: Somewhere in corporate America, perhaps, a bottom line is breathing its last, and we're supposed to care.

But when the speaker is in uniform, and the incompatible systems he's describing belong to the armed forces, then you sit up straight in your seat and realize that the words are meant all too literally. As Adm. Michael Sharp of the U.S. Navy went on to say, in a talk last month in Salt Lake City, "Software errors, timing errors, can get real critical -- killing the wrong people, or not killing the right people and leaving our people unprotected."

Here's David Cook, senior research scientist at Aegis Technologies: "It has been said that, without software, the F-16C is nothing more than a $15 million lawn dart. There are stories that I know for a fact of airplanes that have been flying cross-country that land at a base that's not where they're supposed to land, and while they're there, somebody modifies the software. And the airplane flat stops in midair when they turn on the radar unit. Why? Because there are incompatible versions of this certain piece of radar software, one of which they never thought would be on that particular model."

$15 million lawn darts indeed. Tiny problem is, lawn darts usually don't run around at Mach 2, or come loaded with AIM-9 Sidewinder missiles. And that's just one example...

Categories: technology
Posted by diego on May 12, 2004 at 12:14 PM

kapor on the google IPO

Mitch Kapor writes about the terms of the Google IPO. A good read.

Categories: technology
Posted by diego on May 2, 2004 at 12:54 PM

the devil is in the details

In one of those random occurrences that are bound to happen every day, I noticed there's a strange referrer in my log. It looked something like this (and no, it's not my typo):

http://ww.google.de/...
So I think about it for a second and realize that this is something eminently reasonable for Google to do: namely, make sure that common misspellings are taken care of so that users get to what they want instead of seeing an unnecessary error. But then I got curious.

As we all know (?), DNS resolution depends on the dots in a name to separate host name from domain from top-level domain (e.g., .com or .org). If the host name isn't configured in the DNS server, resolution will fail. So it takes a conscious act to support multiple (apparently invalid) host names such as ww or w or whatever. But how were different systems dealing with this? How pervasive was the practice?

Google, for example, supports w.google.com, ww.google.com, but strangely, not wwww.google.com (which seems to me to be another possible common misspelling). Microsoft, in what would have to be characterized as their (perceived) usual disregard for end-users, supports none of the variants, either for their main site or for msn.com. So if you make a typing error, even a common one, such as leaving one "w", MS doesn't help you at all, you just get a browser error telling you the site doesn't exist (just like Google failing with four "w"s). Teoma supports all w, ww, www, and wwww (+1 for Teoma!).

Both Google and Teoma, however, leave you at the "wrong" address, which in my mind seems, well, wrong. Yahoo! goes one step further and redirects you to www.yahoo.com no matter what (Yahoo!, however, only supports ww, www and wwww, while sending you to an error page for only one w used). Overall, Yahoo wins in my book.

However, I wonder, is it really wrong when a user types it with one or two or four "w"s? Yahoo!'s behavior is to gently "correct you", but if you got to where you wanted, does it really matter? Hm.

Nevertheless, interesting stuff. The details, always the details...
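For anyone who wants to repeat the experiment, here's a quick sketch. The resolution check needs network access, and of course it only tells you whether a name resolves, not whether the site redirects you afterwards:

```python
import socket

def variants(domain: str):
    """The misspellings tested above: w, ww, www, wwww."""
    return ["%s.%s" % ("w" * n, domain) for n in range(1, 5)]

def resolves(host: str) -> bool:
    """True if the host name has a DNS entry (requires network access)."""
    try:
        socket.gethostbyname(host)
        return True
    except socket.gaierror:
        return False

# e.g.: {h: resolves(h) for h in variants("google.com")}
```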

Categories: technology
Posted by diego on May 1, 2004 at 11:31 PM

an internet of ends

Yesterday I gave a talk at Netsoc at TCD titled "An Internet of Ends". Here's the PDF of the slides. There are many ideas in there that finally jelled in the last few days--ideas that have been buzzing around my head for quite some time but that I hadn't been able to connect or express in a single thread until now. I thought it would be a good idea to start expanding on them here, using this post as a kick-off point.

Yes, more later. And as always, questions and comments most welcome!

Categories: science, soft.dev, technology
Posted by diego on April 29, 2004 at 9:21 PM

CD-Rs: how reliable?

From the Independent an article on CD-Rs and their lack of long-term reliability:

You know those CD-Rs that you've trusted your most precious memories to? They could be little more use than coasters after just two years.
An interesting read, more or less along the lines of other things I've seen on the topic over time.

Categories: technology
Posted by diego on April 23, 2004 at 4:16 PM

about privacy and stuff

I read this, then this, then this, then instead of ranting I just point to this I wrote a few days ago and then go back to whatever it is I was doing. :)

Categories: technology
Posted by diego on April 23, 2004 at 1:45 PM

Yahoo! SoulSearch

Following yesterday's Google HeavenSearch post, the Onion has a story of its own along similar lines: Yahoo! Launches Soul-Search Engine. The return of the metaphysical? Hm... :-)

Categories: technology
Posted by diego on April 13, 2004 at 5:00 PM

seriously though

So, before anyone throws a fit about my Google-HeavenSearch fake news item, I just wanted to say a couple of things.

No, I'm not dismissing any of the issues. I'm just saying that all the immense attention being given to the privacy concerns (which exist) regarding Google seems pretty hypocritical to me when there are far more pressing concerns in many other areas. Other search engines. Other identity systems. Airline reservation systems. Credit card systems.

Yes, even I haven't always raised these issues in unison, but they should be raised together. Privacy breaks down at the weakest point, not just for Google.

For example, lots of attention has been given to Google not "guaranteeing" that an email in GMail will actually be completely deleted. But let's be realistic. Have Microsoft or Yahoo! or AOL ever guaranteed that? Not that I'm aware of.

Writing in News.com, Declan McCullagh has one of the few fairly balanced views that I've seen on the topic. He rightly points to the real problem, which is centralized systems, not Google in particular. (See also here). If you want something to be fully private, then just don't use, say, a webmail account for it.

Some of the complaints, however, have had to do with the precedent that this sets. Precedent? When companies like Gator provide software that is installed on millions of machines and basically acts like a Trojan to track behavior? When most email messages exchanged travel unencrypted through the Internet?

Come on.

Yes, these are real issues. But it's not a Google problem alone. It's structural to the kind of service being provided, part of its nature. I've tried GMail (I will hopefully have time to comment on that specifically later) and Google is doing something basically similar to others, improving on several respects (and a number of very cool ideas), changing others, and still lagging behind on some.

For me the solution is clear: decentralization, plus end-to-end encryption at the application level based on public-key infrastructure. Centralized systems have their pros and cons, as anything--and it seems that these days it is too easy to imagine that they have to be good (and perfect) for everything, and even more, that a single company has to be responsible for fixing all that's wrong in the world.

Okay, diatribe finished. :-))

Categories: technology
Posted by diego on April 12, 2004 at 7:02 PM

google announces HeavenSearch, partially disclaims deity status

This from the fake-as-news dept:

(for immediate release)
MOUNTAIN VIEW, CA, April 2004.

Google Inc. revealed today a new product called HeavenSearch (http://heaven.google.com/) that, in accordance with the company's oft-discussed reach into theological depths, will allow users to search the information contained in Paradise and alternative otherworldly venues.

"We're really excited about this product," said a Google official. "People have been talking for a while about whether Google is God and so on. And they're not totally off the mark. Our cookies see everywhere and everything, even beyond Death. Beyond Taxes too. We wanted to make this wealth of information available to users." The initial product, released as a beta (as is standard Google practice), will start off by searching through the Christian Heaven. Plans are in place, however, to provide search facilities for other major Religions' pleasant afterlife locations.

And what about Hell?

"We're not going there," said a developer that worked on the project. "Our motto is 'Do no Evil'. Obviously that precludes searching through Hell," and then noted, "we've wondered about Limbo though."

Privacy advocates were outraged at the very notion. "This is a disastrous development. Between search, mail, shopping, news, and now religion, Google's cookies are becoming all-powerful entities. The Google cookie is the greatest threat to our way of life since Oreos were invented. The Pope should be worried about the Googleplex, too."

The Vatican declined to comment.

The Google Official wondered: "So we're terrible, but, say, AOL, or Microsoft, or Yahoo aren't an issue at all? Passport? The Windows Registration System? AIM? ICQ? MSN Messenger? Ads everywhere? Pop-ups? Pop-unders? Interstitial ads? Paid-for search results? Credit card information in a server in Redmond somewhere? Why is it that all this brouhaha applies to Google only?"

To which the privacy advocates replied (in unison): "Oh, because AOL, Microsoft, and everyone else basically are good companies. We have nothing to fear from them!" One of them added: "Plus, those companies... you know, they're pretty harmless. They just have tens of billions of dollars in cash and dominant, locked-in, fiercely defended positions in their markets. Just look at Microsoft, they just had to pay Sun Microsystems something like two billion dollars. Now they've only got 54 billion left!" Another interjected: "Right. These companies aren't God or anything like that," after which he dropped to his knees and, eyes closed, head down, hands to chin, started mumbling search-engine queries.

"I see," said the Google Official when hearing this, as he dug out some M&Ms from an open bowl on the table, then proceeded to sit down at a nearby massage chair.

So is Google, really, really God now?

"We're not like, God-God, you know? After all, we had nothing to do with the creation of the Universe and all living things." The official said, his voice vibrating in unison with the massage chair, disclaiming Google's incipient deity status, and added cryptically "That was there before us."

Categories: technology
Posted by diego on April 12, 2004 at 6:53 PM

gmail

So, finally, Google will release its email service today (NYTimes, News.com). This follows last week's launch of Google's new Look and some Lab features like Web alerts. One interesting thing: Google calculates the cost of providing a gigabyte of email storage at $2, which I presume includes the processing and bandwidth required to use the service.

The privacy question will now become even more complex:

At Google, one official said, the company has engaged in an intense debate over how extensively to exploit the content of e-mail.

Many people inside the company are worried that users might fear that the content of their e-mail messages could be used to tailor individual advertising messages, much as ad messages are now placed on pages tied to specific responses to search inquiries. Google hopes to quell any such concerns by assuring users that the content of their messages will remain private.

not to mention other additions:
Eric Schmidt, chief executive of Google, said he "absolutely" has plans to integrate Orkut into Google's search engine.

Another interesting thing, from the New York Times article:

It will be "soft launched," they said, in a manner that Google has followed with other features that it has added to its Web site, with little fanfare and presented initially as a long-running test.
That gives a new definition to the term "soft launch"... what with an article in the New York Times and all... :)

As far as this being an April Fools' joke... I found this press release by Google which does sound a bit, well, iffy. But then again there's a website for gmail which looks very much like the real thing (including privacy policy, terms of use, etc). If this is indeed a joke, then some significant effort has gone into it (note that the articles include quotes from Google employees, so if it is a joke we have to presume that either a) the journalists are in on it or b) the employees continued with the 'joke' while giving unattributed quotes). There are no disclaimers in any of the pages, and the HTML sources look clean too. Andrew (for example) is skeptical, to say the least :). I guess we'll have to wait and see.

Categories: technology
Posted by diego on April 1, 2004 at 8:18 AM

it's not as easy as it seems

An excellent Salon article, triumph of the telcos, on what we rarely hear about developments in VoIP:

[...] the battle with Big Telecom is one front in a wider war on the oligarchies that dominate the world economy. But to the oligarchs themselves that war is a mere sideshow. The real fight is between Big Telecom and Big Cable, with both sides using Internet telephony as a weapon.

Categories: technology
Posted by diego on March 23, 2004 at 6:48 PM

names, or lack thereof

James Gleick, writing in the New York Times: Get Out of My Namespace. Quote:

The world is running out of names. The roster of possible names seems almost infinite, but the demand is even greater. With the rise of instantaneous communication, business spreading across the globe and the Internet annihilating geography, conflict is rampant in this realm of language and of intellectual property. Rules are up for grabs. Laws regarding names have never been in such disarray.
Indeed. And who hasn't encountered this problem? Related to this (though more technical), the book Ruling the Root is excellent.

Categories: technology
Posted by diego on March 21, 2004 at 9:54 AM

small movabletype tip

I just discovered that my RSS feeds were publishing my email unencoded. The culprit turned out to be an <$MTEntryAuthorEmail$> tag. As the MT Template manual explains here, this can be easily solved by adding a spam_protect parameter, as follows: <$MTEntryAuthorEmail spam_protect="1"$>. Useful and simple.
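For context, here's roughly what the relevant fragment of an RSS template might look like; the surrounding markup is illustrative, not copied from my actual template:

```
<item>
  <title><$MTEntryTitle$></title>
  <!-- before: emits the raw address for harvesters to grab -->
  <author><$MTEntryAuthorEmail$></author>
  <!-- after: spam_protect encodes the address to make harvesting harder -->
  <author><$MTEntryAuthorEmail spam_protect="1"$></author>
</item>
```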

Categories: technology
Posted by diego on March 20, 2004 at 3:37 PM

boomerangs

Gavin Sheridan posted two days ago on being threatened with legal action by author John Gray. This threat came about because of comments that Gavin made in this post which basically referenced this other post on a different weblog, give or take a few words (a few words which, btw, seem to be at the crux of the matter). Now, I saw this early today over at Karlin's weblog and was going to comment on it, but by then the topic was already "in play". Dan Gillmor has commented on it. Kevin Drum has comments. Dave has comments. And on and on and on it goes.

The posts in question are from November last year. In terms of the web, they were long gone. It's difficult to see them resurfacing in any meaningful way.

Except, of course, if you did exactly what John Gray's attorneys have done.

First, it seems that the threat could potentially be in murky waters legally (It appears that it's not an open and shut case, both in what relates to the statements, how they were made, and even matters of jurisdiction).

But that's not the end of it. One would assume that Gavin's post was found through a web search, and that this threat of legal action was intended to remove those references from search engines and such (after all, if something can't be found or read by anyone, where's the problem?). However, I will bet that within a couple of weeks both Google and Yahoo! (along with other search engines) will return posts and pages related to this story when searching for "John Gray" or any number of keywords (who knows maybe this entry will show up even), whereas currently that doesn't happen at all. So instead of "solving the problem" (assuming there was a problem in the first place) this action has had the effect of multiplying it by several orders of magnitude. It is even possible that this is picked up by the media somewhere. And you can imagine the headlines, right?

So why would someone do something like this? Mystery. After all, we've already seen what can happen when us weblog-folk get, um, "agitated"... :)

Categories: technology
Posted by diego on March 19, 2004 at 7:37 PM

links, influence, and networks

According to this list at BlogRunner, which contains "the most influential reporters and bloggers on the web", I am in position 190. Apparently there were some complaints over the results (I say "apparently" because I have been a little removed from online conversations in the last few days), which led BlogRunner to calculate a revised listing, changing the parameters by which "influence" was calculated (for example, "penalizing" posting frequency). I thought I wouldn't show up in the new list, but there I am, in position 128!

Yeah, of course I'm surprised. But that's not the point of this post. :)

Now, I do read many of the journalists and bloggers listed there. List #1 is clearly "skewed" towards bloggers, while List #2 would be skewed towards journalists (if I'm higher on #2 than on #1, does that say anything about me?). The author, Philippe Lourier, responded to comments by Daniel Drezner which noted the problems on the first list (and resulted in changes for the second list).

Philippe's comments make it clear that he doesn't think there's a silver bullet for this, and he probably agrees with Dave, who says that it "proves that trying to quantify influence [is] pretty hard to do, and maybe not so important". (My emphasis).

Not so important in absolute terms, I agree, but definitely useful if we could just put it in the context of a single person, that is, creating (privately) my own network of influence.

A starting point would be to delve a little more into the word "influence". Influence is, in this case, a possible effect of the number of links, and so this ranking (or similar others) is more an indicator of possible influence rather than a direct measure of it (btw, I'm not saying anyone suggested otherwise, I'm just clarifying my POV before continuing).

I've been thinking a lot recently about this, in the context of social software (or, as Anne prefers to call it, sociable software). There are roughly three levels of "influence" that we can readily identify in our daily lives:

  • Personal: friends, family, etc. We are naturally more inclined to take into account and listen to ideas and opinions that come from people we know and trust.
  • Community: Community spheres are generally multiple for any given person: your neighborhood, group of online friends, bloggers you know, local politics, etc. These involve people that you don't know very well but that see their trust level increased because of the context. That is, you may not know A very well, but if B, C, and D all say that A has got a point, you might be inclined to take the idea into consideration. This could also be a form of "soft" peer pressure.
  • Media: I say "media" here lacking a good term to describe influences that are massively distributed and have (more often than not) global reach. Here the context is almost all that matters, as what makes you take the time to consider something is the forum in which it is published.
Each sphere also moves, incrementally, from action to reflection. In my personal sphere my opinions (reflection) can affect my actions directly, but less so when dealing with communities, while at the media level pretty much all I can do is rant about it and then take it as information that can filter back into my other spheres of interest. The interaction between the spheres, however, is largely opaque. If I read a number of articles that move me to take action, exposing that context into the other spheres will be difficult (although people that know me very well might know "where I'm coming from").

Nice theory, but so what?

First, I think that weblogs+RSS correlate and bind these three spheres of influence tighter than before. Within my weblog I can include pointers to elements of any of the three spheres, and provide a better context for each. A lot more people in the two spheres that really matter to me as an individual ("personal" and "communities") will be able to know "where I'm coming from". Which in turn changes the dynamics of communication within those spheres of influence, both online and off.

But let's backtrack a bit further though. What are these "lists" useful for? Aside from the fleeting ego-trip they provide :), I see them as useful for finding new "spheres of interest", another form of the old "related links" or "other people who have read this also read...". In other words, community formation and (re)shaping.

With weblogs those interconnections are decidedly faint (which is why it's so hard to come up with these lists), because in their basic form weblogs don't attach relationship-value information to the embedded links. Also as a consequence of their nature, weblogs have so far defied easy definition of absolute spheres of influence, and so what weblogs are and do hasn't been overshadowed, yet, by lists of various types. Anything that relates to weblogs, in this sense, is useful in some way: to find new ideas, new discussions, long-lost groups, or what others are saying about things that interest you.

Which (finally!) brings me to the point.

What would happen, for example, if we overlapped FOAF information with those links? Wouldn't we be able to obtain a clearer picture of influence as it relates to a given person? Granted, that by itself wouldn't tell us much about the overall influence of something, but that would be a meta-value that could be obtained based on the aggregate of all those values. But for most people, their own spheres of interest/influence, and that of the people they trust the most, would be enough.
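As a back-of-the-envelope sketch of the idea (all names, weights, and the friend list here are made up for illustration -- there's no real FOAF parsing going on), influence-for-me could simply weight inbound links by whether the linker appears in my FOAF network:

```python
def personal_influence(inbound_linkers, foaf_friends, friend_weight=3.0):
    """Score a page's influence *for one person*: a link from someone
    in their FOAF network counts more than a link from a stranger."""
    return sum(friend_weight if linker in foaf_friends else 1.0
               for linker in inbound_linkers)

# Two links from people in my network, two from strangers.
my_friends = {"anne", "dave", "karlin"}
linkers = ["anne", "dave", "stranger1", "stranger2"]
print(personal_influence(linkers, my_friends))  # 8.0
```

The aggregate of these per-person scores across everyone's networks would approximate "overall" influence, but for most people the personal score is the one that actually matters.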

More generically, this goes back to what I said a few days ago about social networks being the glue for next-generation Internet applications. It is one more example of how the personal networks defined by these applications can give new meaning to the data we generate, relating what we and others create and providing us with new windows into (and ways to navigate) the vast sea of information that is the Internet, with the goal of doing something truly useful for us in concrete terms: improving communication and the exchange of ideas.

The network is not the computer. The network is not the person. And because it's neither one nor the other, it can help in using the first, and help us, in small ways, in "being" the second.

Lists of connections are nice, but only as far as they are useful for something. And, as with other things, weblogs lead the way.

PS: and, before I forget, happy St. Patrick's day to all! :)
clover.png

Categories: technology
Posted by diego on March 17, 2004 at 1:30 PM

timeouts in safari

safariicon.jpg While working on the Mac my default browser is Safari, and I'm very happy with it. It is fast, looks good, and its tabs implementation is excellent. However, there was this annoying 60-second default timeout for webpage loading that apparently could not be changed. I hit it often when posting, through a combination of my slow server, the many templates on my weblog, and Movable Type's way of doing things (though the primary reason is the server speed, no question about it). A bit of searching reveals that this is a well-known problem in Safari, and most of the search results point to a fix: SafariNoTimeout 1.0, which I've just downloaded and installed -- and it seems to work just fine, even in Safari 1.2.

Now, what I don't get is: have the Safari engineers (who I presume use their own browser) never, ever found a page that took more than 60 seconds to load? Weird.

Categories: technology
Posted by diego on March 15, 2004 at 12:39 PM

christopher allen on social software

Christopher Allen of Alacrity Ventures comments on social software. A good read. I've mentioned similar things in the past in other contexts, so I agree with a lot of what he says. Maybe more later when I've fully digested it [via Scoble].

Categories: technology
Posted by diego on March 5, 2004 at 12:50 AM

two weeks with a mac

blue-apple-logo.jpgIt's been almost two weeks since I got my new development machine, a Powermac G5, so I thought I'd write down some of my impressions during this time.

The first Mac I used was at my first job, one of the original Powermacs (PowerPC 601) with System 7. The experience was good, but while System 7 was very good for developing and running System 7 apps, interoperability was hard. I remember I used to spend nearly all my time running X within it, since most of the work I was doing then was in Java, or C++ targeted at various UNIX platforms.

Then I used a Mac on and off over the last year (an old G3 with OS X) that we got on loan to test our software, but with only 128 megs of RAM it wasn't possible to do more than launch a program (wait... and wait... and wait...) and see how it came out. Serious debugging (of problems mostly related to layout problems) was pretty much impossible.

The G5, of course, changed all that.

Btw, you will have to forgive my sometimes starry-eyed commentary in what follows :). Even when I point out some of the problems that I've run up against I sort of gloss over them; I'm sure that for others they will be more difficult to accept. So this is not terribly objective, but I think it's a good example of how the experience affected my judgment :).

First, the experience of setting up the machine is quite simply a pleasure from start to finish. The packaging is nice. Oohs and Aahs abound even as you open the box and get everything out. The machine looks nice. The cables have nice terminators that make them appear to "meld" with the machine.

Plus, there are surprises in the most unexpected places. For example, as I was setting up the LCD, I was wondering how to adjust its angle. I placed it on the desk, and looked at it for a couple of minutes. Nothing. Looked behind. Nothing that seemed to indicate how to rotate it (Mac LCDs stand on an inverted V, as opposed to PC LCDs which generally have a single stand with a swivel). I refused to look at any manuals. (Not that the Mac has many of those anyway :)). I lifted it up and tried to move it (gently), to no avail. I set it down. And then it happened: for whatever reason I pushed it slightly from the top border.

It moved.

I pushed it further, the monitor's angle was reduced accordingly. I pushed it from behind, and the angle increased.

The utter shock of that moment can't be easily explained. Here was a mechanism that was simple, understated, and that worked properly when it had to. The long hours that must have gone into the excellent design of something so small (something that could easily be considered "inconsequential") dropped on me like a bucket of cold water. And we tend to ignore mechanical engineering. Like turning on the machine from the monitor: there are no buttons, you just slide your finger over an area on the bottom-right of it, and on it goes.

Then, later, almost everything was where it should be. Front sockets for USB, headphones, and so on.

At around that time, during the installation, there were two things that bugged me a bit. One was the overly intrusive registration procedure that I had to go through to get the machine up and running, and which I couldn't bypass. The other was the CD tray, which could only be opened by pressing a key on the keyboard (which took me about 2 minutes to figure out). I wondered how this could affect error situations for a bit, but then I let it go.

Once I was in, the machine was more than fast, it was instantaneous. Then again, I would have been disappointed if that wasn't the case, considering the hardware (G5, 1 GB RAM, Serial ATA disk...). It was nice nevertheless.

The default security settings of the machine were a joke (no password for logging in, no password for the screensaver, in fact, no screensaver set up) but I quickly changed that. For finding my way around the configuration, Mark's Dive Into OS X was a good guide. Many thanks to Mark for maintaining such an excellent resource.

Safari is fantastic. I downloaded Firebird, but after enabling the tabs in Safari I simply had no need for it. Safari was the default browser out of the box and it has stayed that way.

Setting up a printer was so easy it felt like cheating. We have a LaserJet 2300 with JetDirect, and while Windows was confused about it (as usual) requiring CDs, looking through the network and so forth, on the Mac it was a two-click process. Go to add printers. The LaserJet shows up. Select. Done.

After playing with some settings I updated the software (Java, iTunes, etc.) and then installed Eclipse and other things I needed for development, including Xcode and the X11 server. I've had little need of anything else since then, with the exception of Office (I tried installing the OpenOffice for Mac beta but it was a disaster, and it needs X11 to run... I'll just wait until they put out the native Carbon version), which means that at some point I'll have to get Microsoft Office for Mac, and VirtualPC (also a Microsoft product). I was going to get a copy of VirtualPC in fact, but I found out that it doesn't run on G5s (!!). I guess I'll have to wait for that one.

Now for the little annoyances: coming from both UNIX/Linux and Windows, Macs are a little too "opaque". It is hard to know what's installed, and where. It's even harder to know with certainty how to uninstall things. I know that in 99% of the cases just dumping a program's folder into the Trash is enough, but what about programs that have registered themselves as MIME handlers for example? Is that taken care of automatically? I often ended up wondering if things were properly uninstalled, and sometimes checked the list of services running to make sure nothing was left as a UNIX-style daemon somewhere.

As far as the Finder is concerned, there are several inconsistencies in the navigation, and default settings are generally hard to find. Mostly it's a matter of knowing how to do something, rather than wondering if it can be done at all (Example: taking screenshots, or creating PDFs among other things).

The Command+Tab functionality that was added in OS X (which has existed in Windows as Alt-Tab for years and years) is nice, and the incredibly useful (and incredibly cool) Exposé feature is a godsend which basically negates the need of using virtual desktops.

All in all, a great experience so far. The little navigational inconsistencies might become more annoying as time goes by, but for the moment, I can live with them.

Good stuff. :)

Categories: technology
Posted by diego on March 3, 2004 at 8:10 PM

re: ethics in cs, and identity

Anne mentions an article in the recent ACM Crossroads (I have to see if I can get a version from the digital archives, thanks to my ACM membership. Which, oops, I just remembered I have to renew!). She makes some similar points to what I discussed a few days ago.

She also points to a summary of the Urban Tapestries project which says:

The key features defining the relationships our respondents had with ICTs are the importance of control (or lack of it), socio-cultural contexts, expectation management, external or internal locus of control, and personal aesthetics.

It is clear that respondents used UT in order to negotiate boundaries and mark their territories, stake claims and identify their personal preferences ... In this sense, public authoring promotes a sense of control not only over users' territories, but also over their boundaries and their own role in those territories.

It's all too possible that I'm projecting too much of my own thinking onto these issues (because it could be argued that those conclusions are not specific enough -- to which I'd say that you have to put them in the context of the project), but I find that data very encouraging. I might not be that crazy after all. :-)

Categories: technology
Posted by diego on March 1, 2004 at 11:46 PM

can you spell "hoax"?

I'll try to describe my thinking process in the two (2) seconds that followed reading this (and I recommend you get to the end of this post, where the mystery is revealed!).

As linked in the previous paragraph, Seb at Many2Many has posted a link to a message on the reputation mailing list where Orkut is "outed" as a "Master's thesis" of a random person who (they say) works for Orkut (the man, not the site).

Let's see. You are Google, right? You are a 1500+ person company, one of the most respected in the world, that is soon to be going public. So when a developer comes up from somewhere saying they want to do a "Master's thesis" using the company's name and reputation (including an "in affiliation with Google" link), you say "suure, go ahead". Furthermore, the student wishes to remain anonymous, so another developer (Orkut) is recruited (somehow) to use his own name and take all the credit and responsibility.

So far so good, right?

Now you launch, and after a few days pass, the experiment is a success. So this student (presumably Eric Schmidt at this point, or his alter ego) tells Google to throw a party (and he gets random people to talk about it afterwards) in a posh San Francisco location for hundreds of people. Since Orkut (the man) is still the "patsy", the party in question is announced as Orkut's idea, and made to coincide with his birthday. To this, Google says "but of course! My pleasure!" and happily pays the expenses (alternatively, this student is also a millionaire and pays out of his own pocket).

Luckily for everyone involved, someone posts a message to a mailing list quoting an "article" from an unknown writer with no links to an organization of any kind (even a personal site) to back it up.

But of course, it must be true!

Now you look for the source, and you discover it's this page at HACT (What do you mean what's HACT? you mean you've never heard of it? What rock have you been hiding under?).

Oh, but wait, this is the same page that, at the bottom, says: "Please note that this is a humor article and is not true in any way, shape or form, except in that it rings true in a scary way".

Damn. It wasn't true I guess. I thought that my explanation above was so incredibly reasonable, so universal, that you could post happily in indignation about it without thinking twice.

Hm. So you mean I shouldn't post about that message I read somewhere that said that Orkut in reverse spells the name of the alien race that actually sent this student to Stanford to get a Master's and discover what the earthlings are up to?

PS: I find it interesting that something so patently unbelievable could disseminate at all without a ton of smileys and LOLs before and after the text.

Categories: technology
Posted by diego on February 26, 2004 at 3:56 PM

what's in an IPO?

Wired has a good article in their latest issue: The complete guide to google, which actually starts by talking about the challenges any company goes through before, during, and after an IPO. Just the first section alone makes it worth reading. Oh, yes. It talks about Google too. :)

Categories: technology
Posted by diego on February 25, 2004 at 8:02 PM

the lack of an ethics conversation in computer science

In April 2000 Bill Joy published in Wired an excellent article titled Why the future doesn't need us. In it he said that, for once, maybe we should stop for a moment and think, because the technologies that are emerging now (molecular nanotechnology, genetic engineering, etc) present both a promise and a distinctive threat to the human species: things like near immortality on one hand, and complete destruction on the other. I'd like to quote at relative length a few paragraphs with an eye on what I want to discuss, so bear with me a little:

["Unabomber" Theodore] Kaczynski's dystopian vision describes unintended consequences, a well-known problem with the design and use of technology, and one that is clearly related to Murphy's law - "Anything that can go wrong, will." (Actually, this is Finagle's law, which in itself shows that Finagle was right.) Our overuse of antibiotics has led to what may be the biggest such problem so far: the emergence of antibiotic-resistant and much more dangerous bacteria. Similar things happened when attempts to eliminate malarial mosquitoes using DDT caused them to acquire DDT resistance; malarial parasites likewise acquired multi-drug-resistant genes.

The cause of many such surprises seems clear: The systems involved are complex, involving interaction among and feedback between many parts. Any changes to such a system will cascade in ways that are difficult to predict; this is especially true when human actions are involved.

[...]

What was different in the 20th century? Certainly, the technologies underlying the weapons of mass destruction (WMD) - nuclear, biological, and chemical (NBC) - were powerful, and the weapons an enormous threat. But building nuclear weapons required, at least for a time, access to both rare - indeed, effectively unavailable - raw materials and highly protected information; biological and chemical weapons programs also tended to require large-scale activities.

The 21st-century technologies - genetics, nanotechnology, and robotics (GNR) - are so powerful that they can spawn whole new classes of accidents and abuses. Most dangerously, for the first time, these accidents and abuses are widely within the reach of individuals or small groups. They will not require large facilities or rare raw materials. Knowledge alone will enable the use of them.

Thus we have the possibility not just of weapons of mass destruction but of knowledge-enabled mass destruction (KMD), this destructiveness hugely amplified by the power of self-replication.

I think it is no exaggeration to say we are on the cusp of the further perfection of extreme evil, an evil whose possibility spreads well beyond that which weapons of mass destruction bequeathed to the nation-states, on to a surprising and terrible empowerment of extreme individuals.

[...]

Nothing about the way I got involved with computers suggested to me that I was going to be facing these kinds of issues.

(My emphasis.) What Joy (whom I personally consider among the greatest people in the history of computing) describes in that last sentence is striking not because of what it implies, but because we don't hear it often enough.

When we hear the word "ethics" together with "computers" we immediately think about issues like copyright, file trading, and the like. While at Drexel as an undergrad I took a "computer ethics" class where indeed the main topics of discussion were copying, copyright law, the "hacker ethos", etc. The class was fantastic, but there was something missing, and it took me a good while to figure out what it was.

What was missing was a discussion of the most fundamental ethical problem of all when dealing with a discipline, particularly one like ours where "yesterday" means an hour ago and last year is barely last month. We run faster and faster, trying to "catch up" and "stay ahead of the curve" (and any number of other clichés). But we never, ever ask ourselves: should we do this at all?

In other words: what about the consequences?

Let's take a detour through history. Pull back in time: it is June 1942. Nuclear weapons, discussed theoretically for some time, are rumored to be under development in Nazi Germany (the rumors started around 1939 -- but of course, back then most people didn't quite realize the viciousness of the Nazis). The US government, urged by some of the most brilliant scientists in history (including Einstein), started the Manhattan Project at Los Alamos to develop its own nuclear weapon, a fission device, or A-bomb. (Fusion devices -- also known as H-bombs -- which use a fission reaction as the starting point and are orders of magnitude more powerful, would come later, based on the breakthroughs of the Manhattan Project.)

But then, after the first successful test at the Trinity site on July 16, 1945, something happened. The scientists, who up until that point had been so absorbed by the technological questions that they had forgotten to think about the philosophical ones, realized what they had built. Oppenheimer, the scientific leader of the project, famously said:

I remembered the line from the Hindu scripture, the Bhagavad-Gita: Vishnu is trying to persuade the Prince that he should do his duty and to impress him he takes on his multi-armed form and says, "Now I am become Death, the destroyer of worlds."
While Kenneth Bainbridge, in charge of the test, later recalled that at the time he told Oppenheimer:
"Now we are all sons of bitches."
Following the test, the scientists got together and tried to stop the bomb from ever being used. To which Truman said (I'm paraphrasing):
"What did they think they were building it for? We can't uninvent it."
Which was, of course, quite true.

"All of this sanctimonious preaching is all well and good" (I hear you think) "But what the hell does this have to do with computer science?".

Well. :)

When Bill Joy's piece came out, there was a lot of discussion on the topic. Many reacted viscerally, attacking Joy as a doomsayer, a Cassandra, and so on. Eventually the topic sort of died down. Not much happened. September 11 and then the war in Iraq, surprisingly, did nothing to revive it (contrary to what one might expect). Technology was called upon in aid of the military, spying, anti-terrorism efforts, and so on. The larger question, of whether we should stop to think for a moment before rushing to create things that "we can't uninvent", has been largely set aside. Joy was essentially trying to jump-start the discussion that should have happened before the Manhattan Project was started. True, given the Nazi threat, it might have been done anyway. But the more important point is that if the Manhattan Project had never started, nuclear weapons might not exist today.

Uh?

After WW2 Europe was in tatters, and Germany in particular was completely destroyed. There were only two powers left, only two that had the resources, the know-how, and the incentive to create nuclear weapons. So if the US had not developed them, it would be reasonable to ask: what about the Soviets?

As has been documented in books like The Sword and the Shield (based on KGB files), the Soviet Union, while powerful and full of brilliant scientists, could not have brought its own nuclear effort to fruition but for two reasons: 1) the Americans had nuclear weapons, and 2) they stole the most crucial parts of the technology from the Americans. The Soviet Union was well informed, through spies and "conscientious objectors", of the advances in the US nuclear effort. Key elements, such as the spherical implosion device, were copied verbatim. And even so, it took the Soviet Union four more years (until its first test on August 29, 1949) to duplicate the technology.

Is it obvious then, that, had the Manhattan project never existed, nuclear weapons wouldn't have been developed? Of course not. But it is clear that the nature of the Cold War might have been radically altered (if there was to be a Cold War at all), and at a minimum nuclear weapons wouldn't have existed for several more years.

Now, historical revisionism is not my thing: what happened, happened. But we can learn from it. Had there been a meaningful discussion on nuclear power before the Manhattan Project, even if it had been completed, maybe we would have come up with ways to avert the nuclear arms race that followed. Maybe protective measures that took time, and trial, and error, to work out would have been in place earlier.

Maybe not. But at least it wouldn't have been for lack of trying.

"Fine. But why do you talk about computer science?" someone might say. "What about, say, bioengineering?" Take cloning, for example, a field similarly rife with both peril and promise. An ongoing discussion exists, even among lawmakers. Maybe the answer we'll get to at the end will be wrong. Maybe we'll bungle it anyway. But it's a good bet that whatever happens, we'll be walking into it with our eyes wide open. It will be our choice, not an unforeseen consequence that is almost forced upon us.

The difference between CS and everything else is that we seem to be blissfully unaware of the consequences of what we're doing. Consider for a second: of all the weapon systems that exist today, of all the increasingly sophisticated missiles and bombs, of all the combat airplanes designed since the early 80's, which would have been possible without computers?

The answer: Zero. Zilch. None.

Airplanes like the B-2 bomber or the F-117, in fact, cannot fly at all without computers. They're too unstable for humans to handle. Reagan's SDI (aka "Star Wars"), credited by some with bringing about the fall of the Soviet Union, was a perfect example of the influence of computers (unworkable at the time, true, but a perfect example nevertheless).

During the war in Iraq last year, as I watched the (conveniently) sanitized nightscope visuals of bombs falling on Baghdad and other places in Iraq, I couldn't help but think, constantly, of the number of programs and microchips and PCI buses that were making it possible. Forget about whether the war was right or wrong. What matters is that, for ill or good, it is the technology we built and continue to build every day that enables these capabilities for both defense and destruction.

So what's our share of the responsibility in this? If we are to believe the deafening silence on the matter, absolutely none.

This responsibility appears obvious when something goes wrong (like in this case, or in any of the other occasions when bugs have caused crashes, accidents, or equipment failures), but it is always there.

It could be argued that after the military-industrial complex (as Eisenhower aptly described it) took over, and market forces -- which are inherently non-ethical (note: non-ethical, not un-ethical) -- took hold, we lost all hope of having any say in this. But is that the truth? Isn't it about people in the end?

And this is relevant today. Take cameras in cell phones. "Wow, cool stuff," we said. But now that we've got 50 million of the little critters out there, suddenly people are screaming: the vanishing of privacy! Aiee! Well, why didn't we think of it before? How many people were involved at the early stages of this development? A few, as with anything. And how many thought about the consequences? How many tried to anticipate, and maybe even somehow circumvent, some of the problems we're facing today?

Wanna bet on that number?

Now, to make it absolutely clear: I'm not saying we should all just stow our keyboards away and start farming or something of the sort. I'm all too aware that this sounds too preachy and gloomy, but I put myself squarely with the rest. I am no better, or worse, and I mean that.

All I'm saying is that, when we make a choice to go forward, we should be aware of what we know, and what we don't know. We should have thought about the risks. We should be thinking about ways to minimize them. We should pause for a moment and, in Einstein's terms, perform a small gedankenexperiment: what are the consequences of what I'm doing? Do the benefits outweigh the risks? What would happen if anyone could build this? How hard is it to build? What would others do with it? And so on.

We should be discussing this topic in our universities, for starters. Talking about copyright is useful, but there are larger things at stake, the RIAA's pronouncements notwithstanding.

This is all the more necessary because we're reaching a point where technologies increasingly deal with self-replicating systems that are even more difficult to understand, not to mention control (computer viruses, anyone?), as Joy so clearly put it in his article.

We should be having a meaningful, ongoing conversation about what we do and why. Yes, market forces are all well and good, but in the end it comes down to people. And it's people, us, that should be thinking about these issues before we do things, not after.

These are difficult questions, with no clear-cut answers. Sometimes the questions themselves aren't even clear. But we should try, at least.

Because, when there's an oncoming train and you're tied to the tracks, closing your eyes and humming to yourself doesn't really do anything to get you out of there.

Categories: science, soft.dev, technology
Posted by diego on February 23, 2004 at 10:27 PM

so long, ZIP

I'm spending a couple of hours today retiring the few backups I still have on ZIP (250 MB) and moving them over to CD-R. ZIP is just too slow for large amounts of data (at least compared to 48X CD drives), and keeping two separate media (CD and ZIP) is too much of a pain. Plus, CD-R is simply too inexpensive these days to justify ZIPs (can't say I've tried the new 750 MB ZIPs, but I'm not inclined to either). I'm disconnecting the drive (which I haven't used in the last 3 months) and keeping it around for one of those just-in-case situations. After all, it is light and easy to carry around... so it's useful when making backups on the road (unless your notebook has a built-in CD-R, that is -- mine doesn't).

It was good while it lasted, ZIP was a great technology in its early days, and it certainly had a good 3-4 year run, considering how fast things move in storage technologies. Now to wait for the day when DVD-Rs replace CD-Rs...

Categories: technology
Posted by diego on February 21, 2004 at 11:17 AM

the new yahoo search


Yahoo! has finally dumped Google as the search technology behind Yahoo! Search. I like it!

This happened as predicted a little more than a month ago. The new Yahoo! search design had been active for some time already, but using Google for results.

Now consider that Google itself is working on a new design (here are some screenshots of what it might look like, via Aaron). The new Yahoo search looks like a more modern version of what Google does today (at least to me, this is of course subjective). But Google might be changing its design soon. So Yahoo! will end up looking like a "nicer" Google, and Google will end up looking like something else. Funny, isn't it?

I can immediately tell that the results it provides are very good. Comparisons with Google's results for similar keywords show comparable (though slightly slower) speeds, less obsession with trackbacks and such, and a good mix of weblog and non-weblog results. I got the Firebird/Firefox plugin for Yahoo! search and replaced my current default (Google, of course; I tried Teoma for a while, but it didn't work as well). Let's see if the results are consistently good enough to convince me to switch.

Plus: here is the link to find the search plugin for the Firebird/Firefox search bar. (Look for "Yahoo" on its own).

Categories: technology
Posted by diego on February 18, 2004 at 5:04 PM

not everything that shines is made of gold...

Mobitopia

Over the last few days an interesting story has developed in the US marketplace, namely Vodafone's bid for AT&T Wireless and then Cingular's counter-bid (Cingular won today). The Economist has a couple of interesting articles on it (see Who's the real winner? and Vodafone's dilemma), noting that AT&T Wireless might be less of a prize than one might think at first sight. Problems are related not only to technology integration (AT&T Wireless runs two networks, on different technologies) but also to cost and the real potential of the US market.

The technology is moving so fast that business models are also very susceptible to shifts (e.g., is it content they're selling? Bandwidth? Hosted services? A platform? All of the above?), making it much riskier to get stuck managing an old-to-new transition rather than a new-to-next-generation one. In my view, Vodafone might actually have been lucky to lose this bidding war. It's not just subscriber numbers that count.

Plus: some good comments on the topic over at Wi-Fi Networking News.

Categories: technology
Posted by diego on February 17, 2004 at 5:03 PM

more on demo 2004

Lots of cool announcements at Demo 2004. Big focus on weblogs and decentralized communities (which I find to be intimately linked with weblogging, in spirit at least if not in practice). As a follow-up to my previous post on WaveMarket's release of their location-based moblogging tool, here's Russ's own entry on the topic. Doc has a good set of pointers to what went on, but here are a couple of other things that caught my attention:

  • Feedster showed their search technology and Feedster builder, which is very cool. Congrats to Scott Johnson and the Feedster team! What they're doing with Feedster shows IMO that if the oft-maligned semantic web ever arrives it will be in the form of the gathering of information formed by decentralized self-organizing communities who provide context to infer the semantics, rather than forcing people to enter them on their own. Yes. Like weblogs are.
  • SixApart demoed a new set of moblogging tools that work both with TypePad and MovableType. Kudos as well. I must get a decent mobile phone and start playing with this new cool stuff.
If weblogging, RSS, syndication (and related technologies) didn't come of age in 2003 (which I guess some would argue, though not me), then 2004 looks like a good bet, don't you think?

Categories: technology
Posted by diego on February 17, 2004 at 4:20 PM

location-based blogging on mobiles

Congratulations to Russ and WaveMarket for the release of their new location-based blogging/information WaveIQ sharing system at DEMO 2004!

From the press release:

WaveIQ consists of three software products, all now available:
  • WaveSpotter—a cellular map interface that allows users to move about, letting them drill down to street level and post or consume blogs.
  • WaveBlog—a company-hosted super blog serving as a multiple channel informational clearinghouse engineered by uber blogger Russell Beattie, WaveMarket director of blog engineering.
  • WaveAlert—wireless operator infrastructure that allows users to be notified whenever they enter or leave a designated area. The server software powers a scalable system that reduces network loads and hardware requirements.
Uber-blogger eh? :-)

Sounds very cool. Congrats again, Russ. And, when can we try this on for size? :)

Categories: technology
Posted by diego on February 16, 2004 at 1:18 AM

glancing

At ETech Matt Webb presented Glancing (slides here):

Glancing is an application to support small groups by simulating a very limited form of eye contact online. By small groups I mean about 2 to a dozen people.
Which covers part of the often overlooked area of underlying and implicit group dynamics, rather than the more overt and explicit kind. Anne has some interesting comments on it too. Very interesting ideas. Will have to think more about this (I've been saying this a lot recently both here and to myself, which is probably a measure of how many other things I have to do... so many things going on this week... :)).

Categories: technology
Posted by diego on February 13, 2004 at 11:52 PM

movabletype and db versions

Dylan has a great post on recovering from a database version change that left his MT weblog data inaccessible. Got me thinking about my recent brush with disaster, and the possibility of moving to mysql (I didn't know BerkeleyDB had problems with large DBs, and my weblog is well over 1,500 entries right now). Not with this server but, since I'm planning to switch servers soon maybe I'll do it then.

Categories: technology
Posted by diego on February 6, 2004 at 3:31 PM

mythtv

Adding to the List of Cool Things I Didn't Know About: MythTV. Sam explains the different things he's tried to get the system running. I had seen/read of other DIY PVR systems or projects, but nothing as sophisticated as what MythTV appears to be. Something else to keep an eye on.

Categories: technology
Posted by diego on February 4, 2004 at 11:51 PM

faifzilla.org

Just found faifzilla.org, home to Free as in Freedom: Richard Stallman's Crusade for Free Software, a biography of sorts of Richard Stallman. Read bits and pieces of it, very, very interesting. And: an essay on the online book by Eric Raymond (linked from the main site).

The pluses of random web navigation... :)

Categories: technology
Posted by diego on February 3, 2004 at 1:12 AM

mt-rebuild: rebuilding movabletype from the command line

My attempt yesterday at doing a full rebuild ended in pathetic failure: the normal load on the machine plus the rebuild process meant that the page never got to the second stage. This was clearly a problem with browser timeouts (MT is controlled 100% through the web browser) rather than with the rebuild process itself. So I spent some time today looking for a way to manage MovableType from the command line. I had tried a few weeks ago and didn't get anywhere; this time I had more luck and quickly found Timothy's excellent mt-rebuild: The rebuild script to end all rebuild scripts, which solved my problem with a simple command of the form "mt-rebuild.pl -mode="all" -blog_id=xx". (It did take a few hours to do a full rebuild, but that has nothing to do with the script and everything to do with the machine's load and speed.) The only comment I'd make is that it doesn't seem to have a switch to provide progress feedback, so you don't know what's going on -- but so what, it's not as if it's a consumer application or anything.

Yes, this is old hat (release date was almost a year ago) but I missed it when it came out and we know how it is with the web and its tendency to bury yesterday's news under a new avalanche of discussion, comments, posts, news, and other interesting stuff :).

This is exactly what I needed -- thanks, Timothy, for making it available! His other MT plugins are pretty cool too, including mt-publish-on, which I'll check out when I have the time, since I've talked about something like it before.

Good stuff.

Categories: technology
Posted by diego on February 2, 2004 at 10:04 PM

my wired | tired | expired

Since I thought the latest wired | tired | expired (which I linked to in the previous entry) was pretty lame, I decided to write my own. :) Here it goes.
Wired               | Tired       | Expired
Ultra-wideband      | Bluetooth   | IrDA
Peta                | Tera        | Giga
Mobile broadcasting | MMS         | SMS
BitTorrent          | TiVO        | TV
Radians             | < 6 degrees | 6 degrees
RSS ads             | AdSense     | Banner Ads
Introverster        | Orkut       | Friendster
Categories: technology
Posted by diego on February 2, 2004 at 11:18 AM

microsoft and google

Still catching up on some of yesterday's articles that I left open for reading later (does it show?). From the New York Times comes the shocking (shocking I tellsya!) revelation that Microsoft is taking on Google. Seriously though, quote:

"We took an approach that I now realize was wrong," [Bill Gates] said of his company's earlier decision to ignore the search market. But, he added pointedly, "we will catch them."
"We will catch them." Simple and to the point, don't you think? The comparisons with Netscape are the order of the day, of course. Yahoo! gets a short mention (less than it deserves IMO; after all, they are probably the one company aside from MS that has the technology reach and depth in the area to be a big factor, as they are in fact today -- AOL doesn't quite have the tech know-how to make the list, even if they have the millions of users). Still, there are a few other interesting tidbits of information in the article that make it worth reading.

Categories: technology
Posted by diego on February 2, 2004 at 12:21 AM

server platforms: choice, or lack thereof

One of the things we're doing in the process of upgrading our infrastructure is getting a new public server and a development server. And one of the biggest questions is (as usual) which platform to base it on.

The first problem that comes up is that pre-install choices for OSes are pretty much limited to either Red Hat Linux or Windows Server 2003 (in some cases Windows 2000 Server). For obvious reasons, the development server should be a copy of the deployment server, which means that, if you have different providers, you have to settle for the "intersection" between the various offerings.

Note: all of these comments are made from the POV of multiplatform server applications (Java, Perl, etc.) with small server clusters. That is, if I say "Windows and Linux are roughly similar in X", it means roughly similar in that context. Java in particular is sort of an equalizer in that sense, in fact helping Microsoft by bringing Windows Server up to par (who would have thought?). Additionally, it's only in the last few years that Sun has been more aggressive in ensuring parity between platforms; there was a time (say, 6 or 7 years ago) when Sun's Windows VM was the best all-around VM (remember it took a while for Sun to implement native threads on Solaris?). I have no doubt that in other contexts, for other types of webapps or webservices, or at different scales (say, deploying 100 servers instead of 10), both the parameters and the results of comparing these choices would be quite different.

Some might say that Windows "is not an option", but I prefer to make decisions based on objective information when possible (with personal preference a factor, of course, but not an overriding one), and I think it depends on how much money/resources you've got to deal with it. As far as one Internet server is concerned Windows is more expensive than Linux but not by a huge amount (as a comparison, Dell.ie pre-installs Windows 2003 Server for Euro 700, and RH9 for Euro 170). But when you deploy it in an internal network you have to start thinking about Client Access Licenses (CALs) which cost $50 a pop or more, installation licenses, and so forth, and this is where the "resources" come in: if you install Windows you need someone to spend a lot of time figuring out licensing and making sure that you are using the licenses properly, etc. So aside from Windows being more expensive, it's also more difficult to manage from the licensing point of view. Additionally, going beyond a few servers complicates matters even further. Needless to say, startups usually will not have the time for that. I know we don't.
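To make the licensing arithmetic concrete, here's a back-of-the-envelope sketch in Python. The prices are the rough figures quoted above (mixing the euro pre-install prices with the roughly $50 CAL figure, as the comparison does), and the 25-client internal network is a made-up example:

```python
# Back-of-the-envelope server licensing cost comparison.
# Figures are the rough ones from the text (Dell.ie pre-install prices,
# ~$50 per Client Access License); 25 clients is a hypothetical network.

def windows_cost(clients: int, server_license: float = 700.0,
                 cal_price: float = 50.0) -> float:
    """Windows Server: server license plus one CAL per internal client."""
    return server_license + clients * cal_price

def linux_cost(server_license: float = 170.0) -> float:
    """Red Hat Linux pre-install: flat price, no per-client licenses."""
    return server_license

clients = 25  # hypothetical internal network size
print(f"Windows: {windows_cost(clients):.0f}")  # 700 + 25 * 50 = 1950
print(f"Linux:   {linux_cost():.0f}")           # 170
```

The point isn't the exact totals: it's that the Windows line grows with every internal client you add, while the Linux line stays flat, before you even count the time spent tracking the licenses themselves.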

A few years ago I deployed+used Windows Server in the company I was working for then and it worked okay (that is, it did what it had to do, nothing earth-shattering). The company was basically using Windows on the client (IE was the primary target platform) and that sort of dictated that the servers be Windows as well, particularly because there weren't that many servers. Downsides were mainly that you had to keep up with the neverending stream of security updates and the licensing stuff, but in that case there was a person who took care of the IT infrastructure.

I haven't used Windows 2003 Server, but Windows 2000 was pretty stable (again, all of this in the context of multiplatform server apps that connect to an SQL DB in the backend). I think it's interesting that now that Windows is more on par with Linux in terms of stability and features (clustering, remote terminal, etc.), what really becomes the bigger barrier to adoption is both the price and the "management" of licenses. Linux is just easier in that sense: make as many copies as you want, install as many servers as you need, with as many clients as you like. Microsoft is, in effect, shooting itself in the foot (for a certain segment of the audience at least) by making their licensing more convoluted than it should be.

Now, though, the comparison has become a little more complicated, since Red Hat has discontinued Red Hat Linux and split the product line in two. On one hand we've got Fedora, which is sort of the "spiritual heir" of RH9, and on the other hand you've got Red Hat Enterprise Linux (RHE for short), which is their commercial offering. RHE comes in three flavors: WS (workstations), ES (small-to-medium servers) and AS (enterprise servers). As far as servers are concerned, the main differences between ES and AS are (btw, the information on RH's site isn't nearly as clear as it should be):

  • ES supports only x86, while AS also supports Itanium, AMD64, and others.
  • ES supports at most two processors, while AS has no such limit.
  • AS has round-the-clock tech support, while ES's is more limited.
  • AS supports more than 8GB of memory on x86, while ES does not.
The problem with any of these options is, of course, that you have to pay a subscription that on the surface rivals the price of Windows Server. I say "on the surface" because the licensing is simpler: you basically pay the subscription, you get updates when you want them, and that's it. One subscription per install, and everything else is pretty much Linux as usual, which is substantially simpler (and so less costly) than Windows. (Btw, Russ also posted some comments yesterday with his views on the Red Hat transition.)

Ah, but why not go with another Linux distro you say? SuSE? Gentoo? or Debian? (Debian has lots of fans :)). Why not FreeBSD? Well, here we are back to what I mentioned at the beginning, that many providers pre-install either Windows or Red Hat. They don't preinstall, say, SuSE, or whatever.

Fine, you'd say: get a clean machine and install your favorite free distro yourself.

Which is an option, yes, but, but... if you are getting dedicated servers that are pre-installed on a remote location you're faced with the prospect of either a) going to the remote location to do the reinstall or b) doing the reinstall remotely, without being at the console. Both are possible (maybe b is not in some cases) but neither is very appealing, especially when you want to work on what you need rather than spend time making the OS boot properly. If we had more time to work on that things might be different, but that's not the case.

So.

For the moment, Red Hat is the better option for this kind of usage, since there is an upgrade path (even if it's convoluted) to either RHE or Fedora (although Fedora might be too much in flux as a distro to base a production system on -- I guess time will tell). If these moves by Red Hat lead some hardware or Internet providers to pre-install systems other than Red Hat, there might be more choice in the future (for when you need things to "just work"). And if Microsoft changed their licensing scheme to something like "here, sign this two-page contract and pay us $500 a year, and you get your updates and can use this with as many clients as you want" (kind of like what Sun did with their new licensing scheme), then Microsoft would be more of a contender in this area, I think.

Oh, and btw, both Mac OS X Server and Solaris are indeed good options, but the hardware is simply more (sometimes a lot more) expensive, at least here in Europe, which makes it difficult to justify cost-wise. I hope that someday I'll understand why, if everything is built in factories all over the world, shipping stuff to Dublin instead of to Palo Alto, CA makes prices jump 40%. Oh yeah, and software is more expensive too. I guess the bits get tired from all that swimming and have to be compensated somehow. :-)

Categories: technology
Posted by diego on January 31, 2004 at 12:25 PM

yet another email virus

"Fastest ever to spread", so far at least. Some coverage here from News.com. I have some thoughts on this, but will leave them for later (busy busy!).

Categories: technology
Posted by diego on January 27, 2004 at 10:18 PM

the impact of the macintosh

On my entry about the Mac's 20th birthday there were a couple of comments that are interesting enough to echo here.

First, Chris posted a link to the 1984 commercial (thanks Chris!).

Second, Doug posted a long comment that I'll quote verbatim before replying:

As I recall, the '1984' commercial did not have much impact. It was advertising a product that most viewers had never heard of, but failed to introduce the product or suggest any of its benefits. The commercial was never run again.

Also, the Mac wasn't really that 'innovative'. It didn't have much of anything that the Apple Lisa didn't have... except that the Mac was at least somewhat affordable.

Finally, I would note that the 1984 Macintosh was generally considered to be a flop. Sales were abysmal when compared with the then-ancient Apple ][ -- it had taken 74 days for the Mac to sell its first 50K units, but when the IIc was introduced a few months later, Apple sold 50K of them in 7 hours. Worse, Mac sales were miniscule when compared with the IBM PC. The Macintosh failed to stop Apple's decline from #1 microcomputer maker to tiny niche player. The 9" black-and-white low-res screen and undersized keyboard marked it as a rich man's toy.

It was the introduction of the Mac II in '87, with available 13" 640x480 color display and a real keyboard, that finally gave Apple a system that was attractive to professionals.

What Doug says is all true. However, I disagree with his implied conclusion (that neither the Mac nor its launch was as important as I make them out to be).

Specifically, on the points Doug mentions: the Lisa was about one third of the price of a Xerox Star, and the Mac was one-fifth to one-fourth of the price of the Lisa. The Mac-to-Lisa price drop came in part from hardware advances but, more importantly, from top-notch engineering. Price matters. Also, the Mac improved on the Lisa in several aspects; to a degree it was an independent project that ran in parallel, and that Jobs took over when the Lisa project started to sink under its own weight.

Regarding sales: well, the first Mac was underpowered (a year later, the addition of a hard drive among other things made the product a much better proposition, and it sold accordingly), which hurt it. But at the heart of the difference in sales there's also the "Apple factor": not licensing the OS, using proprietary components, pushing for very high margins, etc., which had a big effect IMO.

As far as the impact of the 1984 commercial is concerned, I would just ask how many other commercials for computers are still known to the level that one is, and leave it at that.

Finally, if we start comparing things on the basis of what had already been proposed or developed to a degree (but not seriously marketed) before a product was launched, then the Lisa wasn't really that 'innovative' either, because it borrowed a large number of concepts from the Xerox Star (which in turn borrowed from Engelbart's work), as I mentioned in the entry. There's a big difference between doing something for a tiny audience, or playing with it in a lab, and designing it so you can manufacture hundreds of thousands of units a year. We could also argue (for example) that as crucial as the Mac II were the LaserWriter and PageMaker, which made the idea of "desktop publishing" a reality.

The original Mac planted the seeds for what was to come, and the Mac II and everything after it, on all sides of the aisle (yes, including Windows), resemble to a large degree what was in that "rich man's toy", as Doug describes it.

A "desktop". Icons. Mouse. Graphical filesystem navigation. Applications running inside windows. Menus. Bit-mapped graphics. The idea of a common "Look and Feel", established through published guidelines (this one is fading now, though, what with our modern skinnable apps and such :)). An API (the Toolbox) for developing apps against, with high-level OS services. Most of these things existed before it, but the original Mac had it all, and in some respects it was several years ahead of its time (we all talk about computers as "appliances" now given the right context, but Jobs always saw the Mac as just that: a machine that could sit comfortably in any living room without looking out of place).

What is sad is that we haven't really moved too far beyond those basic ideas, particularly since many of them were not designed for the massive amounts of information we deal with today (say, the number of files on our hard drives) and so in some sense some ideas have been as much a problem as a solution. But that's how it played out.

The impact of the Mac is therefore, in my opinion, difficult to overestimate. It defined what computers should be, rather than bringing up a fancier version of what they were. And that's what's important, I think.

Update: [via Peter] Andy Hertzfeld's Folklore page dedicated to the early days of building the Macintosh. Very, very cool.

Categories: technology
Posted by diego on January 26, 2004 at 12:08 PM

apple: beyond the mac

A News.com article that looks at the past and present of Apple, and at how its recent move into "digital lifestyle" products might play out. Interesting read.

Categories: technology
Posted by diego on January 26, 2004 at 12:25 AM

the mac turns 20

macintosh.jpgAlmost forgot: January 24, 1984, was the launch of Apple's Macintosh. Yes, a lot of the ideas were already present in Xerox's Star, but the Mac did include many inventions (I guess that today we'd call them "innovations"), and it did mean that all of these things were within the realm of the affordable. More importantly, the Mac forced the rest of the industry to improve.

Some time ago I found online a copy of the famous '1984' ad that launched the Mac, directed by Ridley Scott. It's really great. I can only imagine the impression it must have made at the time. I mean, a computer being advertised, quite literally, as part of a revolution? These days all "revolutionary" icons have been turned into marketing gimmicks (say, images of Che Guevara being used to sell T-shirts, jeans and such), but 20 years ago that must have been quite the thing to see. And to sell a computer, no less.

One more thing: something interesting from this CNN article:

Twenty years ago, on January 24, 1984, Apple Computer launched the Macintosh. It contained virtually unknown features, including simple icons, and an odd little attachment called a mouse.

Many newspaper stories at the time had to include a definition. Silicon Valley's newspaper The San Jose (California) Mercury News, for example, described the mouse as "a handheld device that, when slid across a table top, moves the cursor on the Mac's screen."

Heh.

Categories: technology
Posted by diego on January 24, 2004 at 11:12 PM

comment spam filtering - it's all about the IPs

Sam describes his new comment spam filtering system. Quote:

Then it struck me: from an ip address I have never seen before. Light bulb.

Hit and run. Strangers. People who never have been here before. These people are unlikely to be seen again.

And if they don't come back, it is not possible to have a two way conversation, is it?

So I set to work. I wrote a script to scan my Apache logs for everybody who has ever visited my weblog within the last week. Bots, aggregators, and an occasional carbon based life form, I make no distinction.

Then I add in everybody who has left a comment in the last ninety days. And not just ip addresses, but also urls.

All these people are welcome to comment freely.

Very, very cool idea! Use the logs to establish an implicit community effect, a kind of automatic self-updating whitelist. It leaves me thinking "and where else can this be applied?". Mmm...
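Sam's approach can be sketched in a few lines of shell (the log paths, file names, and the moderation step are assumptions for the example; Apache's common and combined log formats both put the client IP in the first field):

```shell
# Build a whitelist from recent Apache access logs plus recent commenter
# IPs, then check an incoming comment against it.
# (file locations below are made up for the example)
awk '{ print $1 }' /var/log/apache/access.log /var/log/mt/commenters.log \
  | sort -u > /tmp/whitelist.txt

# An incoming comment from an IP never seen before gets held, not posted.
ip="203.0.113.9"
if grep -qxF "$ip" /tmp/whitelist.txt; then
  echo "accept"
else
  echo "hold for moderation"
fi
```

The real version would also whitelist commenter URLs and expire stale entries, but the core is just a set union over the logs.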

Categories: technology
Posted by diego on January 21, 2004 at 11:43 PM

rethinking weblogging

[via Danny] Bruce Ecker: rethinking weblogging (and everything else). Great piece. Hard to summarize. Go read it! :)

Categories: technology
Posted by diego on January 21, 2004 at 11:39 PM

digital law conference @ Yale

The other day I got an invitation to the CyberCrime and Digital Law Enforcement conference to be held at Yale Law School, March 26-28. I won't be able to attend, but it looks interesting, with some good sessions and speakers. Mr. Lessig would seem to be missing from the roster of speakers, but Jennifer Granick (who is also at Stanford Law School) will be there.

Categories: technology
Posted by diego on January 18, 2004 at 8:05 PM

lower DSL prices in Ireland?

Karlin reports that:

Eircom said today that it has applied to ComReg to further lower the price of its main broadband DSL product, from Euro 54.45 to Euro 39.99 monthly (both incl VAT). Good news; I can't imagine ComReg will refuse.

(That's from USD 70 to USD 50 at current exchange rates, for those on the other side of the Atlantic!) Good news indeed. Fingers crossed...

Categories: technology
Posted by diego on January 16, 2004 at 10:49 AM

file sharing = piracy? Not really.

An interesting Salon article: Is the war on file sharing over?:

If one is willing to believe the happy talk from music business executives, the tide has finally turned against file sharing, thanks to the get-tough tactics employed by the Recording Industry Association of America.

Last fall, the RIAA began filing lawsuits against individual users of peer-to-peer trading sites, and the strategy, the RIAA says now, has paid off. The group is careful not to declare a final victory over file trading, but things are finally beginning to look up for a business long in decline, say industry representatives. After years of scoffing at copyright laws, Americans are finally beginning to understand the gravity of file trading's offense against copyright.

The article is interesting. But what I find most interesting is the automatic alignment made in media discourse between file sharing and piracy. There are many, many uses for file sharing other than those the RIAA defines as illegitimate (note, I am not even saying anonymous file sharing, although there are worthy uses for that too). Sure, the media loves a good fight, and that's why the focus is on this comparison. But the uses of sharing should, can, and will move beyond those in dispute. And not just for files, either.

Why am I saying this? Well, can't you guess?

Stay tuned. :-)

Categories: technology
Posted by diego on January 15, 2004 at 3:02 PM

the story of an outage

a tale of mistakes, backups, recovery (by a hair), and why permalinks are not so permanent after all

out·age (ou′tij) noun

  1. A quantity or portion of something lacking after delivery or storage.
  2. A temporary suspension of operation, especially of electric power.

    When I woke up yesterday after a brief sleep I started to log back in to different services, and just as I was seeing that something was funny with my server, Jim over at #mobitopia asked: "is your site down?"

    Damn.

    As I checked what was happening, I could see that all sorts of things were not working on the server. I was starting to fear the worst ("the worst" in the abstract, nothing specific) when I remembered that I had seen similar symptoms a couple of months ago, and back then it had been a disk space problem. I run "df" and sure enough, the mountpoint where a bunch of data related to the services (including logs) is stored was full (since November the number of pageviews a month has increased to over 200,000, which creates pretty big logfiles). As last time, the logs were the culprits. Still half-asleep, I start to compress, move things around, and delete files, when suddenly, after a delete, I stop cold: "No such file or directory".

    What? But I had just seen that file...

    I look up the console history and four rm commands had failed similarly.

    Uh-oh.

    I run "pwd". Look at the result. "That's not right...". I was not where I thought I was.

    At that point, I woke up completely. Nothing like adrenaline for shaking off sleepiness.

    I look through the command history. At some point in my switching back and forth from one directory to another, I mistyped a "cd -" command, and it all went downhill from there. Adding to the confusion was the fact that I used to keep parallel structures of the same data on different partitions, "just in case". I stopped doing that once I got DSL back in May last year, opting instead to download stuff to my home machine, but the old structure, with old data, remained. And, even worse, my bash configuration for root doesn't display the current directory (the first thing I did after I realized that was to add $PWD to the prompt, but of course by then it was too late).
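    For reference, the prompt fix amounts to a one-line change in root's shell configuration; a sketch, assuming bash:

```shell
# In root's ~/.bashrc: include the working directory in the prompt, so
# "where am I?" is always answered before the next rm.
# \u = user, \h = host; $PWD is expanded each time the prompt is shown.
export PS1='[\u@\h $PWD]\$ '
```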

    I had just wiped out the Movable Type DB, the MT binaries (actually, all the CGI scripts), the archives, and a bunch of other stuff in my home directory.

    I took a deep breath and finished creating space, and moved on.

    The first thing I did was restart the services, now that disk space was no longer an issue. Then I reinstalled the binaries that I had just wiped out, which I always keep in a separate directory with some quick instructions on how to install them. That turned out to be a lifesaver, one of many in this little story.

    After that I put up a simple page explaining the situation (here's a copy for... err... "historical reference"), plus a hand-written feed, and worked on the problem in breaks between work.

    Then I realized that all the links that were coming in from the outside (through other weblogs, Google, etc.) were getting a 404. So as a temporary measure I redirected the archive traffic to the main page through a mod_rewrite rule:

    RewriteRule /d2r/archives/(.*) /d2r/ [R=307]

    That would return a temporary redirect (code 307) while I got things fixed (one fire out! 10 to go).

    So what next? The data, of course. When I came back to Ireland at the beginning of January I started doing backups of different things (a "new year, new backups" sort of thing): I backed up all the server data directories on Thursday, and then on Saturday I did what I thought was a backup of my weblog data, through MovableType's "Export" feature. As things turned out, the latter proved useless, and it was the "binary" backup that saved the day.

    Why? Well, as I started looking at things, I went to MT's "import" command in cavalier fashion and was about to start when the word "permalink" popped up in my head. Then it grew into a question: "What about the permalinks?"

    The question was valid because my permalinks are directly based on the MT entry IDs. Therefore, if an import changed the entry IDs, it would also break all the permalinks. I started cursing myself for not having switched over to entry-based strings for permalinks, but that didn't help. So I did a little digging and realized that I was right: MT assigns entry IDs on a system-wide basis. So if you have multiple weblogs in the same DB (which I have, some of them private, some for testing, etc.) OR if you have to recover the data from an export (which I had to do), you're out of luck. More likely than not, the permalinks will not work anymore. The exported file did not include IDs. Re-importing would generate the IDs again. Different IDs. Different links. Result: broken links all over the place, both within the weblog and from external sources.

    This is clearly an issue with the MT database design, which doesn't seem too well adapted to the idea of recovery. To be fair, however, I am not sure how other blogging software deals with this problem, if at all. I think this is one big hole in the weblog infrastructure that we haven't yet completely figured out, both for recovery and for transitions between blog software (as Don noted recently).

    This is when I started thinking that things would have been much easier if I had written my own weblog software. :) That thought would return a few times over the next 24 hours, but luckily I was busy enough with other things not to indulge in it too much.

    After looking online and finding nothing on the topic, I came to the conclusion that my only chance was a direct restore of the "binary" copy I had from last Thursday (that is, replacing the clean database with the backup directly). I did the upload, put everything in place, and things seemed to go well: I could log in to MT, and the entries up to that point were right where they had to be. So far so good. I was about to do a rebuild when I thought that maybe now was a good time to close off the comment threads on all entries (to avoid ever-increasing comment spam), and I spent some time trying to figure out how to use the various MT tools to close comments on old entries. However, they all seem to be written for MySQL rather than BerkeleyDB. It wasn't a hard decision to set that aside and move on.

    So I started a full rebuild. The first 40 entries went along fine, albeit slowly. Then nothing happened. Then, failure. I thought for a moment that, for some strange reason, the redirect I had set up yesterday was causing the problem, so I removed it, restarted the server, and tried again. Failed again. No apparent reason.

    I got angry for a second, but then I remembered that the "binary" backup was of everything, including the published HTML files. Aha! I uploaded those, crossed my fingers, did a rebuild of only the index files, and everything was up again. This was actually important for another reason: since the images uploaded through MT end up by default in the archives directory, and they are linked from the entries, you need a backup of that directory or the images (and whatever else you upload into MT) will be gone if you lose the site.

    So the solution up until this point had been a lot simpler than I thought at the beginning.

    But wait! All the entries after last Thursday were missing, and I didn't have a backup for those. That was when RSS came to the rescue, in three different forms: 1) I download my own feeds into my aggregator, so there I had a copy up to a point. 2) Some kind souls, along with their condolences, sent along their own copies of the latest entries (Thanks!!--and thanks to those who sent good wishes as well). 3) Search engines (Feedster was the most up to date--btw, it was Matt who suggested yesterday, also on #mobitopia, that I check out Feedster as a source of information, a great idea that really applies to many search engines if their database is properly updated) had cached copies that I could use to check dates and content. So, armed with all that information, I set out to recreate the missing entries.

    Here the problem of the permalinks surfaced again. I had to be careful with the sequencing, or the IDs wouldn't match. So I re-created empty entries, one by one, to maintain the sequence (leaving them unpublished), actually posted a couple of updates on what was going on, and then published the recovered entries as I entered the content and set the right dates.

    So. Everything is restored now (except the comments from the last week, which are truly lost--this makes me think that setting up comment feeds would be a good idea. However, that doesn't address how I would recreate the comments given what happened. Would I post them myself under the submitters' names? That doesn't seem right at all. Another problem with no obvious solution, given the combination of export/ID issues with MT).

    What's strange is that there's been a slight breakdown in continuity now, because I did "post" some updates to that temporary index file, but they couldn't be part of the regular blogflow. Hopefully this entry fixes that to the extent possible.

    Okay, lessons learned?

    1. Backups do work. :) I am going to do another full backup today, and I'll try to set up something automated to that effect. (Yes, I know I should have done it before, but as usual there are no simple solutions, and then you leave it for the next day... and the next...). Plus, backups of MT installations should always include both the DB and the published data, to make recovery quick. (I have about 1500 entries, which amount to something like 20MB of generated HTML--additionally, the images are posted directly to the archives directory, so if you're not backing that up, you've lost them.)
    2. For MovableType, the export feature is not so great as far as backups are concerned. The single-ID-sequence-per-database problem is a big one IMO, and I don't think MT is alone in this. We need to start looking at recovery and transition in a big way if weblogs are going to hit the mainstream (and we want permalinks to be really permanent).
    3. Solutions are often simpler than you think, if you have the right data. Having a full backup makes recovery in this case easy and fast.
    4. This stuff is still too hard. What would a less technically-oriented user do in this situation? Granted, it was my knowledge (since I was fixing stuff directly on the server) that actually created the problem in the first place, but there are lots of ways in which the same result could have been "achieved", starting from simple admin screwups, hardware failures, etc.
    Overall, this has been a wake-up call in more than one sense, and it has set off a number of ideas and questions in my head. How to solve these problems? I'll have to think about it more.
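    For what it's worth, lesson 1 can be automated with something as blunt as a nightly cron script; a sketch (all paths here are hypothetical, adjust to your installation):

```shell
# Back up both halves of an MT weblog: the database AND the generated site
# (which includes archives/ and therefore the uploaded images).
# Paths below are made up for the example.
stamp=$(date +%Y%m%d)
tar czf "/backups/mt-db-$stamp.tar.gz"   /www/cgi-bin/mt/db
tar czf "/backups/mt-site-$stamp.tar.gz" /www/d2r
find /backups -name 'mt-*.tar.gz' -mtime +30 -delete   # keep a month's worth
```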

    Anyway. Back to work now, one less thing on my mind.

    Where was I?

    Categories: technology
    Posted by diego on January 14, 2004 at 5:45 PM

back up--partially

Okay. Getting back to normal now. Everything seems to be back up to last Thursday. Now I'm recovering the entries I had posted since then. Entries will start showing up as I update them. And I say "update them" because I had to create the entries first as empty drafts to maintain the ID sequencing. Many lessons learned from this whole thing. More later.

Categories: technology
Posted by diego on January 14, 2004 at 2:59 PM

on OPML

On my previous entry about Postel's Law, Danny Ayers made a comment, and to a part of it I said I'd reply in a separate entry. To make the question clear, I'll restate it here in a different way to involve the technology only.

Essentially, Danny was asking "If you have used OPML, would you agree that OPML does not follow this route you are advocating of adding as many constraints as possible to a spec, to make interoperability easier?" (Danny, if I misunderstood the question please let me know, but I'm pretty sure that was the essence of your comment, personal matters aside).

My answer to that question would have to be no, I do not agree. Let me explain.

I have implemented both readers and writers of OPML as used for RSS subscription lists in an end-user product (i.e., clevercactus). And there is one main point that I've found frustrating, namely that the attributes used on the "outline" element vary between tools. I have previously noted, in another context, the elements that would "complete the spec" by properly specifying these attributes.
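To illustrate the kind of variance I mean, here is the same feed subscription as two different tools might write it (the attribute names and URL are invented for the example; real tools differ in similar ways):

```xml
<!-- Tool A: identifies the feed with "title", "type" and "xmlUrl" -->
<outline title="Some Weblog" type="rss"
         xmlUrl="http://example.com/index.xml" />

<!-- Tool B: the same subscription, but using "text" and "url" -->
<outline text="Some Weblog"
         url="http://example.com/index.xml" />
```

A reader that only looks for one set of attributes silently drops the other tool's subscriptions.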

However, I've come to the conclusion that this is not a problem with the spec itself, but rather a problem of what we are using it for. As far as I can tell from the spec, it was designed to be a very simple and flexible storage mechanism. The first sentence in the spec says "This document describes a format for storing outlines in XML 1.0" (my emphasis). It doesn't say "This is a format for interchange of outlines" or anything like that.

That is, creating an interoperable format for RSS subscription lists was not part of the original "charter" of OPML.

Which is why I can't agree with Danny's statement, because the interoperability problems we all know about pop up when using OPML outside of its original intended domain.

As such, that is, as a format for local storage of outlines, the OPML spec might have done a good thing by keeping things very open. Note that the spec explicitly says, in its goals: "Outlines can be used for specifications, legal briefs, product plans, presentations, screenplays, directories, diaries, discussion groups, chat systems and stories." -- that's a big set of apps, and I'd be hard pressed to define a consistent set of common attributes for all of them. To be honest, if it were me designing it, maybe I would have chosen a different path (for example, targeting fewer applications), but that's not really the point. Design is at its core subjective.

So. Given that OPML was not originally designed as an interoperable way to store feed subscription lists, the current situation is logical, almost predictable. It seems to me (given what I've seen--I might be wrong of course) that this is a use of OPML that grew in ad-hoc fashion and as such created some incompatibility problems. But is this a problem with OPML itself? I don't think so. Usage grew beyond its original intended target, and things got a bit messy.

Okay, that's my answer to Danny's question, but I just want to be clear on what I think about OPML given the current situation, as what I said above might seem a bit too ... err... "theoretical".

That is, we still have the interoperability problems for feed subscription lists.

However, now that it's clear that it has become accepted for that use, I noticed that Dave recently put up a short RFC that clearly states "Using OPML to exchange subscription lists". My comments from October last year would, then, apply in this new context, and the new RFC already covers part of them (the most important in my mind, which is the issue of standard attributes).

This new spec of "Interchangeable OPML Subscription Lists", plus the OPML spec itself (which doesn't necessarily need to change, since it is still relevant in its original intended domain), make a simple combined spec that is useful and already deployed (granted, some aggregators might be generating different attribute names than those in the RFC, but that's a tiny change, and none of the other items under discussion that I'm aware of are in any way "deal-breakers").

Hence, OPML applied to the domain of feed subscription lists in particular is a good solution, simple and to the point. And to me that's what matters: if something does what I need, it's simple, and it works, I'm all for it.

Categories: technology
Posted by diego on January 12, 2004 at 1:19 AM

syndication-land happenings

A week has gone by since I came back and I'm still catching up with some of the things that have happened in syndication-land. For example, recently, Dave released a new "subscription aggregator" (meta-aggregator?) that allows members to see who's subscribing to who and do other interesting things. I haven't had time to digest all of its implications so I'll leave the comments for later, but from what I understand (and I haven't registered yet) it looks very cool.

Another example: Feedster has released an RSS feed for their "Feed of the day" feature, and they are doing a number of other interesting things that I haven't yet had time to fully explore. And, btw, my weblog was one of the first "feeds of the day" that was chosen (look in this page near the bottom). This happened months ago! Totally missed it. Thanks!

More later as my brain processes it. :)

Categories: technology
Posted by diego on January 11, 2004 at 1:20 PM

the digital life

Today's New York Times magazine has an article, My So-Called Blog, on weblogs and the impact of our digital lives on the "real" world:

When M. gets home from school, he immediately logs on to his computer. Then he stays there, touching base with the people he has seen all day long, floating in a kind of multitasking heaven of communication. First, he clicks on his Web log, or blog -- an online diary he keeps on a Web site called LiveJournal -- and checks for responses from his readers. Next he reads his friends' journals, contributing his distinctive brand of wry, supportive commentary to their observations. Then he returns to his own journal to compose his entries: sometimes confessional, more often dry private jokes or koanlike observations on life.

Finally, he spends a long time -- sometimes hours -- exchanging instant messages, a form of communication far more common among teenagers than phone calls. In multiple dialogue boxes on his computer screen, he'll type real-time conversations with several friends at once; if he leaves the house to hang out in the real world, he'll come back and instant-message some more, and sometimes cut and paste transcripts of these conversations into his online journal. All this upkeep can get in the way of homework, he admitted. ''You keep telling yourself, 'Don't look, don't look!' And you keep on checking your e-mail.''

Well, we've all been there, haven't we? Okay, many of us have. Okay, would you believe me if I said I have? :-)

The article has a certain focus on "teenagers" or "young adults" for some reason. But that aside, it has some interesting comments and some good insights that apply to all groups I think. Everyone that is involved with new tools (using them... building them... whatever...) is trying to feel their way around.

And this is just text, and maybe pictures. A video here and there at most. And that creates a certain tension IMO, which won't really be gone until we can superimpose cyberspace with meatspace.

Right now if you want to "be online", mostly (and I emphasize mostly, as we all know that you could be IM'ing on your cellphone these days) you need to be sitting at a computer, and that means not being with others, or doing other things. The display, the keyboard, the whole UI experience pulls us in and demands a large part of our attention.

Result: a disconnect.

But, as I said, if the "real" (I keep putting real between quotes because I'm a subjectivist) and "virtual" worlds were superimposed things would be different. When that superimposition happens, there will be very little tension between interacting digitally and otherwise.

How do I mean? Science Fiction moment: You look at a restaurant and your glasses (or a retinal implant) superimpose a translucent image of its website. You get a person's business card and it contains a bluetooth chip that tells your PAN (Personal Area Network) about the person's email, etc, and their company webpage pulls up next to their smiling face and you see there that the product he's talking about hasn't been released yet. Or you have embedded a few key details into a wireless implant in your arm and everyone that sees you through the glasses (or the implant!) can see your weblog too, and see that you just posted a picture of them, taken with your cameraphone. Posting, browsing, and chatting, all from your local pub, pint in hand.

Okay, this is pretty lame as science fiction goes. I should brush up on my Snow Crash, Neuromancer, The Diamond Age, and all the rest...

Wait a minute. How did I get here from a New York Times article? Oh Well. :-)

Categories: technology
Posted by diego on January 11, 2004 at 12:50 AM

bittorrent is nice, but...

...there's always a 'but' isn't there?

I had attempted to use BitTorrent a couple of times before, but never spent more than a few minutes with it, not enough to understand what was going on. Last night, though, I gave it a little more time and, with some tips from Russ and Matt, I could get it going. I had to adjust some settings, such as the bandwidth allocated for uploads, which defaulted to 12 KB/sec and immediately started to suck up my entire upload capacity (I set it to 7 KB/sec). I chose a couple of files (three, actually) and let it download overnight. This morning, things were well on their way: two files done, the remaining one halfway through. But then it hit me: my transfers are limited!

I have a 4GB monthly transfer limit on my DSL connection (as is common here in Ireland). So now I have downloaded, in one day, over 1.5 GB of data, and I still have 1 GB to go. Then there's the uploaded data, which also counts. Eek! By the time the second transfer is finished I will have spent over 75% of my monthly bandwidth allotment. With 60% of the month still to go!
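The arithmetic, for the record (the upload figure is a rough guess on my part for the example; BitTorrent uploads while it downloads, so it's not zero):

```shell
# 4 GB monthly cap; 1.5 GB downloaded + 1 GB still to go, plus uploads
limit_mb=4096
down_mb=$((1536 + 1024))   # finished transfers + the remaining one
up_mb=512                  # assumption: what BitTorrent uploaded back
used_pct=$(( (down_mb + up_mb) * 100 / limit_mb ))
echo "used: ${used_pct}% of the monthly allotment"   # prints "used: 75% ..."
```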

Damn. I want to go back to my good old days of DSL in the Bay Area, where I had a symmetric 768 KB/sec DSL connection, with no transfer limits, at $40 a month. Okay, that's not realistic. :) But on the other hand, until transfer limits are removed (or at least raised) here, I won't be able to do much with BitTorrent. Too bad.

And, btw, this clearly has to have an impact on broadband usage. Forget about BitTorrent specifically: other types of media transfers are also quite heavy, and with that sword hanging over their heads (the sword being whatever they charge per megabyte after you cross the transfer limit), users will be more likely to treat broadband as a kind of always-on modem rather than as true broadband. Ireland is great, for technology in particular, but it definitely needs some serious improvements to both infrastructure and access to that infrastructure (see my post on mobile handset costs yesterday) to be truly competitive. There's a qualitative jump (on both the supplier and the consumer side of a market) that happens when connectivity is pervasive, always-on, fast, and relatively inexpensive, and Ireland isn't there yet. Here's hoping we won't have to wait much longer.

Categories: technology
Posted by diego on January 8, 2004 at 3:30 PM

handset prices in ireland

Mobitopia

A few months back I commented on the user UNfriendly behavior of mobile operators in Ireland when it came to phone upgrades. I put an upgrade out of my mind for a while (since, quite simply, I couldn't afford it), but today as I passed by a carphonewarehouse store I saw that they had the SonyEricsson T610 at "only" Euro 139 with a contract with Vodafone. I thought "Hey, maybe prices have gone down for some mysterious reason" and I went into the store to check things out.

Long story short, I was wrong. Prices are still outrageous. Subsequent visits to O2 and Vodafone stores confirmed that this was indeed the case.

How outrageous? Consider, as just one example, the price of the Nokia 3650. Vodafone and O2 pricing is basically Euro 300 (= 380 USD, or 210 British Pounds), both with contract; upgrade prices are exactly the same, although in some cases they might knock Euro 10 off the price if a) you've been a customer for more than 2 years and b) you've spent more than Euro 1000 on calls during that period. As a comparison, Carphone Warehouse UK's price for the Nokia 3660 starts at 80 GBP, that is, Euro 100. Pricing in the US is similar, as it is in all the other European countries I could find. That is, the price here is basically more than double (and, depending on the contract, up to three times) that of anywhere else. SIM-free phones are similarly more expensive than in other countries. The Nokia N-Gage (probably the most "consumerish" device you could find, a device that, given its target market, cries out for a low price) is priced at Euro 300 here and can be found in other countries at half that. Older handsets, like the Nokia 6310, still sell here at Euro 200 apiece with contract.

What's more, note that when I quoted the Carphone Warehouse UK price it was for the Nokia 3660, not the 3650. Why? Because they don't even sell the 3650 anymore. So while the 3650 is already being phased out in some places, in Ireland they only started selling it less than two months ago.

The point of this rant: I wish the much-vaunted European integration would take hold in the supposedly fluid and borderless market that is consumer electronics. Even if Ireland's market is too small to sustain low prices, Europe as a whole shouldn't be, and prices would then be, if not the same, at least roughly equivalent across borders. Now, that's not much to ask for, is it? :-)

Categories: technology
Posted by diego on January 7, 2004 at 9:36 PM

autoclose comments on MT

I had mentioned the idea of using a script to close comments automatically in July last year and then thought about it when I started getting hit with comment spam later in October.

Now, a couple of weeks ago Jeremy posted the script he has been using for quite a while to do exactly that. Thanks! He uses MySQL and I use Berkeley DB, so the script might need changing (honestly, I have no idea :)). Hopefully at some point in the next few days I'll have time to look at it and start using it.
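For what it's worth, the core logic is small regardless of the backing store. A minimal sketch in Python (the entry fields and the 14-day cutoff here are hypothetical illustrations, not MT's actual schema or Jeremy's actual script):

```python
from datetime import datetime, timedelta

def close_old_entries(entries, days=14, now=None):
    """Close comments on entries older than `days` days.

    `entries` is a list of dicts with (hypothetical) keys
    'created_on' (datetime) and 'allow_comments' (bool).
    Returns the number of entries closed.
    """
    now = now or datetime.now()
    cutoff = now - timedelta(days=days)
    closed = 0
    for entry in entries:
        if entry["allow_comments"] and entry["created_on"] < cutoff:
            entry["allow_comments"] = False
            closed += 1
    return closed

# Example: an old entry gets closed, a recent one stays open.
now = datetime(2004, 1, 7)
entries = [
    {"created_on": datetime(2003, 11, 1), "allow_comments": True},
    {"created_on": datetime(2004, 1, 5), "allow_comments": True},
]
n_closed = close_old_entries(entries, days=14, now=now)
print(n_closed)  # 1
```

Run from cron once a day and spammers lose their favorite targets: old, forgotten entries.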

Categories: technology
Posted by diego on January 7, 2004 at 1:22 PM

connection back up

This morning the DSL was still acting up (i.e., not working, with traceroute dying at random points on different routes) for many websites worldwide, including Eircom's own DSL support page (dsl-support.eircom.net), which is hosted here in Ireland. So I called technical support, and they immediately told me the problem was that I had an "invalid IP" assigned. Apparently there are a number of these invalid IPs in the pool (for the PPPoE connections) which they are "trying to remove", and if one of them gets assigned as your dynamic IP, a number of sites on the Internet become unreachable. The solution is to turn off the modem and wait for a while... and if that doesn't work (as in my case, since I had done that in the morning), call tech support and they will release the IP and make sure the new one assigned is not one of the invalid ones. It sounds weird that an IP in the pool might work for some sites and not others. I wonder what kind of routing boxes they are using.

Now, if you ask me, it shouldn't be so hard to write a little program that checks each IP in turn and removes those that don't work, no? Why wait until customers complain?
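The checking logic really is that small. A sketch (the probe function here is a stand-in I made up; a real version would assign each candidate IP to a test connection and attempt to reach a handful of reference hosts):

```python
def find_invalid_ips(pool, reference_hosts, probe):
    """Return the IPs in `pool` that fail to reach some reference host.

    `probe(ip, host)` is an injected callable reporting whether traffic
    sourced from `ip` can reach `host`; the real implementation would
    bring the IP up on a test interface and try an actual connection.
    """
    invalid = []
    for ip in pool:
        if not all(probe(ip, host) for host in reference_hosts):
            invalid.append(ip)
    return invalid

# Demo with a fake probe: 10.0.0.2 can't reach the second host.
def fake_probe(ip, host):
    return not (ip == "10.0.0.2" and host == "dsl-support.example.net")

pool = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
hosts = ["www.example.com", "dsl-support.example.net"]
invalid = find_invalid_ips(pool, hosts, fake_probe)
print(invalid)  # ['10.0.0.2']
```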

Anyway, if you're in Ireland, use Eircom DSL and you have problems reaching some websites, this might be the reason.

Categories: technology
Posted by diego on January 7, 2004 at 12:59 PM

the wal-mart way

An inside look from Fast Company at Wal-Mart and how it achieves its low-low prices. Interesting, considering the implications it has for the US economy and (both consequently and directly) for the world economy as a whole.

Categories: technology
Posted by diego on January 7, 2004 at 12:44 AM

watching jobs' macworld keynote

And on the 20th anniversary of the launch of the Mac no less...

First, it is amazing that an audience can clap, hoot and make lots of noise over someone saying "We've added a G5 to our XServe line" and things of that nature. It is kind of surreal.

Aside from the XServe upgrades and such, and the new versions of the iLife apps (iTunes, iPhoto, iMovie, iDVD), Jobs announced GarageBand, software for making music, and all I can say is: WOW. Great synthesized instruments. Loops. Effects. Amps (!). Great quality. Jobs' demo of this product was really, really impressive. It's not as if I can compare it to the more professional tools out there, since I'm not aware of what that market is like (I just remember playing with Cakewalk on the PC a while ago, but it was crap compared to this), but since this is included in a $49 package, or free with new Macs, it seems to me that it's a big deal. Or at a minimum, another reason to lust after a Mac. :)

BTW, the "iLife ad" they showed after the demo was terrible, especially compared to the live demo, even more so considering that many of the lines the people in the video say had been used verbatim by Jobs before, which takes away a lot of the enjoyment of the keynote (What? You mean the keynote was scripted?!? -- you're reminded of what you already knew but Jobs had made you forget). Ditch the pathetic ads, Apple; Jobs on a stage is enough. :)

Finally, the "iPod mini" (and it comes in colors!). 4 GB of storage (!!!). This is clearly not Flash but a microdrive; although Jobs didn't mention it, I'm sure that's the case. Very cool, and not that expensive ($249) compared to the flash players it's going after. However, the high-end flash players now are about a quarter the size of the mini, and no doubt a quarter the weight too, and you can get around 512 MB for reasonable prices. So I still prefer flash players, but the iPod mini will probably be a good choice for many people.

Cool stuff.

Update: Matt's link roundup and comments on the announcements.

Categories: technology
Posted by diego on January 6, 2004 at 6:57 PM

spambots get smarter

Since today started as a "spam kind of day"...

Something I noticed over the last few weeks is that I've started to receive spam that is way more targeted than before. In what sense?

Well, let's say this: I'm getting spam that not only knows my full name, but also my address. Okay, not my current address, but I've already gotten spam that explicitly mentions both my New York address (from 5+ years ago) and my SF Bay Area address (from 2+ years ago). This is bad: not only do they know my email address, they also know where I live(d)! Yes, we know that with time and money you can get a lot of information on anyone, but this has to be done automatically and massively, or otherwise it wouldn't be a practical option for spammers.

Clearly, one way this could happen is if someone (say, buy.com) has been selling their customer information. Since I'm usually careful to buy online only when my privacy is more or less protected, this is unlikely, though certainly possible.

There's a more likely way in which this connection was made: Google.

Google not only knows the web, it also knows other information... like phone numbers (at least in the US). Jon mentioned this some time ago.

A spambot gathering "connected information" would work like this. Say you write an automated script to go through phone numbers on Google. The script takes the address data and the person's name, and then googles the person's name. It takes the first few results (or maybe only the first one) and scans the resulting pages to match an email address to the person's name. Sure, this won't be 100% correct, but spammers don't care about that. And Google's reach makes it likely that you'd get a reasonably high hit rate. You could even write a program that uses the Google API for it.
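Sketched out, the pipeline is depressingly short. Purely for illustration, with every callable a made-up stub injected by the caller (nothing here touches a real search engine or directory):

```python
def harvest(phone_numbers, phone_lookup, web_search, extract_emails):
    """Sketch of the described pipeline: phone number -> name and
    address -> web search -> crude name/email match. All three
    callables are hypothetical stand-ins supplied by the caller."""
    results = []
    for number in phone_numbers:
        person = phone_lookup(number)       # -> {'name', 'address'} or None
        if not person:
            continue
        pages = web_search(person["name"])  # only the top few results
        for page in pages[:3]:
            for email in extract_emails(page):
                # Crude first-name match: accuracy doesn't matter to a spammer.
                if person["name"].split()[0].lower() in email.lower():
                    results.append((person["name"], person["address"], email))
    return results

# Demo with canned data (all names/addresses invented).
def phone_lookup(n):
    return {"555-0100": {"name": "Jane Doe", "address": "1 Main St"}}.get(n)

def web_search(name):
    return ["page-1"]

def extract_emails(page):
    return ["jane@example.com", "info@example.com"]

result = harvest(["555-0100"], phone_lookup, web_search, extract_emails)
print(result)  # [('Jane Doe', '1 Main St', 'jane@example.com')]
```

The point of the sketch is how little the matching step has to do: a first-name substring check is already "good enough" for someone who doesn't care about false positives.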

Sure, we could say, as Scott McNealy does, that "your privacy is gone, get over it". Even if you agree with that statement (and I don't; at least I want to resist it!), this is nevertheless disturbing. And the question that follows is: does Google have any responsibility for this? They'd probably say that they're providing a service by integrating yellow-pages information, which would be true.

I'm not picking on Google; rather, Google is the example here because of its reach and pervasiveness. I'm sure similar things can be done with other search engines, and if not, it won't be long before they can. Can we fix this at all? If so, how?

Since this is just the tip of the iceberg, my main thought at the moment is that I'm a character from Lost in Space and all I hear is "Danger, Will Robinson! Danger!".

Categories: personal, soft.dev, technology
Posted by diego on January 6, 2004 at 5:10 PM

yahoo goes after google (finally)

Well, well. A year after purchasing Inktomi, and six months after buying Overture, it seems that Yahoo has finally digested both acquisitions and is ready to move on from using Google for its searches and begin growing on its own in the area. The Wall Street Journal reports today (subscription required) that Yahoo is now widely expected to dump Google "within a few months". Quote:

Some marketing firms, which help advertisers manage their online campaigns for search-related ads, say they have been told Yahoo will switch from Google to its own technology as early as the first quarter.

Now, I suppose we all expected that to happen at some point (I for one am surprised it didn't happen sooner) but there's more meat to it, including mentions of Yahoo preparing to leverage its various web properties in making search a more personalized affair. Quote:

Yahoo isn't discussing many of its search plans in detail. But some steps toward independence can already be seen on the shopping section of Yahoo's site, which is now using Inktomi's technology. Type in "digital camera," for example, and the site shows pictures of specific cameras along with their prices, flanked by "shopping tools" that allow users to quickly call up price comparisons, fuller specifications and user reviews. By contrast, the same search on Yahoo's front page, which still uses Google's technology, returns a familiar text-based list of links, starting with those sponsored by retailers and followed by other camera-related sites ranked by popularity.

In the future, Yahoo officials say, searches could become much more personalized. They could be tailored to return results that reflect users' past Web-surfing behavior, for example, or preferences or interests they list in a profile.

With Google now expected to go public sometime before mid-year, things are surely going to get interesting in the search space. AOL might have to do something too to keep up with the other portals (i.e., MSN and Yahoo), especially as dial-up erodes as a market advantage over time. AOL-Google, anyone?

Categories: technology
Posted by diego on January 6, 2004 at 1:50 PM

the internet in a cup

What do coffee houses and the Internet have in common? Today we might think of Internet cafes, but three centuries ago coffee houses were a primitive version of the cultural mechanisms that are so pervasive on the Internet today:

WHERE do you go when you want to know the latest business news, follow commodity prices, keep up with political gossip, find out what others think of a new book, or stay abreast of the latest scientific and technological developments? Today, the answer is obvious: you log on to the internet. Three centuries ago, the answer was just as easy: you went to a coffee-house. There, for the price of a cup of coffee, you could read the latest pamphlets, catch up on news and gossip, attend scientific lectures, strike business deals, or chat with like-minded people about literature or politics.

The rest in this great article from this week's Economist.

Categories: technology
Posted by diego on December 23, 2003 at 8:09 PM

sam on atom

Sam posts his thoughts and plans regarding Atom and how he will maintain RSS support for his weblog. Thanks Sam! Every bit of information helps.

Categories: technology
Posted by diego on December 5, 2003 at 1:15 PM

the atom discussion heats up again

I'm listening to Sunday Bloody Sunday Live at Slane 2001, and there's Bono saying "Compromise: Another dirty word. Compromise."

Ok, enough with the hyperbole. Here it goes...

I've been pretty busy through the day (damn, I just looked at the time, so I should say yesterday), but just now I checked and another blog-firestorm is developing. And once again, the discussion seems close to turning into a pile of rubble.

Deja-vu.

I've now seen some things that were not apparent to me when the Atom process started back in late June. And I think that Don's idea is good: given the current situation, it would be preferable if Atom adopted RSS as its feed format.

I know that many have said that Atom sacrificed backward compatibility for the sake of more flexibility in the future, but looking at the current spec I can't see clearly where this additional flexibility is obtained. I'd like to see an example of a feature that can be done with Atom but not with RSS 2.0. This would go a long way toward making me (and, I'm sure, others) understand more clearly why we should revise our position.
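To make the comparison concrete: RSS 2.0 already allows arbitrary extension through XML namespaces, so extra metadata can ride along inside an ordinary item. A hypothetical example (the `ext:` element, its namespace URI, and all URLs below are made up for illustration):

```xml
<rss version="2.0" xmlns:ext="http://example.org/ns/ext">
  <channel>
    <title>a weblog</title>
    <link>http://weblog.example.com/</link>
    <description>an example feed</description>
    <item>
      <title>an example entry</title>
      <link>http://weblog.example.com/archives/001.html</link>
      <pubDate>Thu, 04 Dec 2003 01:28:00 GMT</pubDate>
      <!-- namespaced extension: data that RSS 2.0 itself doesn't define -->
      <ext:revision>3</ext:revision>
    </item>
  </channel>
</rss>
```

Any aggregator that doesn't understand the `ext:` namespace simply ignores it, which is exactly the kind of forward extensibility Atom is said to add.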

True, it is highly unlikely that RSS embedded in Atom will happen---positions seem to be too entrenched for that. Blogger will probably release soon. MT is sure to follow. But Don is right, at least we can make our views known. Of course, I contributed to this in my own small way. What can I say: my position in July might have been reasonable, but that's no longer the case. Here's why.

First, the background.

Things flared up again yesterday, when Robert pointed out that Evan had posted a link to his Atom feed, saying "(generated by Blogger)" and nothing more. This led Robert to ask why a new syndication format was necessary, and why Microsoft shouldn't just develop its own (this last part half in jest, as I understand it). This in turn created a major discussion on his entry, with lots of different participants but very few posts by the major stakeholders in Atom. Then Don posted some thoughts and Dave put forth his opinion. I think, as I said in the comments, that the issue was not necessarily whether the format was going to be used by Blogger or not, but rather that Blogger was not giving any context for what was happening or explaining clearly what the path was (more on that below), which led to speculation and some fiery responses.

When Atom began I was for it: as I had noted, the API situation in blogland was not good, and Atom pointed to a solution. I still think that a unified API would be a step forward, and I am protocol-agnostic (XML-RPC, REST, SOAP -- I might have my preference, but mainly I just care that everyone agrees to support it). Then it became clear that Atom would also redefine the syndication format, and I said that shouldn't be a problem (see here and here). But then, over the next couple of months, things changed.

Changed how?

  • The first thing that changed is that I noticed that some people had enough time to spend on the Wiki to "out-comment" everyone else. At the same time, the format was evolving quickly, in at least one instance changing completely from one day to the next. There was no clear process for how decisions were made, and voting on different matters was repeatedly delayed and in some cases (such as naming) ignored and/or set aside. I put forth my opinion more than once (for example, here and here, and others, like Russ, expressed similar ideas) that someone should take charge of, and responsibility for, the ultimate decisions: as it stood (and as it stands), the process is opaque, and the Wiki didn't (doesn't) help matters. A mailing list got started, and though I did not subscribe to it I kept up to date by reading the archives. The truth is that I didn't subscribe because (rightly or wrongly on my part) I felt things were happening without my being able to contribute anything of value. Sam once pointed out to me through email that the spec was influenced by "running code" more than by words, but even though I was one of the first people to add Atom support to an RSS reader (as well as adding it later to other things, like the Java Google-Feeds bridge), that had no effect either, even though I was engaged in the discussion at the time I was working on the code. It didn't matter one bit. This is not a question of me not "getting my way"; it's a question of civility and of giving real answers to questions, of giving real-world examples instead of going off on theoretical tangents, and of giving reasons instead of saying "your idea is ridiculous" and leaving it at that.
  • Microsoft and others (e.g. AOL) are now in the game. Had Atom converged on a spec within four weeks, we might be talking about something different today. Instead, it's nearly six months after it started and the spec is still at 0.3 (although the newest "full spec" I could find is 0.2), with no clear record of why one revision superseded another, what the reasons were for choosing A over B (which was supposed to be one of the pluses of the Atom process), and so on. Blogger is coming out with 0.3, according to Jason (he mentioned this in the comments to Robert's entry). Robert's in-jest "threat" of "why shouldn't Microsoft do its own format" is a very real concern that anyone should have. There is a comment here that says: "When people ask 'why Atom', Atom's answer should be 'because we can'". Microsoft could rightly say exactly the same thing.
  • One big concern some Atom backers had was that Dave had control over the RSS spec (this is still being mentioned today). Dave disputed this all along, but right now it's irrelevant: this claim should have changed (but it didn't) when Dave gave control of the spec to Berkman (on which I also commented here). In fact, it's been said over and over that it was "too little, too late". But Atom feeds (note: feeds, not the API) don't provide anything that cannot be done today with RSS 2 (i.e., including namespaces). If the Atom feed format is still at 0.2 or 0.3, and even that has taken 5 months to define, is that not "too little, too late" as well?
  • The Atom camp started as a genial group of people wanting to improve things. But it has turned ugly. Some of its defenders are at the moment resorting to anonymous comments saying that a) RSS is dead and b) you're either on the Atom bandwagon or you will be left behind in your poor little RSS world. Regardless of the truth of those statements, I find it worrying that constructive criticism, or a clear-minded defense of a belief in a certain direction, has given way to (anonymous) aggression. Instead of supporting inclusion for people like me, anyone who doesn't agree is attacked. But "evangelizing" is important as well as useful: communicating what's being done helps developers churn out better code and create better conditions for users, and it's difficult for this to happen if developers are attacked when they ask questions. This is not a recipe for friendliness from developers and users alike. Which brings me to my final point.
  • One can always find people who resort to aggression, and obviously anyone can lose it once in a while. But I think that if Blogger and MT were to clearly spell out their position (why they want another format, why, or whether, they'd like to replace RSS, and, perhaps more importantly, what evolution path they have planned) then it would be easier to get above the noise. At a minimum it would be easier, based on that position, to take it or leave it. I tend to think that it's completely within a company's or an individual's rights to scrap something and start again. I might not like it, but they should be able to do it and let the marketplace have a go at it. Maybe it works! But when there's an installed user base, and developers that have a stake in things, not giving clear information or plans makes me, at least, wary. By not saying anything, by not participating in the discussion much more than they do today, the main Atom stakeholders are allowing others to define what they mean, others who might or might not reflect their true intent. No one can speak for Blogger or MT; they have to do it themselves. I pointed this out in the comments to Robert's entry, when Mark was giving some reasons that were fine in themselves (that is, you might not agree, but that's not the point) but were Mark's reasons, not Blogger's. As it is, one side is asking questions and no one is replying on the other end.


I simply don't understand why, if we are building communication tools, it appears that there's a lot of talk back and forth, but no real communication is happening.

Things can get better. The question is, will we try? Given how things are, I think that just having a reasonable conversation would be a big step forward.

Categories: soft.dev, technology
Posted by diego on December 4, 2003 at 1:28 AM

google advertises itself

Since Google was founded, I have never, ever seen an ad for it or for any of its services. That changed today, as I was reading the Wall Street Journal and came across this (partial screenshot):

google-ad.gif

Another sign of the "growing up" that some have pointed out as necessary? Or is the word-of-mouth growth of AdSense slowing down? Regardless: it's interesting.

Categories: technology
Posted by diego on December 2, 2003 at 1:18 PM

surface and depth

In which Diego takes a philosophical look at perception and reality in our connected world...

A Japanese Karaoke party can be an unsettling experience for many people (me included--Brrr, shudder!): grown men and women, mostly in business attire, sometimes drunk to the point where speech disintegrates into babble, heartily singing to rehashed versions of popular songs re-recorded for that purpose. Behind the singer, a screen showing strange images supposedly synched to the music, and a crowd, cheering, singing along, each person waiting for their turn at the microphone.

On the surface, to a foreigner, it looks excessive, pointless, embarrassing.

The Japanese, however, know different: they understand that perceptions and surface are just that. The phenomenon of Karaoke or the exuberance of Tokyo are good examples of the Japanese attitude towards surface: something that would be considered corny, embarrassing or shameful in the US or Europe is accepted, even embraced. (In the case of "nights out" where Karaoke happens there are other factors at play, such as enabling communication that might not happen in the rigid environment of the office, but that's beside the point).

The Internet, pervasive media, fast and relatively cheap travel, and the thirst of people everywhere for access to information (regardless of its use to them, or its importance) have fueled the assumption that, since distance seems to be fading in importance, culture has, too. This is pushing the limits of what is considered public into new areas, which in turn creates pressure (and fuels resistance) for social and economic change, both locally and globally. Every weblog read, every vacation taken, every casual 5-second IM conversation, every business trip reinforces this perception, both by the speed with which it is done and by the ease with which it happens.

Pervasive communications, media and travel make us dismiss geography and allow us to pretend that "there" is almost "here". They make us think that we can always give good solutions for other people's problems. Predictably enough, things are not that simple. To the superficial perceptions of a casual traveler two cities in different parts of the world might sometimes look pretty much the same; in reality they will always be anything but.

We've been gradually erasing the old concept of "borders", the change from a world perceived as a collection of cultures to a world perceived as a set of localized forms of capitalism. That has turned into a shift from a geographically-based state of tension (supported by ideology: two competing political, economic and social worldviews), centered around Berlin and spread throughout Europe and the world; to a purely ideologically-based state of tension: the layer of geography has been removed. Travel and communications seem to have both invalidated distance and reinforced history, creating the perception that only ideas matter. If before ideology expressed itself through geographical conflict at the borders, the fading of those borders is pushing the conflict back into the sphere of ideas, with ethereal battlegrounds like stock markets, exchange rates, corporations and the like--- and very real ones, as evidenced by the threat of terrorism, hunger and disease. Capitalism seems to have won the "war" against communism (notably, with currency and not bullets), but it's becoming clear that capitalism on its own is not the answer, simply because it tries to homogenize everything into a market, every person into a buyer and a seller, every culture into a culture of money, and many cultures are bound to resist that homogenization. Groups and organizations in powerful countries continue to assume that things that worked well in one place can work equally as well in another, which creates new problems every day, from IMF intervention in countries that are facing economic crises, to the intervention of external powers in local conflicts, such as those of Northern Ireland or the Middle East, which in turn is both reinforced and generated by the "export" of those conflicts through terrorism, war, trade, and even politics.

It is the Japanese, with their understanding of the difference between surface and depth, who in my view have the culture best aligned with this world of pure perceptions (even as their cultural introspectiveness hinders them): a media-based society where careers can be destroyed by unsubstantiated rumors, and wars have to be fought on television (or in the field of "public opinion") as much as on the battlefield. Or in other words: a world where perception is treated as reality.

Language is an excellent reflection of a culture, and Japanese is one of the most context-dependent languages in the world, heavily dependent on situations to establish meaning. At the same time, Kanji is a polar opposite: ideographic, a literal representation of the concept that has to be communicated (the Kanji for "drunk", for example, literally means "9/10 Sake").

Similar trends are observable in other societies, for example the US. Maybe it's no coincidence that English as a language has over time (faster than, say, Spanish) incorporated Japanese-like features: Dependency on context, and thus on perception and appearances, and literal concept representations. Maybe it's no coincidence, if perceptions are being exacerbated in importance by the growing connectedness of the world and the subsequent growing impression that everything is the same.

"Are you flying home for Christmas?" is an example of a sentence in English that is meaningless without the proper context--that traveling by plane is a common occurrence. In Spanish the same question would be phrased, in literal translation, "Are you traveling home for Christmas?" Simply referring to the action (flying) would not do in general since, as we all know, humans don't fly. And while English writing has not turned ideographic, and doesn't appear to be doing so (although spoken language usually predates any change in the written language by a long time), it is full of examples where the representation of a concept in the language is literal. For example, the electrochemical device used in cars to produce a spark and ignite the explosion inside a cylinder is called just that, a spark plug, while Spanish has its own particular word for the device.

The idea of "politically correct" behavior has many parallels with the way Japanese society behaves on the surface. Still, many cultures today, unlike the Japanese, maintain a public pretense of morality every day and in all contexts, even though behind closed doors that is not the case, something (in)famously exposed in its most extreme example in the US with the Clinton-Lewinsky affair. Japanese businessmen read pornographic comic books on the subway, in full view. If Bill Gates did that in the US (assuming he could find a subway around Redmond!), it would be front-page news. The Japanese have contexts for things, and for businessmen a subway or a train is different from the office. Americans are always supposed to be businessmen, or to pretend they are. While appearing to embrace the concept of surface as the main interface to our world, we are still reluctant to accept that context matters for judging behavior and that public and private life are not necessarily related; more than that, we demand that they be.

I do think, however, that nowadays the Japanese mostly go through the motions, repeating behaviors learned long ago that have largely lost their meaning. And yet it is the distinction between surface and depth underlying their behavior that will, in my opinion, be the key in the future to dealing effectively with a world where it's sometimes easier to know what's happening in the West Bank than in a nearby town, or where a trip to another country can be faster than traveling to some parts of one's own.

As geography has slowly given way to knowledge as the defining element of power and prosperity, allegiances have shifted from countries to ideas. Places have started to feel more and more alike, and many people have become less concerned with what should be done in each case, shifting their interest to what should be done in general, often on the assumption that perception and underlying reality are the same. This has created a situation where there aren't many viable local proposals for achieving evidently worthy global goals.

The world, as connected as it seems on the surface, is actually, on closer look, as divided as ever, perhaps because the illusion of closeness that is so easy to believe doesn't intrigue people into going further. Maybe if we learn to better understand and accept the difference between surface and depth, we will be able to turn the tools that technology and social evolution have given us into a force for good: the key that will either unlock our understanding of each other, or open the doors to a more violent and chaotic world.

Categories: geopolitics, technology
Posted by diego on December 2, 2003 at 12:32 PM

ACM: an interview with MSN Messenger's chief architect

I got this through one of those pesky ACM subscriptions that I end up never unsubscribing from (sometimes they do have something useful!). Read it. Thought about blogging it. Promptly forgot about it. Luckily, Slashdot reminded me today.

So, without further ado: A Conversation with Peter Ford, MSN Messenger's Chief Architect. Very, very interesting, going over most of the main topics that relate to IM: SIP/XMPP, P2P, voice... excellent article. Particularly interesting is that he ties people's liking of IM to its inherent "feature" of whitelisting, though of course past that you have the problem of bootstrapping the connection. I think that properly implemented digital identities would go a long way toward solving some of the problems he points out. I find it interesting that this solution hasn't been discussed much, although I have no doubt it has crossed the mind of anyone who works with IM at least once.

Categories: technology
Posted by diego on November 27, 2003 at 3:01 PM

comments, comments, comments

No doubt I'm going to forget about some of the things I wanted to comment on, but here are a couple.

First, Scott, in a funnily-titled entry, Diego, Diego, Diego or "A Conspiracy Theorist's View of WinFS" or Scott Supports Microsoft, comments on my views on WinFS, Cairo, et al.:

[...] but isn't the answer here, what it always is, $$$? I mean if you are a product manager on OS stuff at Microsoft, you're not only concerned about the current release of the OS but, just as much, the next release
Actually, when I talked about "lack of resources" in my previous posts I meant exactly what Scott is saying. They have admitted as much at least regarding security:
Microsoft is considering charging for additional security options and acknowledges that it didn't move on security until customers were ready to pay for it.
So I actually don't take a conspiracy theorist's view (although maybe Scott was saying I'm not pro or con, but "other" :))---it's all about resources (in the end, money, as Scott says). Sometimes, though, money is not a factor---or does anybody doubt that MS was ready to spend whatever was necessary to stop Netscape? With WinFS and things of that nature, there is no inherent threat to which they're responding, which theoretically means that it might not be released, as has happened many times in the past (and no, Google doesn't qualify, in my mind, as something that spurs WinFS development, although how Google affects the resources of MS's new Search effort is another matter--pure search and "the quest for metadata" are related, but separate). On the other hand, at some point the engineers will get anxious to get this out, so I think the odds are in WinFS's favor. :)

Related to this, Jon has some interesting comments to an entry by Dare on the relationship between XML, Databases, and the applicability of different tools for different problems.

Second, Patrick, regarding my "as we may think" post, said:

[Diego] says:

Two of the most influential people in the history of computing have been, without a doubt, Ted Nelson and Doug Engelbart.

Big call. They were certainly influential in the history of the internet, but in computing? I wouldn't put them in the same ballpark as Alan Turing or Claude Shannon. Would you?

I think I didn't say what Patrick read, but just to make sure: I said "two of the most influential", not "the two most influential". Partly, the difference is because Patrick considers the Internet as separate from computing. Which is a valid view. But I think that the Internet, networking, GUIs, etc., are central to computing as we know it today (in my mind, for example, people like Bill Joy or Tim Berners-Lee were also highly influential, though for more "practical" reasons). They certainly weren't important in 1950--they didn't exist. But the same can be said of many things, including databases, microprocessors, and even stuff that we take completely for granted, such as rasterized displays (for a while, vector-based displays were hot stuff). All of these ideas are intertwined over time, crossing back and forth and building on each other. But of course, in the end, the further back we go, the shorter the list of "influential people" becomes, and the more theoretical it becomes: as Patrick mentioned Shannon and Turing, I could add a few others: Von Neumann, Shockley, Eckert, Mauchly, Wiener, McCarthy, Minsky... ok, I'll stop. That's not the point :) Patrick's comment did lead me down memory lane in other directions and made me realize that I implicitly consider computing to be broader than other (maybe most!) people do.

Update: The comment I attributed to Patrick was actually from Justin (here) and Patrick was just quoting it. Thanks Justin for noting it--sorry about the misquote.

Categories: technology
Posted by diego on November 27, 2003 at 2:48 PM

rethinking the Internet, part one: "as we may think"

"The summation of human experience is being expanded at a prodigious rate, and the means we use for threading through the consequent maze to the momentarily important item is the same as was used in the days of square-rigged ships."

from As We May Think, Vannevar Bush, July 1945.

(First of an intended "loosely coupled" series of posts on the history of the Internet and revisiting some of its concepts and the seminal works that made it what it is today.)

Two of the most influential people in the history of computing have been, without a doubt, Ted Nelson and Doug Engelbart. Nelson, by defining with relative precision ideas that we use today (such as hyperlinks), and Engelbart, by leading a team that created basically everything that we use today. Nelson's ideas didn't get beyond the prototype stage, and Engelbart's and his team's work had to wait until it was revisited (in many cases by the same people) at Xerox PARC and, through it, exposed to the wider world via Apple and then Microsoft. (Which ideas? Let's see: the mouse. Wireless networks. LANs. Windowing systems--for starters!)

Both Nelson and Engelbart were influenced by Vannevar Bush's "As We May Think". Engelbart read Bush's article in a copy of the Atlantic Monthly he found at a Red Cross library in Manila, while waiting for his transport home after World War Two. Nelson went as far as reprinting the entire article in his first book, Literary Machines. I'll come back to Nelson's and Engelbart's work in another post---for now, on to Bush's article.

The last time I read "As We May Think" was about two years ago. Back then, I noted that Bush was embedded in an "analog worldview" but not much else (he was an analog kind of guy: the computer he worked on, the Rockefeller Differential Analyzer, was an analog computer that helped with calculations of artillery firing tables and radar antenna profiles--and was promptly rendered obsolete by the arrival of digital computers). The ideas were great, but limited because of that. But in re-reading it this time I noticed something else: Bush was describing a digital system, without digital technology.

There's a difference between a) "thinking analog" and b) being trapped by the limitations of analog technology. And Bush seems to be in the second category. This wasn't apparent to me the last time I read the article. Once you shift your view a little from "analog" to "trapped by analog", the ideas he discussed become even more astonishing. Take, for example, digital photography.

Will there be dry photography? It is already here in two forms. When Brady made his Civil War pictures, the plate had to be wet at the time of exposure. Now it has to be wet during development instead. In the future perhaps it need not be wetted at all. [There have long been films ... which form a picture without development, so that it is already there as soon as the camera has been operated ...] The process is now slow, but someone may speed it up, and it has no grain difficulties such as now keep photographic researchers busy. Often it would be advantageous to be able to snap the camera and to look at the picture immediately.
Bush goes back to this idea of "dry photography" a few times more. The problem with the article (if it can be called a problem) is that he tries to imagine the device while constraining himself to the technology available at the time, or in some cases to technology an order of magnitude better along one or more axes of improvement (such as the resolution of the film). But as I read it, what I saw more and more was someone saying "we need to get film out of the way. Send pictures directly into the machine's memory". In this case, the machine memory was also film (microfilm), and so it's a vicious logical circle.

But replace all the analog ideas with digital ones. He was talking about digital photography, albums, and massive databases that would contain them forever. We do this today without thinking twice about it. Had Bush not constrained himself by the technology available in his day, describing only function, "As We May Think" would doubtless be considered a lot more than what it is today.

This technology constraint appears again and again. When Bush talks about compression, he is describing size reduction on the microfilm, building on increased precision in light-reflection techniques. When he talks about voice I/O

At a recent World Fair a machine called a Voder was shown. A girl stroked its keys and it emitted recognizable speech. No human vocal chords entered into the procedure at any point; the keys simply combined some electrically produced vibrations and passed these on to a loud-speaker. In the Bell Laboratories there is the converse of this machine, called a Vocoder. The loudspeaker is replaced by a microphone, which picks up sound. Speak to it, and the corresponding keys move. This may be one element of the postulated system.
he is similarly trapped by analog technology. But all the ideas are there.

Nearing the end of the article, Bush puts together all of his concepts into his now-famous Memex device:

Consider a future device for individual use, which is a sort of mechanized private file and library. It needs a name, and, to coin one at random, "memex" will do. A memex is a device in which an individual stores all his books, records, and communications, and which is mechanized so that it may be consulted with exceeding speed and flexibility. It is an enlarged intimate supplement to his memory.

It consists of a desk, and while it can presumably be operated from a distance, it is primarily the piece of furniture at which he works. On the top are slanting translucent screens, on which material can be projected for convenient reading. There is a keyboard, and sets of buttons and levers. Otherwise it looks like an ordinary desk.

In one end is the stored material. The matter of bulk is well taken care of by improved microfilm. Only a small part of the interior of the memex is devoted to storage, the rest to mechanism. Yet if the user inserted 5000 pages of material a day it would take him hundreds of years to fill the repository, so he can be profligate and enter material freely.

Most of the memex contents are purchased on microfilm ready for insertion. Books of all sorts, pictures, current periodicals, newspapers, are thus obtained and dropped into place. Business correspondence takes the same path. And there is provision for direct entry. On the top of the memex is a transparent plate. On this are placed longhand notes, photographs, memoranda, all sorts of things. When one is in place, the depression of a lever causes it to be photographed onto the next blank space in a section of the memex film, dry photography being employed.

A central point of the Memex is that it records any kind of data. Text. Images. Sound. In his view, everything would be reduced to microfilms, which would then be manipulated by a mechanical system.

The only difference is that storage happens on analog devices.

Even the "hyperlinks" are there (a term coined later by Nelson). Everything that Bush describes is basically what we do today.

Bush describes the concept of "trails" which are constructed through links on the microfiche:

When the user is building a trail, he names it, inserts the name in his code book, and taps it out on his keyboard. Before him are the two items to be joined, projected onto adjacent viewing positions. At the bottom of each there are a number of blank code spaces, and a pointer is set to indicate one of these on each item. The user taps a single key, and the items are permanently joined. In each code space appears the code word. Out of view, but also in the code space, is inserted a set of dots for photocell viewing; and on each item these dots by their positions designate the index number of the other item.
As I read the article this time, all I could think of was "Arrgh!! You're so close! Come on, say it! Digital! Digital!"

But he didn't say it. :)

Regardless. There are two things that I took away from this re-reading.

The first is that analog technology is vastly underrated. The Memex is clearly a device that could work exactly as Bush described it, particularly with today's micro- and nano-scale technology (most of which, paradoxically enough, has been driven basically by digital technology). Of course, I'm not saying that we should go back to analog, but we barely think about it anymore. Are we missing something because of that? One problem with analog is that unless you're both a physicist and a computer scientist you're not likely to go anywhere. Digital allows us to separate the domains more clearly. You give me the chips, I'll write some cool software for them. That distinction would be less strong (or would even disappear) with analog technology.

The second, and more important is: re-evaluating old ideas is useful. Who knows what else we are missing? Even if it's quite possible that the great ideas live on, the viewpoint that a different kind of technology gives you is priceless. Sometimes there are simpler solutions to things--we just can't see them because we know too damn much. Why use a nail and a hammer when a particle accelerator will do? :) (Another problem is that it's much cooler to ask for funding for particle accelerators than for boxes of nails).

Problem is, there's too much information, and even though our retrieval and correlation capabilities have increased, the amount of information itself has grown even more, putting us back into the "days of square-rigged ships" that Bush mused about. We are still some way off the Memex vision, but we're closer. When we get there, it will certainly be worth spending some time using it to wade through the historical and scientific record once more, to see what viewpoints and ideas can grow out of the past and help us build new visions of the future.

Categories: technology
Posted by diego on November 24, 2003 at 1:28 PM

Cairo (and WinFS) revisited

Interestingly enough Jon Udell, whom I mentioned in my post about Cairo/Longhorn the other day, was writing up a great article with similar ideas for InfoWorld, including his own take on Cairo, Longhorn, and what the web is and how MS can affect it:

[...] the Web is much more than the browser. It's an ecosystem whose social and information structures co-evolve. Innovation bubbles up from the grassroots; integration can happen spontaneously; relationships cross borders. Cairo Version 1 wasn't designed to nourish that ecosystem or to flourish in it. Let's hope Microsoft remembers the past and avoids being condemned to repeat it with Cairo Version 2.
Agreed 100%.

In an entry on his blog he points to my own post, and talks about the history of how he took it on himself to maintain Byte's archives, and some of the problems of maintaining "digital continuity." (I've been thinking about that too--more on that soon.)

Then, in the comments, Ole pointed to X1, which does full-text search on Windows PCs, as a similar idea to WinFS, Longhorn's new filesystem, et al. I have tried X1 and found it interesting, but it freaked me out by showing up all the time and being surprisingly difficult to remove. So I stopped using it. Others might have a better experience though :)

Finally, Robert replied to my post (mis-spelling my name once again--by now this has all the makings of an in-joke ;-)), saying that WinFS vanished because

I've heard that this is the fifth time that we've tried something like WinFS, but previous tries never got out of the lab. Why not?

Simple: customer testing. Whenever we come up with an idea, we have real customers go into a lab here and try out the product. If the performance, or the UI, or something else keeps it from being useful it doesn't get released.

Hm. This explanation is not complete. Suppose you show it to users. They hate it. What do you do then? Ditch it, or go back to the drawing board? Well, it depends (as I said on my post) on resources. With a company of MS's size and resources, the only thing that explains ditching WinFS is a change of priorities. Otherwise you keep at it until it works. (And, again, I refuse to entertain the notion that "it couldn't be done"--especially when others have done it!) I'll stick with this idea until I am convinced otherwise. :)

This still doesn't explain what happened to all that code either, or what, exactly, didn't work back then. Was it the speed? (Ole, btw, has been asking about that issue for a while now, and hasn't seen a good reply AFAIK.) And that aside, the fifth time?? Really??? If that is true, it's a scary thought. The technology is definitely doable. I certainly can't believe (as I said) that Microsoft can't do it, so it sounds like there's a lot of resource-shifting going on. Weird.

Categories: technology
Posted by diego on November 23, 2003 at 4:56 PM

the web is not the browser

Yesterday Robert Scoble, reacting to comments by Dave, Jon, and others, was saying something really, really weird which involved comparing Microsoft with Dave.

Robert's comment was centered on the idea that "Dave Winer has done more to get me to move away from the Web than a huge international corporation that's supposedly focused on killing the Web."

Robert's thesis is summarized in the sentence that follows: "[...] what has gotten me to use the Web less and less lately [is] RSS 2.0."

He goes on to describe the wonderful advantages of Longhorn's components, for example: "And wait until Mozilla's and other developers start exploiting things like WinFS to give you new features that display Internet-based information in whole new ways."

Ok.

On to the debunking. :)

There are two points here. Number one, let's take Robert's thesis at face value. It is a standard Microsoft tactic to say "see? The little guy is doing it, so why can't we?". How MS can't see the difference between "the little guys" and themselves is beyond me.

But that aside, there's a bigger (much bigger) problem.

Robert: the web is not the browser.

Robert says that he's "using the web less and less" because of RSS. He's completely, 100% wrong.

RSS is not anti-web, RSS is the web at its best.

The web is a complex system, an interconnection of open protocols that run on any operating system. Robert reads RSS on a Windows client (I assume), through a protocol originally developed in Geneva and now maintained in Cambridge, MA. He reads from a variety of servers, Linux, Windows, Apache, IIS, what have you. The RSS files he reads are generated by a multitude of software systems, all of them connected through the simplicity of a few lines of XML code.

Now, if that is not "the web", then I don't know what "the web" is.

Consider the alternative: what if Robert is right? What if in 2007, or whenever Longhorn leaves the realm of promiseware, developers start switching to it in hordes? Take Robert's example of "Mozilla's and other developers start exploiting things like WinFS to give you new features" and consider: what server will that run on? Linux? Not a chance. What if developers start using XAML? What client will that run on? Macintosh? A Symbian mobile phone? Of course not. You'll need a Windows device to see it.

Longhorn is anti-web because it locks everything back down into Microsoft's control. It has nothing to do with HTML. It pushes a system where you'd be forced to use Microsoft servers and Microsoft clients.

Let me say it again. The web is not the browser. The web is protocols and formats. Presentation is almost a side-effect. And that's what people like Dave and Jon are talking about.

Categories: technology
Posted by diego on November 14, 2003 at 12:18 PM

short presentation on blogging

This is from a few days ago, and in the storm created by a number of things I forgot to link to it, but here it goes. Following my two introductory articles on weblogging, Charles published a presentation he gave some time ago, short, to the point, and geared towards a more technical audience. Nice!

Categories: technology
Posted by diego on November 8, 2003 at 5:57 PM

my feedster plugin @ mozdev

Jarno from Mycroft (a site under mozdev that deals with search plugins for Mozilla) stumbled today or late yesterday on my Feedster plugin for Mozilla Firebird and asked if I was going to submit it to the site. I hadn't thought of submitting it--I don't know why. Zoom! Went past me. Anyway, Ricky from mycroft stepped in and added it himself. The result is that if you now go to the Mycroft home page and search for "Feedster", you'll find my plugin. Nice!

Categories: technology
Posted by diego on November 4, 2003 at 4:27 PM

mozilla/netscape RSS ticker toolbar

Very cool... for those using Mozilla or Netscape (it doesn't seem to work with Firebird), you might be interested in checking out the RSS News Ticker Toolbar---an interesting use for RSS.

Categories: technology
Posted by diego on November 4, 2003 at 12:44 PM

on longhorn

Ole has posted a comprehensive comment on Microsoft's Longhorn. Great analysis; I agree with basically everything he says. The upshot?

Longhorn will certainly hurt speed. Whether it helps robustness remains to be seen; we can only hope it will, given that this is a big problem for Windows currently.
Microsoft always seems to design OSes for the next generation of technology (for whatever reason). I remember how impressed I was the first time I installed Linux on a 386. Even running X Windows worked well. I think this is something that shouldn't be underestimated. In any case, Longhorn will take a long, long time to have any impact. Most big MS customers are well known for waiting until the first Service Pack to change to the new technology. If they release in 2006 (as they say) then it won't be until 2007 that it is reasonably deployed.

Four years. A lot can happen in that time.

Update: Scoble responds to Ole's piece. Interesting read.

Categories: soft.dev, technology
Posted by diego on November 3, 2003 at 4:52 PM

daypop, blogdex and blogosphere.us

Amazingly enough I forgot to include Daypop, Blogdex, and blogosphere.us in my introduction to weblogs. I'm making the changes now--they're useful resources. They reminded me of their existence themselves, since the intro is at the moment in the top ten of all three. Cool. :)

Categories: technology
Posted by diego on November 3, 2003 at 4:13 PM

an introduction to weblogs, part two: syndication

The first part of this introductory guide was basically about publishing, but there is a second component to weblogs, perhaps just as important, still to cover: reading them.

Note: For those who already know about weblogs, syndication, etc., I will greatly appreciate any feedback on this piece. It is a bit more technical than I would have liked, but there are some issues that, in my opinion, can't be ignored. If you have ideas on how this can be improved for end users (both technical and non-technical), or pointers to other descriptions that offer a different take on this, please send them over. Thanks!

Moving on...

The need for syndication

Wait a minute (I hear you say) what do you mean reading? Don't we use the web browser for that? What would I need to know about reading weblogs?

Well, the answer is that, technically, web browsing is just fine. But there is another component of weblog infrastructure that is quite important today, and that will probably become even more important as we have to deal with an ever-increasing number of information sources. This component is usually referred to as syndication. Syndication is also known as aggregation or news aggregation. The exact definition of the concept, or how and what it should be used for, is something that people could (and do) discuss ad infinitum, but without getting into specifics we can say that everyone at least agrees that certain things represent the idea of syndication or news aggregation quite well.

In the "hard copy" publishing world, syndication implies arrangements to republish something. Popular newspaper comic strips, for example, are usually syndicated, as are some news articles. While the meaning is similar on the web, there it is primarily concerned with the technology rather than with the contracts, syndication agencies, and so on.

Generally speaking, we could say that syndication is a process through which publishers make their content available in a form that software (as opposed to people) can read.

That is, if a site supports syndication, and you are using appropriate software, you can subscribe to that site using the software. Updates to the site are then presented to you by the software, on your desktop (or on a web site you use for that purpose), automatically.

This means that you can forget about checking certain websites for updates or news: the updates and news come to you.

Syndication is a dry, unassuming word for a powerful concept (as far as the web is concerned at least). It ties in together many ideas, and it is instrumental in sustaining the 'community' part of weblogs that I talked about in part one.

Why?

For an answer, let's go to an example. You have started your weblog, and you have been running it for a bit of time. You have found other weblogs that you enjoy reading, or that you find useful; you are also reading the weblogs of friends and coworkers. Very quickly, you might be reading maybe ten or fifteen different weblogs. Additionally, you might also regularly check news sites, such as CNN or the New York Times. Suddenly, it's difficult to keep up. Bookmarks in the browser don't seem to help anymore, and you find yourself checking sites only every so often. Sometimes you miss a big piece of news that you'd have liked to hear about sooner---or you find yourself wading through stuff simply because you haven't kept up. If you are a self-described 'news junkie' (as I am) you might already know about this problem, since keeping up with multiple news sources is also difficult. But with weblogs, the problem is greatly amplified: weblogs put the power of publishing in the hands of individuals, and as a result there are millions of weblogs. There are simply too many publishers. The problem of just 'keeping up' with what others are saying becomes unavoidable.

This is the problem that syndication solves. And the software that does the magic is usually called an aggregator.

Simply put, an aggregator is a piece of software designed to subscribe to sites through syndication, and automatically download updates. It does this regularly during the day, at intervals you can specify, or only when you are connected to the Internet. If the aggregator is running on your PC or other device, once you have the content you can read it in "offline" mode (unless the aggregator is web-based, which will require connectivity to the Internet at all times). For a more detailed take on what aggregators are, I recommend you read Dave's what is a news aggregator? piece. As usual in the weblog world, there is discussion about these definitions, see here, for example, for comment on Dave's piece.
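To make the idea concrete, the core of what an aggregator does on each check can be sketched in a few lines. This is a hypothetical illustration, not the code of any actual aggregator: the poll function, the feed URLs, and the fetch callable are all invented for the example.

```python
# A toy sketch of an aggregator's polling pass (hypothetical, for
# illustration only). Each subscribed feed is checked in turn; items
# already seen on a previous pass are skipped, so only updates surface.

def poll(feeds, fetch, seen):
    """Return new (feed_url, item_id) pairs since the last poll.

    feeds: list of feed URLs the user subscribed to
    fetch: callable returning the list of item ids currently in a feed
    seen:  dict mapping feed URL -> set of item ids already shown
    """
    fresh = []
    for url in feeds:
        for item in fetch(url):
            if item not in seen.setdefault(url, set()):
                seen[url].add(item)
                fresh.append((url, item))
    return fresh
```

A real aggregator would run something like this on a timer, fetch each feed over HTTP, and honor caching headers so it doesn't hammer the servers it subscribes to.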

A word of caution before we go on: for non-technical people, the issues surrounding syndication, aggregators and such can appear to get complicated if you start reading some of the links I provide here. There are acronyms and terms that you might see used here and there, such as "XML", "RSS", "RDF", "namespaces", and so on, that can be confusing. Let's skip those for the moment. I will (try to) go into them, when necessary, in the section 'using your aggregator' below.

Which aggregator?

Aggregators (or "news aggregators") come in different "shapes and sizes", and there are two main categories of aggregators on which everyone generally agrees: 'webpage style' and 'email style' (also referred to as 'three-pane aggregators'). 'Webpage style' aggregators present the new entries they have received as a webpage, in reverse chronological order (so the end result looks very much like a weblog on the web does, but made of pieces that are put together dynamically by the software). 'Email style' aggregators generally display new posts as messages (also in reverse chronological order) that you can click on and view in a separate area of the screen.

As in other cases, there are good arguments for preferring one over the other, and in the end it comes down to personal choice. Reading different weblogs you might find people arguing for one or for the other, and others who propose to do away with the whole thing and come up with something completely new. As with other weblog matters, reading different opinions and coming to your own conclusions is best. This is probably good advice in life in general :) but with web-related things it becomes so easy to do that you generally end up doing it, sometimes without realizing it. Look on search engines and other weblogs you like, leave comments, ask people you know, then try some of the software out. You'll find the one you prefer in no time (and, likely, as your usage and your needs change, you might end up switching from one to another).

As is the case with weblog software, aggregators are invariably free to try, and many of them have to be purchased after a trial period (usually a month). Aggregators and weblog software are complementary: you could use both, or one and not the other. It's quite possible that there are more people who use aggregators than people who have weblogs. (Certainly there are more people who read weblogs than people who write them.)

If you have a weblog, chances are you already have a news aggregator as well, because some weblog software includes news aggregation built in (as you'll see below). There are lots of news aggregators (and by lots I mean more than fifty, probably a lot more), and more on the way. Additionally, the underlying technology for syndication is simple enough that many software developers implement and use their own aggregators.

This means that I can't possibly list all the aggregators that exist here, and besides, there are other pages that do this already, such as this one, this one, this one, or this one. As was the case with weblog directories, no listing of aggregator software is 100% complete (and probably never can be). However, I will mention a few aggregators that I know about and have tried myself, or have seen in action (and, in the case of clevercactus, that I developed :-)). (Lists in alphabetical order.)

Some webpage-style aggregators

Some email-style, or 'three-pane' aggregators

All have one or two distinguishing features that make them unique. In the end, which one works better for you is all about personal taste and work patterns. Check out the aggregator listings I mentioned above, and look for something that grabs your attention. Try them out, and see which one you like best. Note: in many cases, aggregators are ongoing projects. Some are open source and are updated often. My advice, to save time in choosing an aggregator, is to go for the simplest route possible at first, and then try out new things over time. For example, if you're a paying LiveJournal or Radio user, it will probably be better to use the built-in aggregator at first, especially as you try to find your way around all of these new concepts. If you want to try other options, or you use different weblog software, I think that a quick glance at the webpage of a product is usually a good indication of how much knowledge you need to set it up: if you don't understand what the page says, it's probably not for you. It all depends on your knowledge and how much time you want to spend on it. But, by all means: if you have the inclination or the time or both, spend some time looking at the different options. You'll be surprised at some of the cool features that some aggregators have, even if they are sometimes 'experimental'.

Using your aggregator

Once you've installed an aggregator (or decided to use the built-in aggregator of your weblog software), it's time to subscribe to some feeds.

Feeds (or newsfeeds) is the name usually given to the sources of information that aggregators use to obtain the content they display. Feeds are technically similar to web pages, like those displayed in a web browser. Web pages, however, are written in HTML (HyperText Markup Language), which is designed to create pages readable by humans. Feeds, on the other hand, are used for syndication and are intended to be "read" (or rather, processed) by software, and so they carry a different type of information and are more structured and strict about the data they can contain. Feeds are written using a language called XML (eXtensible Markup Language), using a de-facto standard "dialect" of it called RSS.
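To make the contrast with HTML concrete, here is a minimal sketch of what an aggregator does with a feed: it parses the XML and pulls out the structured fields. The feed content below is made up (the titles and URLs are placeholders), and the structure follows the RSS 2.0 convention of a channel containing items.

```python
import xml.etree.ElementTree as ET

# A minimal, hypothetical RSS 2.0 feed: a <channel> with one <item>.
FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>diego's weblog</title>
    <link>http://example.com/blog/</link>
    <item>
      <title>an introduction to weblogs</title>
      <link>http://example.com/blog/intro.html</link>
    </item>
  </channel>
</rss>"""

root = ET.fromstring(FEED)
channel = root.find("channel")
print(channel.findtext("title"))  # the feed's own title
for item in channel.findall("item"):
    # Each item corresponds to one post, with its title and permalink.
    print(item.findtext("title"), item.findtext("link"))
```

Because the data is structured like this, an aggregator can tell titles from links from dates without any guesswork, which is exactly what HTML pages don't offer.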

Aggregators let you 'subscribe' to these feeds in different ways. Most pages identify their feeds as 'Syndication', or 'RSS', or 'RSS+version number'. (See the next section if you'd like to know more about these differences.) Many pages mark the feed with an orange icon that says "XML". Depending on the software you are using, subscription itself can be easier or more difficult. In all cases, the following set of steps will work to subscribe to a feed:

  • Find the link on the page that says "Syndication", "Syndicate this site", "XML", "RSS", etc.
  • Right-click (or press-hold in Macintosh) over that link. Your browser will show a menu of options, and one of them will be "Copy Link Location" or "Copy Shortcut". Select that option.
  • Now go to your aggregator and find the option to Add or Subscribe to a new feed. Select it and, when you are asked to type in the URL (link) of the feed, right-click (or press-hold on Macintosh) again on the field and select "Paste". This pastes the URL into the field. If right-click doesn't work, you can try the keyboard shortcuts: Ctrl+V or Shift+Insert on Windows, or Command+V on the Mac.
Now, just to be clear: these instructions are the most basic of all, supported everywhere, and they are important so that you can use them "when all else fails". Many aggregators come with a built-in choice of feeds that you can subscribe to with the click of a button. Many aggregators allow you to simply type in the URL of the page you are visiting (as opposed to that of the feed) and then discover the feed for you. Others also establish a "relationship" with your web browser, so that when you click on the icon or link for a feed, they give you the option of automatically subscribing to it.
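That "discover the feed for you" trick works because pages conventionally advertise their feeds in the HTML head with a `<link rel="alternate">` tag. Here is a small sketch of that autodiscovery step, using Python's standard HTML parser; the page snippet and URL are made up for illustration.

```python
from html.parser import HTMLParser

class FeedFinder(HTMLParser):
    """Collect feed URLs advertised via <link rel="alternate"> tags."""
    FEED_TYPES = {"application/rss+xml", "application/atom+xml"}

    def __init__(self):
        super().__init__()
        self.feeds = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if (tag == "link" and a.get("rel") == "alternate"
                and a.get("type") in self.FEED_TYPES and "href" in a):
            self.feeds.append(a["href"])

# A hypothetical weblog homepage advertising its RSS feed:
page = ('<html><head><link rel="alternate" type="application/rss+xml" '
        'href="http://example.com/index.xml"></head></html>')
finder = FeedFinder()
finder.feed(page)
print(finder.feeds)
```

An aggregator doing this for real would first download the page you typed in, run something like the above over it, and then subscribe to whichever feed URL it found.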

Okay, now for a bit of a detour. If you'd like to know a bit more about RSS and related technologies and have an interest in the technical background, or are technically proficient, please read the next section. Otherwise, skip to the following section.

Ready?

Okay, tell me more about RSS.

Before I start: this is a highly charged (and even emotional) issue in the weblog developer community. People have very different opinions, and this is just my take on the situation. By all means, go to different search engines and search for "history of RSS", "history of syndication", "RSS politics" and similar terms to find pointers to different sides of the argument.

Politics? Did I say "Politics"?

Yes. Yes I did.

Sometimes people mean different things when they say "RSS". Some people see it only as a way to syndicate web content. Others see it as a way to pull all sorts of information into clients. There are different opinions as to how it should be used, how it should do what it does, etc.

In the majority of cases RSS stands for "Really Simple Syndication", but you might come across other places where it is described as meaning "RDF Site Summary" (RDF, which stands for "Resource Description Framework", is yet another XML dialect that is more flexible, but also more complex). I prefer to separate them clearly and call RSS-based feeds RSS feeds and RDF-based feeds RDF feeds, but I might be in the minority (so when I say "RSS" I mean Really Simple Syndication, not the RDF-based format). There is another syndication format being developed at the moment (also XML-based) called "Atom". Additionally, there are different versions of RSS: 0.91, 0.92 ... 2.0 (the current version of RSS is 2.0)... and RDF-based syndication is sometimes called RSS 1.0 (yes, this last one in particular is quite confusing). These are all various formats for syndication. If we lived in a perfect world, we'd have only one format. But that is not the case.

I can imagine you're thinking: So, even if I know about technology, why do I care about all of this "XML mumbo-jumbo"?

Well, if you start your own weblog and begin to discover new weblogs and new feeds, and are curious about the technology, more likely than not you will read about this: people passionately arguing about these things, mentions of RSS of this version and that, and so on. It's a subject that can't really be avoided completely. If you're interested in knowing more, not telling you about this would be like pretending you can fly across the Atlantic without ever having to know that you're likely to experience some kind of delay on departure.

But, as the Hitchhiker's Guide to the Galaxy says: Don't panic. :-)

More specifically, I'm mentioning this for two reasons:

  1. Because you, as a user (who is nevertheless aware of or interested in the technology behind this), are likely to encounter this in subtle forms. For example, you might go to one news site and see that they say they provide "RSS 0.91 Feeds". Or you might see the XML orange icon shown above. Or you might see they say just "RSS", or "RDF". You will quite possibly see mention of all of these names and acronyms when you're looking at aggregator software. So, in seeing all this, you might wonder: is one better than the other? Which one should I choose? To that I'd say: start by not worrying too much. RSS is the most common format by a mile, and that is good because it's the one you'll be most likely to encounter, so you won't have to think about this much. Additionally, all aggregators support the most used formats, and many of them support all the formats in existence. In general, you don't really have to know which of these formats is actually being used. But once you get a grasp on things, it might be a good idea to read about the history behind these differences and make up your own mind about them. (If you are a software developer, and know about XML, good starting points are Dave's RSS 2.0 political FAQ and Mark's What is RSS? and History of the RSS Fork. Beware: these discussions tend to get quite technical very quickly).
  2. Because as you're perusing weblogs or news sites, it's better to be aware of what these things mean to avoid getting confused. Once you have this bit of information, it becomes easy to look for the XML orange icon, or some link that says "RSS" or "RSS+version number" or RDF or whatever.
Again, if we lived in a perfect world only a few people would ever have to deal with this stuff. But this is relatively new technology, and we are still trying to figure out all of its uses, and in some cases, what it is exactly. The important thing, in my opinion, is that you as a technically proficient user don't feel as if things are beyond your grasp. My pet peeve is when people using computers say "I don't know what I did, it seems I broke something". But when (say) the washing machine stops working, we never say "I broke the washing machine" simply because we were using it. We say "the washing machine broke down". So why the difference? I can think of many reasons: error messages in computers generally put the burden on the user, for a start. But regardless of that, what I would say is this: if something seems complicated (like all of this "XML mumbo jumbo"), it's not a problem with your knowledge of computers. It's our problem, a problem of the software developers. (If you get involved in the technology or the community in any way, then it will be your problem too :-)).

And so what? you ask. Well. Weblogs allow a new level of interaction. You can make a difference. Perhaps for the first time ever, users can actually influence and participate directly in the creation of the tools they use every day, through those very tools. So if there is something that is difficult to use, something confusing, it's likely that you can find a weblog or reference for the software author(s). Post a comment. Write your own post about it. Get involved if you can, and by that I don't mean 'develop software'---simply giving opinions and ideas is a good start. People will listen, and the problem might even get fixed!

Now back from the technical depths of this section, and to simpler things.

So how do I find these 'feeds'? And how do I create them?

First of all, let's deal with feed creation. Just as your weblog software automatically generates the HTML page that is displayed in a browser (when you post an entry), most weblog software also generates the feeds for you, and places a link for the feed in your homepage. All of this is done automatically by default---if you are not sure of whether or how this is happening, check your weblog software's help page for "Syndication" or "RSS" and you should be on your way.

Finding feeds to subscribe to is not difficult. If you're reading other weblogs, or you find one that looks interesting and would like to keep up with what they're writing, just look for the link or icon that identifies their feed and subscribe "as you go". In some cases, the aggregator software will come pre-subscribed to some feeds, or will suggest new feeds to subscribe to. The methods used to find weblogs (mentioned in part one) apply to finding feeds as well; for example, both Technorati and Blogstreet (as well as other sites) allow you to find new weblogs, and hence new feeds to subscribe to. There are "feed directories" like Syndic8 that let you find feeds on certain topics easily. Finally, there's Feedster, a cool search engine that deals specifically with syndication. All the results in Feedster come from feeds, so it lets you not only look for information but also find new feeds that you'd be interested in looking at.

At the beginning you mentioned news sites. Are there 'feeds' for news sites too?

Yes. Many news sites and organizations today support news feeds. Examples: the BBC, Rolling Stone Magazine, and News.com. Look in your favorite news site, or use the feeds recommended by your aggregator (if any), or use some of the directories mentioned in the previous section to find more.

Why did you say that syndication is 'instrumental' for weblog communities?

I think that weblogs are cool, but syndication+weblogs is really cool. It's a case of 1+1 = 4. Because syndication allows you to subscribe to many sources, you can keep up to date with a lot more, and it becomes easy for the people in a particular community to stay current on what the others are doing. Things like Feedster and Technorati reinforce the "loop" that feeds create. These loops are "loosely coupled": connected through links, with people notified of updates through feeds, both in unobtrusive ways. The conversation moves across sites, as people find the time or have the interest to join in.

Posting from your aggregator

Since a big part of weblogs is the 'conversation' established between different sites, it would be great if you could just re-post a piece of something you've read, or comment on it, no? Many aggregators let you do just that. For example, since Radio is both weblog software and an aggregator, you can use the two 'in tandem' to post comments on things you're reading about. NetNewswire, NewsGator, FeedDemon, clevercactus (as well as others) all allow you to post to weblog software as well as read feeds. I won't go into the details of how to do this, mainly because the configuration varies from software to software, but I wanted to mention it as something that exists, and that you might find useful as you get more comfortable with weblogs and aggregation.

Final final words

Both part one and two are an overview of concepts that (as I said) are still relatively new. As a result, things are still evolving, and new applications are being created all the time. Sometimes the technology can appear to be daunting, but there's lots of people working on making it better, and easier to use. Once you are more 'embedded' in the world of weblogs, you will start finding new uses and applications---things that were simply not possible only a few years ago---to communicate, collaborate and express yourself.

See you in the blogsphere! :-)

Categories: soft.dev, technology
Posted by diego on November 2, 2003 at 7:33 PM

an introduction to weblogs

During the last Dublin webloggers' meeting I was asked the question, "How do I start a weblog?" I began answering, somehow under the impression that it would be a simple answer. But it wasn't. As I went into more detail I realized that I was giving out more information than anyone in their right mind could easily digest. I then decided to write up this short intro so that I could use it in the future. A big part of this for me is an exercise in writing down things that might seem obvious to me (and others) but not so much to those who aren't involved in weblogs yet.

For this short intro I will assume very little: that you use the Internet regularly and that you might check news sites now and then, such as CNN or the New York Times. And that's it!

And of course, any corrections, additions and comments are most welcome. A note: this deals only with weblogs, not with newsfeeds, RSS, newsreaders, and such. Hopefully I'll get around to writing another similar introduction sometime in the near future, or add to this one soon. :)

Update: I have posted part two of this guide here.

So, here it goes...

Intro to the intro

Before we begin...

...some terminology: there are words that you will see often around weblogs: client, server, host (or hosting). Some of these words might be familiar already (and are probably obvious to many!), but just to be 100% sure, here are some definitions as I'll use them, trying not to take too much liberty with their actual technical meanings:

  • weblog: the subject of this piece. :-) Seriously though, weblogs are often also called blogs, and some publications refer to them as "web logs"(note the space between the words). There are all sorts of blog-related terms, such as blogsphere, blogosphere, blogland, etc. 
  • content: basically, information. Content is anything that can be either produced or consumed. Web pages (HTML), images, photos, videos, are all types of content. The most common type of content in weblogs today is text and links, with images growing in popularity and audio in a slightly more experimental phase. Videos are not common, but it's possible to find examples.
  • client: a PC, or a mobile device such as a Palm or a cellphone. Clients, or client devices, allow you to create content (text, images, etc.) and then move it to a server at your leisure.
  • server: a machine that resides somewhere on the Internet and has (nearly) 100% connectivity. Servers are where the content for a weblog is published, that is, made available to the world. (We'll get to how, or whether, you even need to choose a server later; in most cases this choice depends on the software used.) Servers are also commonly called hosts, and the "action" of leaving information on a server is usually referred to as hosting.
  • URL: or "Uniform Resource Locator," the text that (usually) identifies a webpage and (generally) begins with "http://...". Sample URLs are: http://www.cnn.com, http://www.nytimes.com, and so on.
  • link: a hyperlink, essentially a URL embedded within a webpage. Hyperlinks are those (usually blue) pieces of text that take you to another page. Links are a crucial component of the web, but more so (if that's at all possible) with weblogs. Links are what bind weblogs together, so to speak. Through links I can discover new content, follow a discussion, and make my viewpoint on something known in an unobtrusive manner (more on this later).
  • client/server: the basic model through which weblogs are published today, and the model around which the Internet itself is largely based (This is not technically 100% true, since the original Internet was peer to peer (P2P), and we are all using P2P applications such as Kazaa these days, but let's overlook that for the purposes of this document). Clients create the content and then send (publish) it to a server. The server then makes the content available.
  • post: or posting, or entry, a single element of one or more types of content.
  • referrer: another crucial component of weblogs, referrers are automatically embedded by your web browser when you click on a link. (See more about referrers in the subsection on 'community' near the end of this page.)
  • public weblog and private weblog are two terms I'll use to separate the two main types of weblogs that exist. Public weblogs are published on the Internet without password or any other type of "protection", available for the world to see. Private weblogs are either published on the Internet but protected (e.g., by a password) or within a company's network. Most of what I'll discuss here applies to both public and private weblogs.
  • permalink: a permalink is a "permanent link", a way to reference a certain post "forever". "Forever" here means until a) the person that created the post changes their weblogging software, or b) their server goes down for whatever reason. If you read news sites, you'll notice that news stories generally have long, convoluted URLs; this is because every single news article ever published is uniquely identified by its URL. If you copy a URL and save it in a file, and then use it again six months or six years later, it should still work. All weblog software automatically and transparently generates a permalink for each post you create, and the way in which weblogs reference each other is by using the permalinks of posts or entries.
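
As an aside for the curious: permalinks are typically built from a post's date plus some identifier, which is why they keep working as long as the post stays put. Here is a toy sketch of how weblog software might generate one; the URL scheme (year/month/day plus a title "slug") is a common convention, not any particular product's, and the base URL is made up.

```python
import re
from datetime import date

def permalink(base_url, posted, title):
    """Build a date-based permalink from a post's date and title.

    The title is turned into a 'slug': lowercased, with runs of
    non-alphanumeric characters collapsed into hyphens.
    """
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
    return f"{base_url}/{posted.year}/{posted.month:02d}/{posted.day:02d}/{slug}.html"

print(permalink("http://example.com/blog", date(2003, 11, 2),
                "An Introduction to Weblogs"))
# → http://example.com/blog/2003/11/02/an-introduction-to-weblogs.html
```

The details vary between products (some use post numbers instead of slugs), but the idea is the same: a stable URL that other weblogs can point at.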

Getting started

First of all, what is a weblog?

There are many good descriptions of what weblogs are (and aren't) scattered through the web. Meg's article what we're doing when we blog is a good starting point. One of the oldest descriptions around is Dave's history of weblogs page, and he went further in his recent essay what makes a weblog a weblog. Other interesting essays are Rebecca's weblogs: a history and perspective, and Andrew's deep thinking about weblogs.

Not surprisingly (as you might have noticed from reading the articles/essays linked above), people have slightly different takes on what exactly constitutes a weblog, but there is general acceptance that the format in which content is published matters, as well as the style in which the content is created. Additionally, weblogs are usually defined by what they generally are, rather than through an overarching definition.

Here's my own attempt at a short list of common characteristics of weblogs. Weblogs:

  • generally present content (posts) in reverse chronological order.
  • are usually informal, and generally personal.
  • are updated regularly.
  • don't involve professional editors in the process (that is, someone who is getting paid explicitly to review the content).
Beyond that, format, style and content vary greatly. I think this is because weblogs, being generally personal---that is, by or about a person---have and will have as many styles as there are personal styles.

What's the difference between weblogs and "classic" homepages? Technically, there isn't a lot of difference. The main difference is in terms of how current they are, how frequently they are updated, and the navigation format in which they are presented (weblogs have a strong component of dates attached to entries). I'd say that homepages are a subset of weblogs. You could easily create a weblog that looked like a homepage (by never updating it!) but not the other way around.

Sometimes weblogs have been billed as "the death of journalism," which I think isn't true. If there are any doubts, you can check out weblogs written by journalists, and compare them to the articles they write. They are qualitatively different. I think there will always be room for people who make a living reporting and searching for stories, and for editors who correct what they write. The role of news organizations and journalism might change a bit because of weblogs; maybe it will become more clear and focused, but that doesn't mean it will disappear. Weblogs are a different kind of expression, period, and as such they are complementary to everything else that's already out there.

However, the best way to see what weblogs are like is to read them (as opposed to reading about them), and then try one yourself. As I've mentioned, weblogs come in different shapes and sizes. Some people tend to post long essays, some people just write short posts. Some talk about their work, or about their personal life. There are an untold number of weblogs that are simply ways for small groups to share information efficiently within their company's network, to create a "knowledge store" for projects. Some people post links that they find interesting. Some add commentary. Others only comment on other people's weblog entries. Some weblogs are deeply personal. Some talk only about politics, or sports. Quite a number of them talk about technology. Some weblogs have a huge number of readers. Others only a few dozen. Still others are completely personal and are read only by the person who writes them. Some public weblogs (relatively few) are anonymous; most identify the person. Some are updated many times a day, others once a day, others a few times a week.

You get the idea. :-)

So, here are some good examples of well-known weblogs (at least within their communities) to read and get an idea of what they're about. Check them out; read them, and read about the people who create them (alphabetically): Anil, Atrios, Betsy, Burningbird, Dan, Dave, Doc, Esther, Erik, Evan, Glenn, Gnome-Girl, Jason, Jon, Joi, Karlin, Halley, Mark, Meg, Rageboy, Russ. All of these weblogs are, in my opinion, great examples of weblogging in general. You may or may not agree with what they say, you may or may not care, but they are all a good starting point to show what weblogs are and what they make possible.

Those more embedded in the weblogging community might object to presenting such a small list to represent anything, or might put forward different names, so I just want to say: yes, I agree. But to show different styles of weblogging, and provide some initial pointers, we have to start somewhere. I'll go further on the subject of discovering weblogs below, in the subsection about community.

This all sounds intriguing, but will I like it?

That's a difficult question. :) I guess my answer would be "try it to see if it fits". As mentioned below, weblog software is invariably free to try (at least) and so there is no cost in getting started. My opinion is that some people are more attuned to the concept than others, because they are already sort of weblogging even if they don't describe it as such. For example, if you like to rant about anything, if you keep pestering your friends, family and coworkers about different things that you've seen or read or thought about, or if you regularly launch into diatribes about all and any kinds of topics (e.g. "The emerging threat to African Anthills and their effect on the landscape") then you might be a Natural Born Blogger. :-)

So, again, just try it out. If it doesn't work out, no harm done. It's certainly not for everyone. But you just might discover a cool means of expression, create a new channel to communicate with the people you know, and find a way to make new friends and let other people find out about you.

Okay, I'm sold. How do I get started?

The first step in starting a weblog is choosing the software you will use. There are many products available.

But before going into them, there are two main categories of software to choose from. I'd ask: how much do you know about software, and how much do you want to know? Do you run or maintain your own server? Are you interested in running a private, rather than public, weblog (say, for your workgroup), don't want to worry (too much) about passwords and such, and can handle yourself technically?

If the answer to any of the questions above is yes, skip this next item and go directly to 'Self-managed weblog software' below. Otherwise, you'll probably be better off with 'End-user software'.

End-user weblog software

Here are some of the most popular products (in alphabetical order). All of them have been around for several years and have been important drivers of the weblog phenomenon (except for TypePad, which launched in mid-2003 but is based on MovableType, another popular tool "from the old days"--see below).

  • Blogger. A fully hosted service, Blogger lets you post and manage your weblog completely from within your web browser. Blogger is now part of Google (yes, the search engine). A good starting point for blogger use is blogger's own help page.
  • LiveJournal. A hosted service, like Blogger, with a long-time emphasis on community features. For help on LiveJournal, check out LiveJournal's FAQ page.
  • Radio Userland. Radio runs a client as well as a server in your PC and lets you look at your content locally through your web browser. To publish information, Radio sends the content to Userland's public servers. Radio's homepage contains a good amount of information and links to get started, and a more step-by-step introduction to Radio can be found in this article.
  • TypePad. Fully hosted service. An end-user version of MovableType (see below) with more capabilities (in some cases) and some nice community features. To get started, check out the TypePad FAQ.
So, which one of these should I choose?

Short answer: it depends.

Long answer: it depends. :) That is, it depends on which model you prefer. Blogger is free, LiveJournal has a free and a paid version. Radio and TypePad are not free but offer trial versions. Blogger, LiveJournal and TypePad are fully hosted, while Radio keeps a copy of your content on your PC as well as hosting your content on a public server. All of them are free to try, so looking around for which one you find best is not a bad idea. :)

Self-managed weblog software

Here are some of the most popular products (again, in alphabetical order).

All of these products involve some sort of setup and, at a minimum, some knowledge of Internet servers and such. (All the links from the list contain information on installation and setup). If you have set up anything Internet-related at all in the past (say, Apache or IIS), you should be able to install and configure these products without too much of a problem. (If you don't know what IIS or Apache is you should probably be looking at the previous section, 'end-user software').

Beyond the first post

Are there any rules to posting?

Generally, weblogs being what they are, the answer is no. But there are some things that I personally consider good practice that I could mention:

  • Links are good for you. Always link back to whatever it is you're talking about, if possible. A hugely important component of weblogs is the context in which something is said, and links provide a big part of that context.
  • The back button rules: Never repost a full entry from another person without their permission. "Reposting" means taking someone's text and including it in your own entry. Usually this is done to comment on it, but I think it's better to send people to whatever it is you're talking about, with quotes when necessary to add specific comments, rather than reposting everything. All web browsers have "back" buttons; once someone's read what you're talking about they can always go back and continue reading your take.
  • Quote thy quotes: Quotes of another person's (or organization's) content should always be clearly marked.
  • Thou shalt not steal. Never, ever, ever, repost a full entry that someone else wrote without at the very minimum providing proper reference to the person who wrote it. Even then, try to get permission from the author. See 'the back button rules' above.
There is another question that usually starts up discussion in the weblogging community, the subject of editing. As I mentioned above, weblogs in general are self-edited, but even if they are, how much self-editing is appropriate? Again, it depends on your personal style. Some bloggers don't edit at all and just post whatever comes to their mind. Some write, post, and then edit what they posted. Others do self-editing before posting and publish something only when they're happy with it. You should choose the style you're comfortable with.

What about comments to my posts? And what's this 'Trackback' I keep hearing about?

Weblog software usually allows you to activate (or comes by default with) the ability for readers to leave comments to your posts. This is generally useful but you might not want to do it. As usual, it's up to you.

Trackback is something that allows someone who has linked to you to announce explicitly that they have done so, thus avoiding you (and others) having to wade through referrers to find out who is linking to you, and providing more context for the conversation. Some weblogging systems (e.g., TypePad, Radio, MovableType, Manila) support Trackback, but some don't (e.g., Blogger, LiveJournal). Once you have become familiar with weblogs, Trackback is definitely something that you should take a look at to see if you might be interested in using it. Here's a beginner's guide to Trackback from Six Apart, the company behind MovableType and TypePad (that created the Trackback protocol), as well as a good page that explains in detail how Trackback works.

These mechanisms are useful more for the community aspects of weblogs than anything else, and usage of them varies widely from weblog to weblog.

And what about all this 'community' stuff?

Because weblogs are inherently a decentralized medium (that is, there is no single central point of control, or one around which they organize), it's much harder to account for the communities they create and to track their usage. (For example, the actual number of weblogs worldwide is estimated at the moment to be anywhere between 2 and 5 million. Not very precise!). But there are ways to find new weblogs, and here are a few of my favorites.

Update directories

There are sites like weblogs.com and blo.gs as well as others that are usually notified automatically by weblog software when a new entry is posted. Because of that they are a good way of (randomly) finding new weblogs.
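For the technically curious: the "automatic notification" these update directories receive is conventionally a small XML-RPC call named weblogUpdates.ping, carrying the weblog's name and URL. Here is a sketch of what weblog software sends; the weblog name and URL are made up, and the payload is only built, not actually sent.

```python
import xmlrpc.client

# Build the XML-RPC request body for a weblogUpdates.ping call, the
# convention used by update directories such as weblogs.com.
# (Weblog name and URL below are hypothetical.)
payload = xmlrpc.client.dumps(
    ("diego's weblog", "http://example.com/blog/"),
    methodname="weblogUpdates.ping",
)
print(payload)

# Actually notifying the directory would mean POSTing this payload to
# its RPC endpoint, e.g. (not run here):
# server = xmlrpc.client.ServerProxy("http://rpc.weblogs.com/RPC2")
# server.weblogUpdates.ping("diego's weblog", "http://example.com/blog/")
```

Most weblog software fires a ping like this automatically every time you post, which is why these directories are a decent (if random) sampler of freshly updated weblogs.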

Blog directories

What? This sounds a lot like "the central point" I just said didn't exist. Well, it does and it doesn't. There are directories, but they are not 100% complete, because they rely on finding new weblogs automatically (for example, through weblogs.com updates and other means) or through people registering their weblogs with them, and both methods are fallible. Two examples of this are Technorati and Blogstreet. When you go to those sites you'll notice they talk about "ecosystems" to refer to weblogging communities, and that's a pretty accurate word for what they are. Those sites, like others that perform different but related functions (such as Blogshares or BlogTree), also let you explore the communities around your weblog, discover new weblogs, etc. Daypop, Blogdex and blogosphere.us focus a bit more on tracking "trends" within the weblog community (particularly Blogdex). Technorati and Blogstreet do this as well.

Search engines

A lot (and I mean a lot) of results for search engine queries these days lead to weblog entries on or related to the topic you're looking for. Chances are, those weblogs contain other stuff that you'll find interesting as well. Some good search engines are Google, Teoma, and AllTheWeb.

Targeted directory/community sites

There are sites that center around a particular topic and put together a number of weblogs that are devoted to or usually talk about that topic. For example, Javablogs is a weblog directory for weblogs that have to do with the Java programming language.

Referrers

Referrers are a mechanism that has existed since the early days of the web, but one that has acquired new meaning with weblogs. The mechanism is as follows: if you click on a link on a page, the server hosting the page you are going to will record both the "hit" on that page and the source of the link. Those statistics are generally analyzed frequently (e.g., once every ten minutes, once every hour) and displayed on a page for your perusal. So if someone posts a link to your weblog and people start clicking on that link to read what you've said (and depending on the weblog software you're running) you will be able to see not only how many people are reaching you through that link, but also who has linked to you, which then helps you discover new opinions, people with similar interests, etc. Directories like Technorati also track who is linking to your site, and so serve a similar function (but, again, since they are not 100% accurate you might not get the "full picture" just from looking at them).
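As a sketch of how a log-analysis script might extract this, here's a minimal Python version that tallies Referer headers from Apache-style "combined" log lines. The sample lines and URLs are made up for illustration:

```python
import re
from collections import Counter

# Hypothetical sample lines in Apache "combined" log format, which
# records the Referer header in the second-to-last quoted field.
LOG_LINES = [
    '1.2.3.4 - - [15/Oct/2003:18:30:00 +0100] "GET /weblog/ HTTP/1.1" 200 5120 "http://www.scripting.com/" "Mozilla/5.0"',
    '5.6.7.8 - - [15/Oct/2003:18:31:00 +0100] "GET /weblog/ HTTP/1.1" 200 5120 "http://www.scripting.com/" "Mozilla/4.0"',
    '9.8.7.6 - - [15/Oct/2003:18:32:00 +0100] "GET /weblog/ HTTP/1.1" 200 5120 "-" "Googlebot"',
]

# Matches: "request" status bytes "referer" "user-agent" at end of line.
REFERRER = re.compile(r'"[^"]*" \d+ \S+ "([^"]*)" "[^"]*"$')

def count_referrers(lines):
    """Tally Referer headers, skipping direct hits (logged as '-')."""
    counts = Counter()
    for line in lines:
        m = REFERRER.search(line)
        if m and m.group(1) != "-":
            counts[m.group(1)] += 1
    return counts

print(count_referrers(LOG_LINES).most_common())
```

A real referrer page is just this tally, regenerated on a schedule and sorted by count.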

Other options

There are many. :) The best additional example I can think of is some of the community features of LiveJournal and TypePad, which allow you to create groups of friends with whom you prefer to share what you write, etc.

Is blogging dangerous?

Yes. Most definitely. And addictive, too. :-)

Seriously though, while blogging might not be literally dangerous, it is most definitely not free of consequences. We sometimes have a tendency to take ourselves too seriously, or to misinterpret, or to rush to judgement (I wrote about these and other things in rhetoric, semantics, and Microsoft). Some people have been fired from their jobs because of their weblogs. Others have lost friends, made enemies, and gotten into huge fights (mostly wars of words, but ones that nevertheless have an impact on both online and offline life). On the bright side, weblogs have been at the core of a large number of positive developments in recent years, mostly technical but of other kinds as well, and have provided comfort and even news when everything else seemed to be collapsing, both on a large scale (for example, the Sept. 11 terrorist attacks in the US) and for individuals and small communities. People have made scores of new friends, gotten job offers, and started companies through them.

The number one reason for this is that, contrary to what you might think (and unless you're writing for yourself and not publishing anything anywhere), people will read what you write. It might be a few people. It might be many. It might be your family, your friends, boss, or your company's CEO, or a customer. (Robert Scoble, who works at Microsoft, posted some thoughts on this topic today, here and here). This is easier to see with a private weblog, but I'm always surprised at how easy it is for me to forget that it happens (of course) with public weblogs, all the time.

My opinion is that in weblogs, as in life, whenever you expose part of yourself in any way, whenever you engage in a community, whenever you express yourself, these things tend to happen. :-)

Final words

You might have noticed that there are a lot of "do what you think is best" comments interspersed with the text above. This is not a coincidence. Blogs are, above all, expression. Blogs and the web in general allow us to look at many viewpoints easily, cross-reference them, etc. Check things out. Look for second, third, fourth, and n-th opinions (and this definitely includes the contents of this guide!).

You have the power!, or in other words: It's up to you.


Read more in an introduction of weblogs, part two: syndication.

Categories: art.media, soft.dev, technology
Posted by diego on October 31, 2003 at 10:34 PM

ms-google? probably not, but...

Back at the end of August I wrote, on google and the markets:

I was thinking today: how much more possible is it that Google will be acquired? .... No, not AOL--not enough cash, too many problems. But Microsoft? I admit, it's far-fetched, highly unlikely, etcetera. But they could make "an offer they couldn't refuse". It's left as an exercise for the reader to figure out, in this hypothetical scenario, who would be Don Corleone, and who would be Luca Brasi.
Okay, now check out this article in today's New York Times:
Google, the highflying Silicon Valley Web search company, recently began holding meetings with bankers in preparation for its highly anticipated initial public offering as it was still engaged in meetings of another kind: exploring a partnership or even a merger with Microsoft.

So apparently Microsoft was thinking something similar, around that time. Sure, it's still far-fetched. But never say never...

Heh.

PS: And yesterday night I watched The Godfather... :)

PS2: The Economist has an interesting article this week: How good is Google?.

Categories: technology
Posted by diego on October 31, 2003 at 8:48 AM

hate microsoft? like working for free? we've got a job for you.

REDMOND, WA -- Microsoft Corp. unveiled a new strategy today designed to off-load development of its products to the very same people that hate them. Under the program, self-described Microsoft haters that subscribe to Microsoft's MSDN program at the low cost of between 500 and 2000 USD, will be able to download the latest build of Longhorn, Microsoft's next-generation operating system. After spending untold hours setting the system up, those users will be able to write up and even publish their ideas and criticism on their own weblog, or public forums or talk about them with friends and family. More significantly, Microsoft vowed to actually pay attention to some of the feedback. Robert Scoble described this unprecedented move of allowing people to talk about things as follows: "Why is this a massive change? Everytime we've released a version of Windows before we kept it secret. We made anyone who saw it sign an NDA (non-disclosure agreement). Even many of those of you who signed NDAs weren't really given full access to the development teams and often if you were, it was too late to really help improve the product." Microsoft noted that they hoped that these new hate-filled testers would prove more effective than the estimated 50,000 internal and 20,000 external testers that had given feedback on previous versions, going as far back as Windows 2000. "Honestly," said one Microsoft executive who wished to remain anonymous, "All those guys must have been asleep at the wheel. I mean, look at the stuff we've released in the last three, four years. Nothing works. We've had so many viruses and worms that we've got calls from WHO offering to send out a team to help."

In his posting, Scoble added "The problem is, there are two types of people: 1) Those who hate Microsoft. 2) Those who hate Microsoft but want to see it improve."

When asked if Scoble had grossly over-simplified the situation by assuming that everyone on the planet fell into one of those two categories, a Microsoft representative said "Not at all. We did a lot of research on this. People care about three things: food, whether Ben and J.Lo will get married, and hating Microsoft." The representative added that there is always a margin of error. "The survey was world-wide, so there were flukes. For example, some respondents from a small town north of London put someone or something called Robbie Williams, or Williamson instead of Ben & J.Lo, but we don't know who or what that is. We presume it's a codename for a Linux kernel build."

And what about people that say they don't hate Microsoft, but would simply, only, like to see it play fair in the market and stop leveraging one monopoly to get to the next? What about people that say that Windows is fine and that the only problem is with how Microsoft attacks competitors with a lot less resources? "Nonsense. Those people are just confused, or need to get off the glue. Just like those losers in the middle of Africa or whatever. Like, people that say they can't afford computers, or are worried about wars, famine, terrorism, AIDS, whatever. They watch too much TV and they get ideas."

"There is no bigger deal in the world right now aside from Longhorn, and people, all people, understand that. They want to improve their lives and testing Windows for us for free is the way to go." The representative went on to note that their research had shown that "people" were "tired of dealing with bugs" in the "old" versions of Windows. "This is all about giving customers what they want, and guess what, they want new bugs, too. They've tried OS X, for example, but it just works, so they have to go back to Windows." Many people said they "missed the thrill" of dealing with the possibility of losing a day's work in a crash, while others loved rebooting, because "it allows them to go get coffee regularly, or a sandwich, things of that nature, which is not surprising since the survey also found that food is somehow important to people."

Offloading the design and testing process has other benefits too. "It's also the blame factor," the executive added. "Imagine. Longhorn is released and it doesn't work very well. All those Microsoft haters--I mean, they are the ones who signed off on it in the first place, right? How are they going to criticize it then? It would be their fault, right?"

Would the hatemongers be rewarded in some way? "Hell no," the executive said. "With Windows XP we actually charged people to get the beta, and it worked like a charm. Although it has been suggested that we try out BillG's ham sandwich bundling theory, we probably won't---ham has too much fat. It's just not healthy."

Although all companies appreciate and use feedback from users and developers, industry commentators noted that much smaller companies such as Apple or QNX, as well as the group of developers that work on Linux, have been able to develop OS products without formally off-loading design and early testing tasks to the general development public for free. But when asked why Microsoft, which holds USD 50 billion in cash and short-term securities as well as two of the most profitable monopolies in history, can't deploy resources to develop the product on its own, the executive explained: "Well, the truth is that a large part of that 50 billion is going to be used for our new project, a Microsoft theme park. Bill wants to buy Seattle, including the Boeing factories to the north, and turn it all into an amusement park. You'll have all the classics: the SteveB roller coaster, the Blaster Worm House of Horrors, and the DOJ shooting range."

Finally, he hinted "And watch out. ClipIt will be a favorite character."

[Scoble's "How to hate Microsoft" originally via Dave]

Categories: soft.dev, technology
Posted by diego on October 23, 2003 at 5:05 PM

the last flight of concorde

concorde.png
Later today the last scheduled Concorde flight is due to take off from London-Heathrow to New York-JFK, returning tomorrow. There's a good article about it in last week's Economist titled "the less beautiful future of international business travel", and I agree. Too expensive? Sure, completely out of my budget (and most of the planet's too :)). But so what? Concorde is a fascinating piece of engineering and it's beautifully designed. Sometimes I find it good to know that certain things exist. They provide a yardstick: "Here's what we're doing now, you go further". We seem to be losing a lot of those these days. (I'm repeating myself, I know).

Example: it's sad to see that the future of supersonic travel seems to be completely on hold. Boeing is developing a new jet, but it's not much of a departure from, say, the 737 (7E7 is its name, the E supposedly stands for "efficient") and Airbus is going forward with the (to me) horribly stupid idea of creating a 555-seat airplane, the A380 (check out their A380 site. If that site is any indication of how crappy travel will be in that airplane, we'll be better off with the cattle on a barge). I mean, come on. 555 seats? Can you imagine how long it will take to load and unload that plane? Air travel is bad enough already. At least Boeing is coming out with a smaller (250-seat), more efficient plane with the 7E7.

Oh, well. Farewell, Concorde.

Categories: technology
Posted by diego on October 23, 2003 at 12:22 PM

micro-everything

According to Sun's CTO, microprocessors are on their way out. Quote:

"Microprocessors are dead," Papadopoulos said, trying to provoke an audience of chip aficionados at the Microprocessor Forum here. As new chip manufacturing techniques converge with new realities about the software jobs that computers handle, central microprocessors will gradually assume almost all the functions currently handled by an army of supporting chips, he said.

Eventually, Papadopoulos predicted, almost an entire computer will exist on a single chip--not a microprocessor but a "microsystem." Each microsystem will have three connections: to memory, to other microsystems and to the network, Papadopoulos said.

He predicted that as more and more circuitry can be packed onto a chip, not just a single system but an entire network of systems will make its way onto a lone piece of silicon. He dubbed the concept "micronetworks."

This trend is certainly visible in low-end systems, where chips (particularly for notebooks) come with video, sound, network, and other features built in. In that sense, I don't see what there is to "predict". The idea of a system-on-a-chip is already here. Sure, he's probably talking about even more integration, which is reasonable since he's from Sun Mic... well, you get the idea.

I think that it's a little premature to assume that individual components for devices and boards will disappear outright, for a simple reason: when you have different components created by different manufacturers, competition can be more partitioned among different areas of a system, resulting in better quality overall. It's not a coincidence that PCs, which are essentially a bunch of chips from many, many different sources brought together, are cheaper and faster today than anything else in their category. Hey, even Apple switched over to PCI eventually.

Speaking of Sun, there's an interesting article in today's Wall Street Journal (subscription required) on Sun's new strategy. The article's title ("Cloud Over Sun Microsystems: Plummeting Computer Prices") makes it sound as if it's going to be less harsh than it is. The writer all but declares Sun dead, as is customary these days--as an example, consider the quote: "The Silicon Valley legend that once boasted of putting the dot in dot-com is staring into the abyss". Staring into the abyss. Jeez. And then the writer adds snippets like: Sun is sitting on $5.7 billion in cash and securities. Heh. I'd have no problems "staring into the abyss" under those conditions. :)

The truth is that the picture that emerges from the information, the quotes from customers and Sun execs, etc., is less clear. I think there's simply confusion on the part of "analysts" who see everything as either-this-or-that when it comes to Sun's new ideas, which are a relatively new breed (and quite a gamble, one might add). It's going to take one or two more years to see whether Sun is really going to get through this or not.

One of the problems that the article focuses on is Sun's chaotic behavior during the past two years, as the new strategy was developed and put in place:

In December 2002, Jim Melvin, chief executive of Siva Corp., a Delray Beach, Fla., firm that runs back-office systems for restaurants, wanted to invest several million dollars to build a corporate data center using Sun equipment. But when Mr. Melvin approached Sun, he found the company's restructuring was causing chaos. "Sun said call back in two months, because the guy I was talking to there didn't know if he'd still have his job," he says. Frustrated, Mr. Melvin bought IBM and Dell gear instead.
and which I saw for myself here and there. Sun also made the mistake of being a dot-com baby itself, instead of just selling stuff to dot-coms (as IBM did):
Sun compounded its problems by responding slowly to the slumping market. Even as tech spending dried up in 2001, the company increased its work force to 44,000 employees from 37,000. Other firms axed costs early, or launched big deals to remake themselves. Cisco Systems Inc. cut nearly 20% of its work force beginning in March 2001. H-P launched a controversial purchase of Compaq Computer Corp. that same year, and has since slashed $3.5 billion in annual costs.

Sun put the brakes on the hiring in the fall of 2001 and trimmed around 10% of its work force that October -- the first of several big cuts. But by mid-2001, Sun's quarterly sales had dipped to $4 billion and it began reporting net losses. Its share of world-wide server revenue fell to 13.2% in 2001 from 17% in late 2000, according to Gartner.

The new strategy does make sense. As McNealy is quoted as saying in the article, "we're long on strategy. If we execute well, we'll do just fine."

Exactly.

Categories: soft.dev, technology
Posted by diego on October 16, 2003 at 9:16 PM

O2's and Vodafone's "idiot upgrade package"

"...nk you for calling O2 Ireland. Your call is important to us. Please stay on the line and a customer care representative will be with you shortly."

I was walking through Dublin this afternoon and I saw that finally O2 and Vodafone released the Nokia 3650 here (Not bad eh? Only 9 months after the rest of the world). Hey! I think. Cool.

I walk into the store. "Hi", I say to the excessively smiling employee, "I am an O2 customer and I was wondering what was the price to upgrade my current phone, which I have with a contract with O2, to a Nokia 3650."

"Alright, sir, what's your phone number?"

So I give her my details and she checks the account. "Sorry. But you are not eligible for an upgrade until December."

"December? But I've had this phone since November 2001."

"Exactly. You have to be on a contract for more than two years to be eligible for an upgrade."

This was already going badly. Two years? What the hell? It wasn't as if my current phone was heavily subsidized or anything, I had paid nearly list-price to get it with a contract back then (almost 200 Euro in 2001).

Anyway, I say, "Okay, so what will be the price for the upgrade?"

At this point, we must keep two things in mind: 1) The SIM-less phone, new, off Amazon or a store a block away from where I'm standing, costs 500 Euro. And, 2) If you get a new contract with O2, with the Nokia 3650, the up-front price is Euro 230.

Okay. Ready for the reply?

"The upgrade will cost you Euro 329."

"I'm sorry?" I say.

"329 Euro for the upgrade."

I turn around and point at their display with phones and prices. "But over there it says that getting a new phone costs 230."

"That's correct."

(Whenever they get so formal in their speech you can almost hear the echo "Right, you moron, so what's new.")

So I say: "That doesn't make sense. I've been an O2 customer for 2 years. On a billing plan. Always paid in time. And when I want to upgrade, you want to charge me more than anyone else? Why would I do that?"

She says, "Well, you get to keep your number."

Right. That's it? I say, "Well, with number portability I can move over to Vodafone for less than that."

She looked confused for a moment. Then she blurted out. "Well, but Vodafone doesn't have many phones."

LOL. At this point I just said, "Okay, thanks," turned around, and left.

Off to Vodafone. Over there another friendly employee told me that moving over from O2 would cost me Euro 270 plus 30 a month (which is what I'm paying with O2 anyway). So the price is still outrageous, but I save 60 Euro that way. (Btw, I asked at Vodafone for GPRS data rates. The answer? "Cheap. Cheap." How cheap? "Oh, only 20 cents per kilobyte." I don't need to explain my reaction).
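For anyone wondering why "20 cents per kilobyte" got that reaction, the arithmetic is brutal. A quick sketch (the 5 MB/month figure for light email and web use is my own rough assumption):

```python
# Back-of-the-envelope check on that "cheap" GPRS quote.
RATE_EUR_PER_KB = 0.20  # the quoted rate: 20 euro cents per kilobyte

def gprs_cost(megabytes, rate=RATE_EUR_PER_KB):
    """Cost in euro of transferring `megabytes` at a per-KB rate."""
    return megabytes * 1024 * rate

print(f"1 MB costs EUR {gprs_cost(1):.2f}")        # EUR 204.80
print(f"5 MB/month costs EUR {gprs_cost(5):.2f}")  # EUR 1024.00
```

Over two hundred euro for a single megabyte. "Cheap. Cheap."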

When I get home, I call O2 "Customer care", which after several minutes of the friendly message quoted at the beginning and a number of menus, gets me to a real person. We go through the same routine. She patiently explains that that's the price. Yes. Correct. 330. Yes. So when I ask, why would I stay with O2 then, she says, "Well, you get to keep your number."

Again, that ridiculous answer.

I pressed with the question. "I'm sorry, but that doesn't make any sense. I can switch over to Vodafone and because of number portability I can keep exactly the same number." The new answer had more information. "You see," she said, "when you want to upgrade with Vodafone they'll charge more too."

So I said, "Sure. Maybe. But right now I save money by switching. And if they want to pull that later, I'll switch over back to O2 again and you'll be friendly enough to charge me less."

Silence. "Well, yes. That's your choice."

"Okay," I said, "I just thought I was wrong, but I realize that this doesn't make any sense at all. Thanks."

And there it was.

In essence, both O2 and Vodafone have what amounts to the "idiot upgrade feature" which probably reads in some internal memo as follows: "If customers have been stupid enough to pay for our overpriced services for two years, surely they will be stupid enough to pay for an upgrade at a price higher than a new phone as well."

They're so blind, so engrossed by their almost illegally stratospheric profit margins, that they can't see that in a couple of years people will be using IP-based systems for everything. If they don't bring down GPRS pricing, then we'll use the WiFi chip in whatever device is easiest (and phones with GPRS+WiFi are not far behind--some handhelds already do it). Their precious "you'll lose your phone number" will be worthless. Already email addresses, IM nicknames and such are on equal footing with phone numbers as far as "personal IDs" are concerned. The importance of all-digital IDs will only increase as more and more IP-based services go mobile.

Conclusion: they prefer people to alternate between them every two years even if that costs more money for them (signing a new customer, after all, is more expensive for them than just changing the phone on an existing customer). I'll certainly go that way. And eventually, they won't matter anymore. And in a few years, they will be pulling their hair out wondering why their customers don't seem to care about phone numbers any more.

But then it will be too late.

Categories: technology
Posted by diego on October 15, 2003 at 6:30 PM

hierarchy on OPML subscription lists

Dave is asking about hierarchy on OPML subscription lists and what aggregator developers think. Today, clevercactus beta3 (not yet released publicly) supports hierarchy in OPML (both for generating and reading) as follows:

  • If an outline element only has a title, then it is assumed to contain other outline elements inside and should be closed with a /outline tag.
  • If not (that is, if URL information is included), then the outline is self-contained and assumed to be a link within the current position in the hierarchy.
This, by the way, is the same as what Newzcrawler does, so that would make two aggregators that work in the same way. Not sure about the others though.
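A minimal Python sketch of those two rules, walked over a tiny made-up subscription list (the feed titles and URLs are placeholders):

```python
import xml.etree.ElementTree as ET

# A tiny OPML subscription list; titles and feed URLs are invented.
OPML = """<opml version="1.1"><body>
  <outline title="News">
    <outline title="Example Feed" xmlUrl="http://example.com/rss.xml"/>
  </outline>
  <outline title="Solo Feed" xmlUrl="http://example.org/index.rdf"/>
</body></opml>"""

def walk(outline, path=()):
    """Apply the rules above: an outline with no URL is a folder
    containing other outlines; one with a URL is a self-contained feed
    link at the current position in the hierarchy."""
    url = outline.get("xmlUrl") or outline.get("url")
    title = outline.get("title", "")
    if url:
        yield path, title, url
    else:
        for child in outline:
            yield from walk(child, path + (title,))

root = ET.fromstring(OPML)
feeds = [f for o in root.find("body") for f in walk(o)]
print(feeds)  # each feed paired with its folder path
```

An aggregator reading the list this way recovers the folder tree; one that ignores nesting still sees every feed, which is what makes the scheme backward-compatible.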

Categories: technology
Posted by diego on October 14, 2003 at 2:12 PM

a weblog of post-its

Now, I'm not a Radio user but this theme for Radio that Cristian has just released makes me wish for a moment that I was just to try it. Looks really cool. I probably wouldn't use it directly, but there are many elements in it that I find intriguing. Ahh if only I had some time to do a full set of CSS options for d2r...

Categories: technology
Posted by diego on October 13, 2003 at 1:51 PM

so that's why they find it so easy to hack Windows?

More on Microsoft and its security problems: a News.com interview with Bob Muglia, a top exec at MS. What I find interesting about this interview is a) that Muglia is giving the same non-answer answers as others do, most notably Ballmer, and b) that the reporter doesn't let up. At all. Watch out near the end for when Muglia starts saying that hackers are criminals, and the reporter asks "what does that have to do with anything?". :-) Exactly, since they were talking about MS's problems, not about whether it was a crime or not, and whether they are criminals or not has nothing to do with the security problems in their software. Highly recommended.

Categories: technology
Posted by diego on October 13, 2003 at 12:30 AM

zeroconf/rendezvous

Matt has a cool list of links on rendezvous, Apple's version of Zeroconf, for dynamic DNS binding. Nice!

Categories: technology
Posted by diego on October 11, 2003 at 2:55 PM

psion-ce

Again, something that I missed: the specs (or is it the product itself?) for the long-rumored Psion/Windows CE device have been published (slashdot thread here). The name: NetBook Pro. As the happy owner of a Psion Series 7 (essentially the original netBook, but with half the memory and some ROM differences) I can't wait. Yes, it would be great if they were using Symbian. No, I don't care that they are not (as a consumer, that is--as a developer... well, that's another matter). The S7 is just fantastic. Mine goes without needing a recharge for weeks and weeks. Instant on. Perfect size and weight (although if the keyboard were a bit bigger I wouldn't complain). Pretty much my only problem with it is that the resolution is terrible and the applications aren't directly compatible with anything (although you can do conversions through the sync, that's not quite the same). And Psion would do well not to make the same mistake regarding sales channels that they made with the netBook--there was no way of buying it through Amazon or whatever, only through resellers and generally in quantity (it was a corporate product), which forced everyone to get the less-powerful S7. Something else to watch out for.

Categories: technology
Posted by diego on October 11, 2003 at 11:33 AM

the memory-overflow trial?

Totally missed this when it happened, and I've seen few comments on it in the blogosphere. I was reading this CNN/Money article on how Microsoft plans to "overhaul" Windows to "fight hackers" (News.com coverage here) and I noticed a link in the sidebar... it turns out that on Oct. 2, Microsoft was sued in California (coverage from News.com, Infoworld and CNN here, here and here respectively--and predictably enough there was a slashdot thread on it). Whether the lawsuit will actually go anywhere is anyone's guess--and there are other potential problems with this approach, which I mentioned here. I do find it interesting that this didn't generate more of a widespread response. Should be interesting to watch for developments on this.

And, regarding Microsoft's announcement--it apparently has to do with Microsoft's long-rumored Palladium initiative. The announcement is high on words and low on substance. The bad side of marketing...

Categories: technology
Posted by diego on October 11, 2003 at 11:07 AM

forget the typewriter, fire up those modems

Quite strange news from News.com's Declan McCullagh:

The FBI is convinced that I'm an Internet service provider.

It's no joke. A letter the FBI sent on Sept. 19 ordered me to "preserve all records and other evidence" relating to my interviews of Adrian Lamo, the so-called homeless hacker, who's facing two criminal charges related to an alleged intrusion into The New York Times' computers.

[...]

Leadbetter needs to be thwacked with a legal clue stick. The law he's talking about applies only to Internet service providers, not reporters. Section 2703(f) says in its entirety: "A provider of wire or electronic communication services or a remote computing service, upon the request of a governmental entity, shall take all necessary steps to preserve records and other evidence in its possession pending the issuance of a court order or other process."

Last I checked, electronically filing this column to my editors does not make me a provider of "electronic communication services." Nor does tapping text messages into my cell phone transform me into a "remote computing service," as much as I may feel like one sometimes.

Categories: technology
Posted by diego on October 10, 2003 at 2:00 PM

a couple of ms-related news items

I had seen this when it came out but since I was in no-blog-mindset I didn't note it. Then today I saw that Grant had linked to it and ...

I'm talking about this News.com article on Microsoft's newly obtained patent on IM. Quote:

Microsoft has won a patent for an instant messaging feature that notifies users when the person they are communicating with is typing a message.

The patent encompasses a feature that's not only on Microsoft's IM products but also on those of its rivals America Online and Yahoo. The patent was granted on Tuesday.

Isn't it weird that someone at the patent office would think that something like this is a non-obvious, never-done-before invention? Have these people actually used computers before? The article also mentions AOL/ICQ's patent, on which I wrote last year, in particular in this entry where I did a deeper search for prior art on that patent. It was interesting to revisit that in the context of this new patent as well.

Another MS news item that I found interesting was this one:

Web developers want to light a fire under Microsoft to get better standards support in the company's Internet Explorer browser, but they can't seem to spark a flame.

Gripes have mounted recently over support in IE 6 for Cascading Style Sheets (CSS), a Web standard increasingly important to design professionals. Web developers and makers of Web authoring tools say the software giant has allowed CSS bugs to linger for years, undermining technology that promises to significantly cut corporate Web site design costs.

Seeking to goad Microsoft into action, digital document giant Adobe Systems last week unveiled a deal to bolster support for CSS in its GoLive Web authoring tool with technology from tiny Web browser maker Opera Software, whose chief technology officer first proposed CSS nine years ago. Opera maintains an active role in developing CSS through the World Wide Web Consortium (W3C).

But standards advocates said it was unclear whether Adobe's action could prod Microsoft into better CSS support, given the lack of browser competition.

It was "unclear [whether Microsoft could be prodded]", said standard advocates. They have a penchant for understatement, it seems. :)

Categories: technology
Posted by diego on October 10, 2003 at 1:44 AM

MT spam killer

In the comments to the entry on Sunday where I was talking about comment-spam, mal posted a link (thanks!) to this solution for Movable Type. Looks pretty good---essentially equivalent to what Yahoo! and Hotmail and others do to ensure that accounts are not being registered by a bot. I've downloaded it but not yet installed it, since I'd have to rebuild all of the static pages with the comments section included. Something to do over the weekend...
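For illustration, here's a bare-bones Python sketch of the challenge-response idea behind that kind of plugin, minus the distorted-image rendering that makes it bot-resistant in practice. This is a generic model, not the plugin's actual code:

```python
import hashlib
import secrets
import string

def new_challenge(length=6):
    """Generate a random code for the user to type back, plus a hash
    of it that can safely travel in (or be stored with) the form."""
    code = "".join(secrets.choice(string.ascii_uppercase)
                   for _ in range(length))
    token = hashlib.sha1(code.encode()).hexdigest()  # store this, not the code
    return code, token

def verify(answer, token):
    """Accept the comment only if the typed answer matches the token
    (case-insensitively, since users will type lowercase)."""
    return hashlib.sha1(answer.strip().upper().encode()).hexdigest() == token

code, token = new_challenge()
print(code, "->", verify(code, token))
```

The real versions render the code as an image precisely so a script can't just read it out of the page the way this sketch could.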

Categories: technology
Posted by diego on October 9, 2003 at 11:35 AM

more on rss-data

Jeremy expands on the idea of RSS-data. Roger posts an example. Dave agrees. Russ explains a bit more. I'm getting it, I think. Still slightly fuzzy. Roger's example is useful but I'd like to see an actual use case (say, with calendars) to understand it completely. I guess my main confusion is the relationship between this and namespaces--wouldn't this idea make namespaces irrelevant? And is that good or bad, or indifferent? (My answer: I don't know yet). As I said yesterday though, it sounds cool. :)
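To make my own confusion concrete: as far as I can tell, the proposal amounts to embedding typed payloads in feed items using XML-RPC's serialization (struct, dateTime.iso8601, and so on) instead of a per-vertical namespace. A purely hypothetical calendar-event example in Python; the field names are invented and this may well not be what Jeremy means:

```python
import xmlrpc.client
from datetime import datetime

# Hypothetical "calendar event" payload; all field names are made up.
event = {
    "summary": "beta3 planning call",
    "start": xmlrpc.client.DateTime(datetime(2003, 10, 2, 15, 0)),
    "durationMinutes": 60,
}

# Serialize it with XML-RPC's wire format, which is (I think) the kind
# of <struct>/<dateTime.iso8601> blob that would sit inside an item.
payload = xmlrpc.client.dumps((event,))
print(payload)
```

If that reading is right, the serialization carries the types but not the meaning of the fields, which is exactly where my namespace question comes in.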

Categories: technology
Posted by diego on October 2, 2003 at 10:53 AM

breaking news: email is broken

Salon usually is ahead of the trend, but on this one they've lagged (quite) a bit: a new article, "email is broken", talks about the problems that we all know. That said, they do have a refreshing take in the form of interviews with Internet pioneers (though not the usual suspects, which is also cool). A good read overall.

The current workarounds to the problem (all of which are mentioned in the article) don't even scratch the surface of what can be done though!

Categories: technology
Posted by diego on October 2, 2003 at 8:40 AM

only open-source--but only on the desktop

This caught my eye: Massachusetts is moving towards favoring open source software over closed source.

I have a question: If they do that, then when their employees want to do a search on the web, which search engine, exactly, do they tell them to go to? Not Google of course. It's not open source. Not MSN. Not Yahoo. Er...

What? That a web application is less of an application? Why?

If you use something like, say, a Yahoo! service, is it less "use" because it's on the web? And why is it that it wouldn't matter then?

I am not saying that the policy is good or bad. Honest. I am just asking why "open source" is so important on the desktop but not for some other incredibly critical services that people use every day, all year long. And speaking of that, when was the last time that anyone got a look at the source code of NSI? I mean, they do happen to run the most critical services on the Internet...

Categories: technology
Posted by diego on October 2, 2003 at 12:07 AM

rss-data?

[via Dave]: Jeremy proposes RSS-Data. I'm sure my mind is still completely on another planet, because I've read his proposal three times and I don't get it.

It mentions namespaces. It mentions XML-RPC. If you use namespaces, why do you need XML-RPC? And how would a namespace define a "generic" data format when the discussion is specifically about "vertical" applications (which would, as far as I can see, require each their own namespace)? And if the XML-RPC stuff is used, it becomes really really generic. How is that "vertical" or specific to different types of data? I get the feeling that it might be some kind of tighter XML-RPC representation to perform web-services-style data transfer... but ... but...
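If I had to guess, the mechanics would be something like embedding an XML-RPC-serialized struct inside an RSS item. Python's standard library can show what that serialization looks like (the calendar event here is a made-up example, and the idea that it would sit inside an <item> is my reading of the proposal, not anything the spec says):

```python
import xmlrpc.client

# A hypothetical "vertical" payload: a calendar event as a struct.
event = {
    "title": "Team meeting",
    "start": xmlrpc.client.DateTime("20031002T10:00:00"),
    "durationMinutes": 60,
}

# XML-RPC serialization of the struct; presumably RSS-Data would embed
# something like this inside an RSS item, instead of defining a new
# namespace per data type.
payload = xmlrpc.client.dumps((event,))
print(payload)  # <params>... containing a <struct> with typed <value>s
```

The typing (<int>, <dateTime.iso8601>, etc.) comes for free from XML-RPC, which would explain the "generic" angle, but it also sidesteps namespaces entirely, which is exactly the part I find confusing.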

But as I said, quite obviously I don't get it. At all. It will be interesting to see more info on this. Sounds cool.

Categories: technology
Posted by diego on October 1, 2003 at 11:58 PM

atom is atom is atom

On a break I wandered off to the Atom Wiki and looked at the naming process. And, surprise, a message from Morbus to the atom-syntax list seems to have finally opened up the floodgates. From the Wiki: Let's just use Atom:

We could go through the voting process for the hundredth time. We could say that, come September the 30th, this format is called Nota or Zing, or whatever wins this vote... but with under 20 votes per name? We could go through all the rigmarole of properly vetting the candidate that actually wins, just to find that really it's no good anyway, and have to start voting for the [too much]+1th time.

But let's not, okay? Instead, Just Use Atom.

Yes. Most definitely. I agree. Not just because of the seemingly never-ending nature of the naming process, but also because the other names currently being proposed are dreadful.

Hopefully this will also re-ignite the "closure process" that is necessary for several parts of the Atom spec, which have been iterating with no end in sight.

Categories: technology
Posted by diego on September 26, 2003 at 11:13 AM

Karlin on AdSense

Karlin writes about Google AdSense in yesterday's Guardian:

I don't understand this smoked-salmon-socialist approach to personal websites. Many, if not most, of these people are the same ones who, until Adsense, loathed banner ads, not to mention the dreaded pop-up.

Now that they have caught the scent of cash, bloggers are more than happy to slap the damn things prominently on their webpages in the hope that their readers will do what they never do - click on through. As one blogger confessed on his blog, he hopes others don't use the banner-ad blocking software he uses, the faster to bring him his Google dosh.

I must admit that my immediate gut-reaction was similar, but it didn't last. I ended up taking the more, err... capitalist approach of saying that, well, if people accept the ads on the weblogs they read, then so be it. I also disagree with the blanket statement "ads are bad for weblogs". Her point about people who have turned around 180 degrees (people who previously derided news sites and such for using ads, only to suddenly get ad-religion) is a good one, though. Ah, the ironies of technology.

Ads are appropriate in some contexts, and they do help people pay for their hosting and maybe make a little money on the side too. After all, writing a weblog takes real effort. And if people are making some money with their ads, then it follows that readers must be interested in them, no?

Categories: technology
Posted by diego on September 19, 2003 at 6:17 PM

what's the solution again?

I was just watching this video of Steve Ballmer talking (supposedly) about how Microsoft is going to solve their security problems.

The summary goes something like this:

Blahblah blah security problems ... blahblah blah hackers get the information from our patches [yeah right!] ... blahblah blah to solve this we need innovation blahblah blah ... and customer education... blahblah blah ... innovation blahblah blah we believe viruses should be stopped before they get to the computer [read: we are not going to fix those memory overflows or our engineering processes, we are just going to give you chain and a couple of locks to put around your house... and if you don't like living in a cage, well, too bad.] blahblah blah ... innovation blahblah blah the whole industry needs to innovate blahblah the solution is innovation... blahblah blah innovation blah blah .... innovation [and it goes on like this]...

So, that's great! Apparently the solution to insecure runtime environments is innovation!

What's the URL for that? Or do I get it on a CD or what?

Seriously, though, I thought it was the performance of a spin-addicted politician, rather than a CEO of a technology company.

I would challenge anyone to explain in two short sentences what, exactly, Ballmer said.

You can't, because he didn't say anything. Pure rhetoric. Lots of obvious points ("we need to improve the entire patch management process [...] we have to continue to improve." Yeah, no kidding, Steve). Again, pure rhetoric. No content.

I suddenly remembered this excellent, excellent article by Cringely from a couple of weeks ago: The Innovator's Ball. Note this quote:

[...] there is another issue here, one that is hardly ever mentioned and that's the coining of the term "innovation." This word, which was hardly used at all until two or three years ago, feels to me like a propaganda campaign and a successful one at that, dominating discussion in the computer industry. I think Microsoft did this intentionally, for they are the ones who seem to continually use the word. But what does it mean? And how is it different from what we might have said before? I think the word they are replacing is "invention." Bill Shockley invented the transistor, Gordon Moore and Bob Noyce invented the integrated circuit, Ted Hof invented the microprocessor. Of course others claimed to have done those same three things, but the goal was always invention. Only now we innovate, which is deliberately vague but seems to stop somewhere short of invention. Innovators have wiggle room. They can steal ideas, for example, and pawn them off as their own. That's the intersection of innovation and sharp business.

Yes, Microsoft is an innovator and I don't think that is good.

Exactly.

I can't help but compare it to McNealy's keynote the other day. While McNealy is a bit dry as a speaker, he actually talks about solutions. He doesn't descend into useless generalities (I can imagine Ballmer talking about famine in Africa: "eating some food every day is good for staying alive... we need more innovation... people shouldn't have trouble getting food... innovation... we need to improve things... innovation..."). McNealy doesn't say "let's get more customer education." He doesn't imply that "the way to fix viruses is to hide your computer in the closet and disconnect it from the Internet". He doesn't utter the word innovation every two milliseconds.

It's sad that Microsoft, instead of using their tremendous resources (both human and financial) to actually fix problems and invent new stuff and create new ways of thinking, are more interested in spinning the situation and proposing that somehow the best way to create security is not to fix the obvious and widespread problems in the architecture of Windows, but rather to "not fix the backdoor, but secure the front door" (whatever that means--If the "front door" is also running Windows we'd have a problem again, wouldn't we?).

What's next? Force everyone to stand heavy weaponry next to their Ethernet cards, you know, just in case? (With "customer education" of course: "if your PC is attacked, shoot the cable immediately!" and so on...)

Anyway.

Categories: technology
Posted by diego on September 19, 2003 at 12:54 PM

microsoft-motorola announcement coming up

Mobitopia

ms-moto-att.jpg The Wall Street Journal is reporting today that Microsoft and Motorola are set to announce their partnership tomorrow:

[T]he partnership also highlights the challenges Microsoft faces in that quest as its new partner loses market share amid stiff competition in the mobile-phone business.

The companies expect to announce that Motorola will begin building phones that run on Microsoft's Mobile Windows software, according to executives at the companies. The phone, a "clamshell" style unit, will be the first phone sold in the U.S. that runs the Microsoft software.

AT&T Wireless Services Inc. plans to offer the unit to its subscribers beginning later this year, according to an executive at the mobile operator.

[...]

With Motorola as a partner, Microsoft now has support from the world's second-largest handset maker, behind market leader Nokia Corp. of Finland. To date, Microsoft has relied heavily on HTC Corp. of Taiwan to manufacture its phones, which are mostly sold in Europe.

Before Motorola, the only other major handset maker to have announced support for MS's Smartphone was Samsung. Samsung, however, has pushed back the release of their MS-powered phone repeatedly (according to them, due to technical problems). I wonder if Motorola will run into the same problems--probably not, since the MS mobile phone software has gotten relatively decent. As the article notes, apparently the announcement will include AT&T Wireless as a carrier, but Orange had also been rumored to be ready to deploy Motorola-MS phones.

We'll have to wait until tomorrow to see if Motorola's phone will feature any of the good stuff we've gotten used to with Symbian (such as Bluetooth). Based on the picture above, it seems that at least it will have a built-in camera with what I expect will be support for Windows Media and such. Various news sites are reporting, however, that it doesn't have Bluetooth, Java, or a camera. Strange.

Categories: technology
Posted by diego on September 15, 2003 at 2:13 PM

ping yourself!

Unbelievable. How a solution can be staring you right in the face and you still don't see it.

Call it a blinding flash of the obvious.

One of the problems I tend to think about when writing follow-ups to things I've already discussed is that, while you can point back to a previous entry, you don't really want to go back and modify the original entry to point forward to the follow-up, which would let readers trace the evolution of an idea. Now, I think there are plugins for MT that let you specify "related" entries, but I've never gotten around to investigating them (I tend to prefer the minimal amount of plugins and extensions necessary, both in software I use and software I write).

Now, as I was posting my previous entry on RSS discovery, I just realized that I could simply ping the entry through trackback and so provide a pointer to the follow up. Simple. Effective. To the point.
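The mechanics are simple, too: a TrackBack ping is just a form-encoded HTTP POST to the entry's TrackBack URL. A minimal sketch in Python (the URLs below are made up; the field names are the ones from the TrackBack spec):

```python
from urllib.parse import urlencode
from urllib.request import Request, urlopen

def build_ping(tb_url, entry_url, title, excerpt="", blog_name=""):
    """Build a TrackBack ping request: a form-encoded POST with the
    url/title/excerpt/blog_name fields defined by the TrackBack spec."""
    data = urlencode({
        "url": entry_url,
        "title": title,
        "excerpt": excerpt,
        "blog_name": blog_name,
    }).encode("ascii")
    return Request(tb_url, data=data,
                   headers={"Content-Type": "application/x-www-form-urlencoded"})

# Hypothetical URLs; actually sending the ping is just urlopen(req).
req = build_ping("http://example.org/mt/mt-tb.cgi/123",
                 "http://example.org/archives/follow-up.html",
                 "a follow-up entry")
```

MT does all of this for you when you ping an entry, of course; the point is that nothing in the protocol cares whether the pinging entry lives on the same weblog or a different one.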

I can come up with all sorts of reasons why I've never seen this before (like "Oh, well, you've always thought of trackback as a useful tool to connect different weblogs"), but the reality is that this is so incredibly obvious that... I can't understand at all why I didn't see it before. :-)

Anyway.

Isn't trackback great?

Categories: technology
Posted by diego on September 14, 2003 at 6:50 PM

robotize!

AIBO-ERS7 As usual, for cultural (rather than technological) reasons, Japan is at the forefront of this kind of thing:

Mrs Tanaka is 84. Today, as usual, she wakes just before 7am, slips on her dressing gown and flips a switch to start water boiling for her first green tea of the day. She's about to get dressed when she pauses. She turns to the low table near the door, where a soft toy sits incongruously, and greets it in her distinctive west-Japan accent.

"Good morning Teddy. How are you today?" "Pretty good, thanks Tanaka-san," comes the reply. "Have you remembered to take your pills? It's the pink ones this morning," the robot bear continues.

A scene from AI 2 or a vision of a slightly over-cooked future nanny state? Actually, it's here and now in Japan.

And, yes, I'm one of those that would get an AIBO if I could...

Btw, I just realized that William Gibson, for all his understanding of Japanese culture and his uncanny ability to "see beyond", has never made much of personal robots in his novels or stories--even if one might argue that the trend is just too new, it precedes both All Tomorrow's Parties and Pattern Recognition, so a tiny mention would have been expected. Or maybe by now they are too mainstream to qualify for the wonderful techno-kitsch of some of his characters. I wonder.

Categories: technology
Posted by diego on September 12, 2003 at 8:12 PM

the mobile platform wars

Mobitopia

So it only took a couple of days since Motorola bailed out of Symbian for a rumor to surface that they would be releasing a "Microsoft-powered" phone, sold through Orange, later this month. Recently there was another rumor (or news? I can't find a link) that Psion was moving to WinCE for its mobile devices. In the meantime, Linux is making inroads into all sorts of devices. Oh, and, by the way, PalmOS is down but not out yet.

See a pattern emerging here?

Call it the mobile platform wars. In which Symbian has the tactical advantage, but is, strategically, in an entirely different position.

Symbian, by design, allows its licensees to tailor their offering heavily for different devices, which is great for licensees short-term (allows them to obtain early lock-in on features), but not so great for Symbian as a long-term platform. Long term, a platform cannot survive like that, and by extension neither can its licensees. The platform splinters irreversibly, because even though the licensees achieve short-term early lock-in on features, the platform itself has no lock-in.

Symbian won over Palm through faster innovation and larger deployments, but now Symbian is the incumbent, and the game is different. The new entrants are not competing on features, but on platform homogeneity.

It has happened before: Look at UNIX in the 80s.

In theory, you could port applications between UNIX OSes by sharing more than 80% of the code between them through a "standard" called POSIX.

In practice, almost no one did it.

And so the POSIX-UNIX "standard" allowed itself to be overtaken by both Linux and NT. Because once platforms are established, it all goes back to third-party developer support. Why? Because users care about applications and devices, not about OSes. They don't care if, say, the memory space is 32-bit flat. Developers do. Which is why third-party development should be active and growing for any long-term platform.

Which requires a vibrant community. Which requires tinkerers and small developers, as well as big developers. Which requires simplicity and portability within the platform, and a low-cost of entry (read: free, well-documented, well-supported, entry-level tools).

And is it a coincidence that, again, both a Windows variant and Linux are emerging as the greatest threat to an innovative platform? I don't think so.

Netscape, by the way, made similar mistakes with regards to developers. And they played a small but important role in their fall from grace.

But Symbian is improving, and listening. It isn't over. Yet.

Even if it comes to the worst, I can't see Symbian ceasing to exist. When you're talking about millions of devices sheer volume wins, so it's quite possible that in one or two years Nokia will just end up owning Symbian outright and it just will be Nokia vs. Microsoft.

But since Nokia is one platform and one vendor, Java would be a better choice.

Paradoxically, excellent Java support might allow Symbian to prosper by providing the best Java mobile platform around. Palm might yet come around as well and figure out that Java is their best weapon to fight the Microsoft/Linux juggernaut.

And in that case, once again, the only thing standing between us and yet another monoculture would be that sweet smell of digital coffee.

Categories: technology
Posted by diego on September 5, 2003 at 12:11 PM

spam gets weird

Received today:

Dimensional Warp Generator Needed

Hello,
I'm a time traveler stuck here in 2003. Since nobody here seems to be able to get me what I need (safely here to me), I will have to build a simple time travel circuit to get where I need myself. I am going to need an easy to follow picture diagram for a simple time travel circut, which can be built out of (readily available) parts here in 2003. Please email me any schematics you have. I will pay good money for anything you send me I can use. Or if you have the rechargeable AMD dimensional warp generator wrist watch unit available, and are 100% certain you have a (secure) means of delivering it to me please also reply. Send a separate email to me at: [someemailaddress].
Do not reply back directly to this email as it will only be bounced back to you.

Thank You

LOL! I had received this once before, a while ago. I assume the purpose of this is to add your email address to a list for selling later... but if they already have the address to send this to... what are they doing? Confirming it? Who knows. Anyway. Definitely a weird and quirky spam-meme. Even weirder than the infamous Nigerian scam!

Ok back to work.

Categories: technology
Posted by diego on September 2, 2003 at 10:26 AM

a linux tale

sshot-2-small.png I started using Linux back in early 1994, when I got a distribution on eight high-density floppy disks, with one of the first versions of LILO (the kernel was the "classic" 0.91pl14 if I remember correctly). I had been "converted" to UNIX a little bit earlier when I got my first job and I learned how the UNIX kernel worked, its design principles, and its inherent beauty. :-) Suddenly, I could see. I became a UNIXhead (TM), but of course (as mostly everyone else) I had to deal with Windows.

I used Linux heavily at work and at home for the next few years, mostly in dual-boot environments (before Red Hat, my favorite was the Slackware distribution-of-distributions, and the popular-for-a-while Plug and Play Linux from Yggdrasil Computing, remember that one?), until the versions of Java got completely out of sync (particularly on the graphics side of things, with the arrival of Java2D) and I couldn't use it as a decent Java platform anymore. In my last company in the US I installed Linux (RedHat 6 methinks) for use in server-side applications and as a NAT/transparent firewall until the company hired a sysadmin and they decided to simplify (right...) and went with an all-Windows infrastructure.

By the time I left the US the Linux port of Java was respectable (certainly for server-side work) but a lot (most?) of what I was doing was client-side, so, again, no luck. I stayed with Windows (besides, I needed to make sure that then-spaces ran properly on Windows first). I switched to WinXP at the beginning of 2002 after a friend convinced me that, yes, it was slightly better than Windows 2000, and ClearType made it a good option on LCD displays. Then over the last two years or so I used Linux for testing, basically using an old version since getting a new version was too much hassle, and I (thought I) didn't really have the time, and I guess that inertia got the better of me as well.

But then...

Fast-forward to last week and the attack of the Windows worms that wrecked my mail (since I activated the filters on my server 24 hours ago, 5,000 messages---yes, that's five thousand---have been rejected), and me getting pissed off enough at Microsoft that I thought that maybe change was in the cards. So I downloaded Red Hat 9 (took about 5-6 hours I guess?), burned the install ISOs, backed up my notebook, and started the install last night after I got the email situation sorted out.

I started selecting the packages to install, and in the end I decided that the "Install Everything" (4.7 GB) option was probably best, since I wanted to check out KDE, Gnome, etc., and I always end up using the development tools in Linux anyway. The full install took maybe 3 hours. After it was done, a tool called "kudzu" detected the hardware that hadn't already been configured (including a 3COM modem and the sound card, among other things) and after that X launched with no problems. I had to do some tweaking later to optimize the use of the LCD (which is the only thing that wasn't properly detected, and ThinkPads have pretty high-resolution LCDs), but aside from that everything was fine. The next step was configuring my 802.11 (aka WiFi) card.

Coincidentally, I had gone through the config process of a WiFi adapter against my WAP only last week (with another machine), but that machine was running Windows XP Pro. It was a nightmare. The configuration options were impossible to decipher, including typing the WEP key. In all, it took more than half an hour, and in the end the connection was flaky: it kept dropping and had to be manually reconnected, which was, as you might imagine, a royal pain.

So I was a bit apprehensive about the Linux config; as it turns out, needlessly so. I just had to go to Main > System Settings > Network and the network device control panel showed up. I selected the wireless card, which had already been detected (a Linksys WPC-11 Type-II PC Card, which Linux identified by its chip, Orinoco), typed in the hex WEP key, chose DHCP to use my NAT gateway, and bingo. Select Save, restart the network interfaces. Done.

It just worked. After that, even though the connection sometimes dropped (rare, but it happened) it reconnected on its own (seriously, who came up with the idea of manual reconnection in WinXP??).

I was slightly shocked. Could it really be that easy? It was 2 am, my eyes burned slightly, but I got more ambitious. What else? I thought. Now I had Internet access...

I started with Firebird, to replace the bloated sluggishness of Mozilla. Loaded Evolution just for fun <wink> and closed it again, while I got Firebird set up, which magically replaced my links to Mozilla itself (I'm not even sure how that happened... something to do with the Nautilus desktop--but I don't care). After Firebird I downloaded JDK 1.4.2 for Linux. Strangely enough, all the Linux distributions I've seen install one of the crappy open-source implementations as the default Java (don't get me wrong, I find the work done on them amazing, but the truth is that they're not up to par in terms of Swing, Java2D, etc., and they're always one or two major versions behind). I suppose that might have to do with Sun-licensing stuff. Anyway, so I got Sun's distribution (RPM), installed it, all fine. The default java still pointed somewhere else but I didn't want to spend time then to look at how to replace it.

Now that I had a proper Java 1.4.2 I got the latest binary of clevercactus (internal release :-)) and ran it. Setup was a breeze, and it might have been a subjective hallucination but I thought that it looked excellent, and performed flawlessly. I was impressed. Heh.

Okay, enough with the self-promotion! After I got cc running I configured it to get my email, etc. (the cactus-to-cactus sync is still not complete) and then moved on to the development environment. I got the latest version of IDEA (3.0.5--plus updated website!), which I still prefer to Eclipse for reasons that are probably irrational but make perfect sense to me. That took a bit longer (longer download, a couple more things to set up).

Then some cvs configuration (against the CVS server for sources) and I was basically done. I still have to look for a few other things (among them, an IRC client, which I'm sure is around here somewhere), and it should be enough.

So now I've just finished updating the packages in my system with up2date (take that, Windows update! :-)) and am now enjoying a number of luxuries that I had forgotten, for example:

  • A decent shell, with proper scripting and regular expressions (bash2)
  • A decent built-in text editor (vi--I'm not an emacs kind of person)
  • A decent multi-screen window system (X)
  • A decent thread/task manager (i.e., an OS kernel that doesn't kill the entire machine even when one process is running at 100%)
  • Good screensavers :-)
  • gcc at my fingertips!
Among other things, like the cool desktop gizmos that Gnome/Red Hat has (I'll comment on that in another entry). And all of that, with excellent, zero-config Windows compatibility. Example: I go to Main > Network Servers and open the Windows server. Get files. Then I copied over a PPT file from the Win machine, double-clicked it, and it opened. I can run the presentation, edit it, whatever. Everything just works. (Okay, and when it doesn't, there's tons of docs on how to do things, much more accessible than their Windows equivalents.) It's definitely very, very close to being something that absolutely anyone could set up. The only problem that remains is that, when something goes wrong, you end up having to go to consoles, which I actually enjoy at times, but most people would be completely baffled by it. In Windows you just get a catastrophic error, and that's that. The console is harder to use, but it actually lets you fix the problem. :-)

Probably the only thing that I truly miss is cleartype within Firebird (Mozilla seems to have better font display for some reason). But so what.

I still have WinXP on the other machine, but I am beginning to wonder if that's necessary too. If I get VMWare for Linux, and then run WinXP in there... not now though, this has taken about 12 hours total--not bad, but there's a ton of things to do aside from this... we'll see.

One more thing: It's good to be back :-)

Categories: technology
Posted by diego on August 26, 2003 at 1:39 PM

AOL goes blogging

Finally, it happened. Now let's see how long it takes Microsoft to jump in the fray... Yahoo! has already started dipping its toes....

Categories: technology
Posted by diego on August 26, 2003 at 2:57 AM

a few minutes ago...

...I ran Gnome on Red Hat 9 for the first time. The last version I had used, for testing clevercactus, was 7.2.

I. Am. Speechless.

Or, in the spirit of what Comic Book Guy said once: "Vision ....blurring .... balance... failing... can't... go on.... describing.... symptoms...!"

(bonk)

Categories: technology
Posted by diego on August 25, 2003 at 10:54 PM

config-day

At around noon, I thought the filters were more or less working. I was right. They worked so well, in fact, that even I wasn't allowed to send any email. Hmpf. More configuration. Verifying whether it was a problem with clevercactus in particular (it wasn't). Removed a couple of hostname checks that, while useful, prevent clients behind firewalls from sending email (because it can't do a reverse lookup on their name). Then spent another bunch of time until I realized that the pop-before-smtp process had died at some point, and that meant my client wasn't being properly authorized.

Anyway. On to Linux now. Verified that my 802.11 card works ok. Ready. Funny that a year ago I had shed the first layer of monopoly-skin (and I was using spaces back then, even though I hadn't mentioned it yet :-)).

Categories: technology
Posted by diego on August 25, 2003 at 7:11 PM

mail's back

Okay, so, I admit, I wasn't ready to ditch email yet. This morning, the reasons remained: whenever I connected to my mail server I received a flood of messages, including the virus, spam, and "rejected" messages from addresses that had received the virus with my own address spoofed as the sender.

But even as the problems remained, I needed email, not least to reply to the clevercactus-dev list, and even do some work. Last week's crisis led me to think in new directions, and maybe we'll be able to come up with a good solution for this problem (or part of it). In the meantime, I had to get my mail back. I had no choice.

So I breathed deeply and started looking for configuration options for postfix, the mail server that I use. I found good information here, here and especially here. I started adding options and it took me some time to get them running; in particular, the regular expressions that parse both the headers and the body of the message were a bit of a pain to get right, as usual. (I am now, for example, rejecting EXE, PIF, BAT and other MS-virus-related attachments; knowledge of MIME and how attachments are usually encoded has its uses :)).

I still have to tweak things a bit, but in principle it should be back to normal. Interestingly enough, most of the messages are being rejected with the error "Helo command rejected: need fully-qualified hostname". I wonder if this will affect legitimate email (I am not entirely sure which of the postfix settings is requiring this, maybe it's "reject_non_fqdn_sender" or "reject_non_fqdn_recipient" or "reject_unknown_client"...).
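For reference, the kind of configuration involved looks roughly like this (a sketch, not my exact config; the directive names are real postfix options, but the values and regexp are illustrative). As far as I can tell, it's the non-FQDN check in the helo restrictions, specifically, that produces the "need fully-qualified hostname" rejections:

```
# /etc/postfix/main.cf (fragment)
smtpd_helo_required = yes
# reject_non_fqdn_hostname is what produces the
# "Helo command rejected: need fully-qualified hostname" rejections
smtpd_helo_restrictions =
    reject_invalid_hostname,
    reject_non_fqdn_hostname
smtpd_sender_restrictions =
    reject_non_fqdn_sender
mime_header_checks = regexp:/etc/postfix/mime_header_checks

# /etc/postfix/mime_header_checks
# reject messages whose MIME headers carry an executable attachment name
/name=[^>]*\.(exe|pif|bat|scr|com)/ REJECT executable attachments not allowed
```

Since the helo checks apply before any mail data is transferred, they're cheap; the MIME header checks are what actually catch the virus payloads.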

If you're trying to contact me via email and you can't, leave a comment here--also, if I haven't replied in the last few days just give me a few hours as I go through my queue.

Update: in the roughly three hours since I completed the filtering configuration, postfix has rejected 1128 emails, most of them infected with the Sobig virus. One thousand one hundred twenty-eight! Jeez. Anyway, it feels weird now. Like the quiet right after the storm has passed.

(In the time it took to write the previous paragraph, another nine invalid emails were bounced!)

Categories: technology
Posted by diego on August 25, 2003 at 10:20 AM

on google and the markets

From News.com:

Despite frenzied speculation of an imminent Google public offering, company co-founder Sergey Brin said he's still casually debating the pros and cons with board members and has not yet set a date.
Given recent irrational behavior on the part of the markets (you know, P/E ratios that don't make any sense... stocks of money-losing companies that quadruple in value in a matter of weeks... things of that nature), a Google IPO could easily set off a chain reaction. It wouldn't last though--there's not a lot of money left to lose--and a few people would make a lot of money.

That aside, I was thinking today: how likely is it, instead, that Google will be acquired? .... No, not AOL--not enough cash, too many problems. But Microsoft? I admit, it's far-fetched, highly unlikely, etcetera. But they could make "an offer they couldn't refuse". It's left as an exercise for the reader to figure out, in this hypothetical scenario, who would be Don Corleone, and who would be Luca Brasi. :-))

Categories: technology
Posted by diego on August 25, 2003 at 1:55 AM

novell's strategy

It's been a couple of weeks since Novell's announcement that it was acquiring Ximian. Since then, it has posted its results for the most recent quarter, including a net loss and a number of layoffs. Some of the comments to my previous entry were interesting in that they mentioned Novell's "Linux moves" and I must admit that I was sort of blindsided by this. But as this interview with Novell's vice-chairman Chris Stone makes clear, the Linux play is something that Novell has been working on for quite a while.

Putting two and two together now, it seems that Novell wants to a) mount a strong challenge to generic Linux by leveraging its NetWare brand and technologies, including Ximian's Red Carpet software, and b) complement that with a competitive move against Microsoft at a higher level (UI, messaging) through Ximian's products. The interview makes clear that there is a nice interaction between what Ximian has and what Novell has, with no overlap. Even the Mono project (to create an open-source implementation of a .Net-like environment) fits: Novell understands development tools, toolkits and environments quite well, since NetWare and its NetWare Loadable Modules (NLMs) were once an important development platform for networked services.

I have to say: it actually seems to make sense. And all of it, without hype or pretense. Amazing.

Categories: technology
Posted by diego on August 24, 2003 at 3:59 PM

openoffice for OS X... in 2006

Speaking of, er, alternatives. Got this on the #mobitopia channel yesterday: OpenOffice for Mac OS X delayed until 2006.

Plus the new version of OO by 2005? What?!? The Linux kernel itself has had delays, but this is too much.

Ridiculous.

And then we make fun of Microsoft for running over 2 or 3-year deadlines...

Maybe it's time for a renewed Corel to come up with some real competition for MS Office?

Categories: technology
Posted by diego on August 24, 2003 at 2:35 PM

now downloading...

...all 1.4 GB of Red Hat 9. The plan is to attempt a full, clean install of it on my Thinkpad laptop. No Windows partition whatsoever. No FAT or NTFS. Between IDEA, OpenOffice, Firebird, and clevercactus, I should have all I need. We'll see how it goes.

Categories: technology
Posted by diego on August 24, 2003 at 2:02 PM

it's not the users, part 2

I was typing this as a comment but it got to be just too big. So here goes. References for this entry are the comments in my entry on software, developers, and users, in particular those by Bo and Roger.

First, thanks everyone for the comments. Now, my replies:

Bo said:

Are you willing to guarantee that your program behaves exactly the way it's supposed to on the infinite configurations of sofware and hardware out there? Are you willing to guarantee your program won't one day do something stupid leading to great monetary loss? What will be the EULA on clever cactus?
While liabilities and guarantees are legal matters, what I am talking about is simply a question of taking responsibility. Instead of owning the problem and saying "We'll fix this for you, it's our fault", Microsoft says, "It's YOUR fault. But we'll see what we can do."

And it can be fixed. For example, any content downloaded from the Internet could be placed under quarantine and, as Christian said, scanned. Even then, it could be run within a sandbox, for example NOT allowing access to your entire web browsing history, your cookies, or all the network drives it could infect. I don't know, there are a thousand things that could be done, but Microsoft hasn't done ANY of them since they started "trustworthy computing" almost two years ago.

In fact, can anyone name a single clearly defined advance that "trustworthy computing" has brought to Windows and/or Office? This would have to be something added to Win XP SP 1 or one of the updates to Office XP. And how is it that, well after this "initiative" began, they still keep finding buffer overflows all over the place, sometimes across ALL VERSIONS OF WINDOWS (!) including the "ultra secure", recently released Windows 2003? Yes, they've got millions of lines of code. But they also have thousands of developers. Surely asking each person to run Purify and a properly defined set of tests on each of their modules is not too much to ask for.

Even their new developments are not secure: they come up with C#, which is "secure like Java", and then break the security of the environment by letting the developer mess with memory directly. Result: I guarantee that there will be C# buffer overflow worms. Why is that? Because they don't care about security. Don't tell me that it's a compatibility problem, please. A company that is willing to do this doesn't care too much about compatibility.

Security is a low priority for them, trustworthy computing notwithstanding. They think it's a "feature", and optional as such. Here's the proof.

So, as far as clevercactus is concerned, I can say what I will NOT do. I will not, facing a widespread security problem, start a "user education campaign", like Microsoft is doing this week. That is an insult. I will use the money on development. I will try to come up with innovative solutions for the problem, and I will try to understand how it can be solved, not just for the next version, but for current versions. I will ask users for input. And, you know, if I had 50 billion dollars in cash (heh) I would think about how to use it properly. Even assuming that this is some intractable problem (it's not), you could fund a good number of crash research projects to find good fixes, no?

I don't know, I guess that I could not provide a guarantee (in terms of legal liability, no small company could, probably, and an EULA would reflect that), but at least I would be honest and humble in the face of a mistake, and I'd do my best to fix it, and communicate that to users (and get their input), instead of subjecting them to an "education campaign" which essentially arrogantly says that "we're fixing it up, you just get educated while we come up with something for morons like you", and "oh, here it is. BlahBlah XP is more secure. Pay up." and then have it blow up all over again. This is not a one-time problem. This is a pattern of problems that keeps showing up, over and over, and it's been happening for quite a while now. If it happens once, maybe even twice, it's an honest mistake. If the exact same thing happens three times, except that it gets worse every time, well then...

And, of course, as I've said before, Microsoft has even more responsibility because of its dominant position in the market, and its immense resources.

Roger said, essentially as the core of his argument:

People who open unknown attachments are jumping the curb. People who don't run antivirus software aren't wearing seatbelts. Safety is in the steps you take to protect yourself, not the responsibilities you shift to others.

Now, I can add to what Christian said, by taking each sentence in turn.

"People who open unknown attachments are jumping the curb."

No. Opening an attachment is trivial: it's two clicks and one confusing warning message, and you can get an amazing amount of damage from such a simple action. "Jumping the curb" implies a lot, not least of which is the screams of the people you're running over, not to mention crashing into trees, etc. I'm not making a literal comparison, I'm just saying that something that causes *so much damage* should be hard to do and to keep doing. That is not the case with attachments.

"People who don't run antivirus software aren't wearing seatbelts."

Conversely to my previous point, "safety" should be easy to obtain. Putting on a seatbelt is easy. Installing, maintaining, and updating AV software is hard. And expensive. The seatbelt comes built in. Easy to use. And it just works. AV software is a long way from that.

On PCs, particularly on Windows PCs, doing a lot of damage is easy, while avoiding damage is hard and expensive. It should be the other way around.

This ties in to Bo's first point about how hard it is to certify different configs, platforms, etc. I agree with Bo that it's difficult. What I am saying is: a) Microsoft's gut reaction of "people are wrong in opening attachments" should change. If people keep doing it, then they are right, and Microsoft is wrong in assuming they won't. b) Doing damage should be difficult; increasing safety should be easy. Consider how difficult it is today to configure Internet Security options in IE, or how braindead Outlook XP's attachment "security" is (executables are not shown; that's it).

I am not asking that Microsoft be perfect. I know the problems they face, they have a huge market, etc. I am asking for something simple: a) to be treated with respect as a user--that is, when there's a problem like this, please don't say "Oh, users are idiots, they don't do what we tell them to do"--and b) that they instead try to find a way of solving the problem while maintaining functionality. I would like to see progress in these areas, where solutions are truly solutions, and where the way to stop an engine from blowing up is fixing the engine rather than preventing people from ever turning it on.

I don't think it's too much to ask for, no?

Categories: technology
Posted by diego on August 22, 2003 at 6:06 PM

an evolvable military platform

From Wired: The best defense is a good upgrade, about the USS R. Reagan--the aircraft carrier, not the president :-).

Categories: technology
Posted by diego on August 21, 2003 at 9:58 PM

it's not the users

Okay, Microsoft news of the day: first, they warn of three new "critical" IE flaws. Then they say that, by the way, Windows patches might become automatic. Have you ever read the licensing agreement of Microsoft software? It gives them rights to do almost whatever they want. Now, automated. Does anyone think that they would use it for something useful? Why would they want to deliver patches automatically, since patches don't seem to work anyway?

On the topic, Scott says:

So tell me when is someone going to sue Microsoft in a class action lawsuit about shoddy security practices? Couldn't this be a tobacco lawsuit kind of thing?
I am incredibly surprised that no one in the US wants to take Microsoft to task for this. They've sued McDonald's for making people fat, for crying out loud.

In part, I think, this comes from a certain misplaced perception that is quite widespread. It goes like this: "Oh sure, Microsoft is bad, but users are part of the problem too. You know, if they just stopped opening attachments... and it's not like they get no warnings...".

Users are not the problem. Until we, in the software community, take responsibility for what we produce, this isn't gonna get better. Let me put it another way, with an example that I came up with in an IM conversation today.

Say that a car company, for example Fantastic Motors, creates a particular type of car that, when driven beyond 70 MPH, becomes so unstable that it rolls over and explodes. Because of this, there are warnings everywhere. You take a course that says you should never drive beyond 70 MPH. Whenever speed increases, you get warnings on the console. Mechanics that you meet on the street explain to you how you should never, ever drive fast. Now, everyone knows it's just gonna happen that people will, intentionally or by mistake, drive over the limit. People will die.

Now, in that case, would you blame Fantastic Motors, or say "Oh, it's the drivers that never learn"?

If you think that my example was ridiculous, think again. Maybe you remember that Ford was in seriously hot water a couple of years ago because of tire problems with their SUVs. The reason? The tires were being used "beyond spec". They were disintegrating mid-trip, causing catastrophic accidents. Of course, Ford told users to check tire pressure. Of course Ford told people not to do X and Y. Of course people were told to check their car regularly. And, of course, drivers, users, sometimes forgot, with terrible consequences.

Back then, was anyone saying, "oh, these drivers, they never learn."?

No.

Software is NOT different.

Update: The Register has an example that is, well... exactly like mine. Heh.

I always try to tell people, when they say, "I don't know what I did. The computer stopped working." I always ask them: "If your fridge stops working, does it ever occur to you to say 'I don't know what I did, the fridge stopped working'?" They say, "No. I'd say, 'The fridge isn't working.'"

There's a big difference.

I know of no other industry in which the customer willingly takes the blame for the stupidity of the provider of the good or service. Customer support people reinforce this tendency ("Are you sure the computer is connected to the power outlet?"). The way software is designed reinforces this tendency ("Please read the following carefully and select the appropriate option"--to which you could almost add "you moron!"). Users get blamed all the time. Well. Sometimes the user might be at fault, but until software actually works properly, that's definitely not where we should start.

Users are not at fault. We, the developers, are. And the biggest offender of all is Microsoft. They should be ashamed. They should get their act together. It's called corporate responsibility. Which in their case is even bigger, because they own not one, but several monopolies.

I just turned on the email server for a moment to see what was happening. And I'm still getting one email per minute.

And you know what? It's not that "email is broken".

It's not the "users' fault".

It's Microsoft's.

Categories: technology
Posted by diego on August 21, 2003 at 9:15 PM

symbian's numbers

Boom in Symbian handset shipments:

The cell-phone operating system maker announced on Thursday that 2.68 million handsets based on its software were shipped in the first six months of 2003. This is more than ten times as many as were sold in the same period a year ago.

Cool! Which reminds me, pretty soon I'll be posting part two of my Symbian dev intro guide.
Plus: Russ's opinion on the news, from mobitopia.

Categories: technology
Posted by diego on August 21, 2003 at 7:08 PM

email-less

I have just shut down my email server.

At a minimum, I plan to keep the server down for one day. I will soon set up a CGI form to send me emails. It's a waste of time, but nothing compared to the time I'm wasting on these virus-ridden emails. Matthew, who develops and runs AlienCamel, offered to help. Thanks! I will seriously consider whether to switch to something like AlienCamel, or simply ditch email, or what. I will update here on the solutions I find, if any. For the moment, just don't send me emails, as they will bounce. I will post again when the CGI form is ready.

Oh, and BTW, I've received very little spam these last three days. What is up with that?

Categories: technology
Posted by diego on August 21, 2003 at 1:12 PM

upgrade or die

Wow. From a News.com article:

As of Oct. 15, users of Microsoft's free Web-based MSN Messenger and its Windows XP-based Windows Messenger will need to upgrade their software to a newer version or be shut out of the service, the software giant said Wednesday. MSN Messenger users will need to upgrade to version 5.0 or higher; Windows Messenger customers will need to upgrade to version 4.7.2009 or higher; and consumers with MSN Messenger for Mac OS X will have to use version 3.5 or higher. The last MSN Messenger to be released was version 6.

According to Microsoft spokesman Sean Sundwall, "Security issues that could be posed (on older versions) require us to force an upgrade." He declined to detail the security issue, saying disclosure would "put customers at undue risk."

Meanwhile, Oct. 15 also will mark the deadline for Trillian support for MSN Messenger. Trillian is software that integrates multiple IM clients into a common interface. While it doesn't enable IM services to communicate directly with one another, it lets people view all of their buddy lists from various services under one window.

Now, ain't that a nice thing to do. It doesn't matter that people don't want to "upgrade" every two seconds. It doesn't matter that people are sick and tired of "security patches". Except now it's not just "upgrades" for "security reasons". It's upgrade now or it stops working. The new version will be more secure, right? Just like the new versions of Outlook and Windows were supposed to be more secure? Good, good, I see. Let me just get this abacus here for my computational needs, just in case, you know...

They have to pull this kind of stunt right in the middle of two of their most widespread security crises ever? Oh, right, of course, "security". I'm sure that wiping out Trillian connectivity in the process had nothing to do with that. Everyone will understand "security" these days. Sure. Sounds a lot like a "PATRIOT upgrade", if you know what I mean.

Oh, right, and I'm sure that this "upgrade" has nothing whatsoever to do with this, right? Of course not.

I am, quite simply, astonished that they are not more sensitive to their customers, to increasing interoperability, and to letting users choose which product they like best, dammit!

Yes, I'm still receiving one email a minute. Yes, I'm still incredibly pissed off at Microsoft.

The deadline approaches.

Categories: technology
Posted by diego on August 20, 2003 at 11:20 PM

I'm mad as hell... and I'll take it for one more day

Kevin Werbach is right. This last day could very well be the day email died. I am still getting one email per minute. I am even more pissed off at Microsoft. And even though I am seriously looking at how to do whitelisting/challenge-response in cc, I know that's only half of the solution, since emails still clog my server inbox, and at 110 MB per day that's nothing to sneeze at.
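For what it's worth, the whitelist half of that idea is almost trivial; the hard part is the challenge flow, and the bandwidth already spent receiving the message. A minimal sketch (illustrative only--this is not clevercactus code, and the addresses and names below are made up):

```python
# Sketch of the whitelist half of a whitelist/challenge-response filter.
# Unknown senders get a challenge message instead of delivery; once they
# reply, they would be added to the whitelist (that step is omitted here).

WHITELIST = {"alice@example.com", "bob@example.org"}

def classify(sender: str) -> str:
    """Decide what to do with a message based on its sender address."""
    if sender.lower() in WHITELIST:
        return "deliver"
    return "challenge"  # send an "are you human?" reply; deliver on response

print(classify("Alice@Example.com"))  # -> deliver
print(classify("worm@spoofed.net"))   # -> challenge
```

Of course, worms like Sobig spoof sender addresses, so a whitelist alone will still let through anything forged to look like a known correspondent; that's part of why it's only half a solution.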

If this keeps up I will ditch email. I'll give it one day, starting now. In the meantime I am looking at the latest version of Red Hat Linux. I still use Win XP as my OS, and I have to use it for development and such, but this is just abuse, even if I haven't been harassed as others have been. The great Microsoft exodus should begin momentarily.

Update: From the comments, I guess that what I said wasn't clear (reading it over I admit it was a bit muddled :-)). The email I'm receiving has nothing whatsoever to do with me running Windows. I am being spammed with viruses from people that are infected. I am not contributing to the spread in any way. My email is received on a Linux box that I use for hosting. Essentially what Nex6 was saying in the comments. The only way I have to stop these things from showing up in my mailbox is to put a virus filter at the SMTP level that would bounce messages when someone attempts to send the virus, but I don't have the software, I don't have the time to look for it, and I don't have the time to install it. Furthermore, I don't want to have to go through all this crap because of ActiveX and the shoddy security model of Windows. So.

Point number one is that if this keeps up, by tomorrow I will disable my email account. Maybe switch to a new name. That will get the email bounced with no work on my part. I have specifically been thinking about how I would simply stop using email at all. I am sure that it can be done. I just need to think of the cases that I want to cover, and how...

And then, point number two is to start ditching MS stuff whenever I can, my own tiny bit of protest. Like Scott says:

Outlook is a joke. No sane computer user today should use it. If your company makes you use it, go to your CEO and explain how much time and money his company is losing by using it. I use Eudora; there are several other good non-Microsoft products depending on what platform you're on.
Yes. Yes. Definitely. And clevercactus is one. But any client will do. The madness has to stop. Mass exodus now!

Categories: technology
Posted by diego on August 20, 2003 at 12:54 PM

yep, that's the one

A few hours ago I started getting about one message per minute (!) containing the good ol' Sobig Worm for Outlook. Good thing clevercactus doesn't get hit by it :-). Anyway, I was curious as to what had happened, and to the rescue comes a News.com article:

The Sobig e-mail virus that caused havoc two months ago has reappeared in a virulent new form, according to e-mail service provider MessageLabs.
Yep, that's the one. MessageLabs is right, it's baaack, but worse. Much worse. So far the subject lines I've seen include: "Re: Approved" (the classic), "Thank you!", "Re: Thank you!", "Re: Details", "Re: that movie" and "Re: wicked screensaver".

Jeez. Such a waste of bandwidth, processing power, and time. Hopefully with this and other high-profile, recently noted occurrences Microsoft will finally take note and get their act together. Whatever they've done until now is clearly not enough.

Update: in a comment, Juan Cruz pointed out a slashdot thread on this new Sobig variant, mentioning that some people are saying this is the one that is supposed to erase the effects of the previous worm that was making the rounds last week. However, they're not the same. See here. This new Sobig worm is just incredibly annoying, and it has no redeeming qualities. :-)

Update 2: Btw, I'm still being hit by about one message per minute. Simple calculation: the worm is about 80 KB. That means in a day I'll get 24 x 60 x 80 KB = 110 MB, give or take a couple of megabytes. One hundred and ten megabytes of traffic!! I am really pissed off at MS. The addresses are spoofed, and at times my address is being spoofed, so I get rejected viruses from people that were receiving it. And I don't even want to multiply by, say, 1% of email users that might be infected... say... five million? Ugh.
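The back-of-the-envelope estimate above holds up; a quick check (80 KB is the approximate message size used above):

```python
# Daily traffic from one ~80 KB worm message arriving every minute.
worm_kb = 80                    # approximate size of one Sobig message, in KB
messages_per_day = 24 * 60      # one message per minute
total_mb = messages_per_day * worm_kb / 1024
print(round(total_mb, 1))       # -> 112.5, i.e. roughly 110 MB per day
```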

I can only wish that MS will get hit by this worm as well, just as I and countless others are; maybe that will make them realize what a mess this is, and force them to fix it.

Categories: technology
Posted by diego on August 19, 2003 at 5:17 PM

flame warriors

[via Scripting News]: Flame Warriors, a taxonomy of people's roles on online discussions. Heh.

Categories: technology
Posted by diego on August 16, 2003 at 11:47 AM

don't push!

Scott on Wired's "Push" story and a few other things:

In its heyday, Wired magazine gave the entire technology and Internet press a steady stream of wacky, outrageous material to react to. On the blog he has created to accompany his new history of Wired, "Wired: A Romance" (Andrew Leonard's Salon review is here), Gary Wolf is posting some reminiscences and other Wired miscellany.
I have to agree with his judgment that Wired's worst story ever was the "Push" cover story he was credited as co-author of. Wolf's recollections of how that absurd piece of puffery came into existence is illuminating and worth reading; Wired, it seems, was even more seat-of-the-pants in its editorial process than those of us on the outside could tell. I'll stand by my assessment of February, 1997, that the story wounded the publication's credibility. But reading Wolf's account, you can't help feeling a little more charitable toward the people responsible for the open-ended, improvisatory provocation that was the Wired game. Viewed as a moment rather than a movement, it all seems a little funnier and less heinous. After all, the next three years would see far vaster corporate scams unfold -- and ones with far less style.

I remember this story really well--and I have to admit that it really had me going for some time afterwards. Of course, reality settled in pretty soon (not that Wired ever published a correction of some kind, though--and they should have! A story just as big, with the cover changed to say "don't push" or something). Misinformation was not spread "on purpose"; it was all part of the big machinery of hype that took over the tech world during those times; Wired just became the amplifier for all sorts of madness that was going on in The Valley. Strange how these feedback cycles can occur--and interesting how the process behind them eventually comes to light, to the everlasting wonder and amazement of all. :-)

Categories: technology
Posted by diego on August 15, 2003 at 9:28 PM

"weblog" is one word

An article on weblogs in this week's Economist. Some notes of interest, but aside from all that, can I say something? (Diego asks, then Diego replies: Yes, of course you can! Heh). So here goes: Will some publications please stop writing "weblog" as "web log" (note the space). News.com does this often. The Economist has apparently followed suit. "Web log" reads... broken. Just write "weblog" or "blog" and be done with it. It's one word, not two. Or will they now start writing "cyber space" too?

Categories: technology
Posted by diego on August 14, 2003 at 8:48 PM

two cool books

Today (finally!) I got two books that I needed for my thesis (or is it "wanted" instead of "needed"?). One of them is Packet Communication, Robert Metcalfe's 1973 PhD thesis in which he set the basis for Ethernet (which he developed shortly thereafter while at Xerox PARC). The other one is Ruling the Root: Internet Governance and the Taming of Cyberspace by Milton L. Mueller, which talks about the development of DNS and all the issues surrounding it. I've flipped through them, and I'm already drooling. I know most if not all of the topics discussed in these books, some of them very well, but sometimes it's easy to get hooked on a single element ("Hey, how cool is this proof that algorithm Z can be relied on to behave at O(log N) complexity?") and lose sight of the larger picture. "Connecting the dots" it's called these days. More comments after I've read them.

Categories: technology
Posted by diego on August 13, 2003 at 7:42 PM

search tips

These past few days I've been doing a lot more internet searching than usual, running massive searches for references and old (old = 30 years old) research that might relate to mine, and making absolutely sure that I haven't missed anything. One important resource is the CiteSeer database for scientific papers, which is truly fantastic, not just because it gives you the papers, but context for the papers, and bibliography references (in BibTeX format!). Aside from that, obviously the number one resource is google (what else), but Dylan told me over the weekend a couple of google tips I didn't know. For example, you can put a tilde "~" sign before a keyword or set of keywords for google to automatically include their synonyms in the search. Furthermore, you can search for, say, "~tornado -tornado" to get all the synonyms of "tornado" but not the results for "tornado" itself. How's that for useful? A full list of advanced google tips is here.

Categories: technology
Posted by diego on August 11, 2003 at 4:59 PM

bloggercon invite

I got an invitation to BloggerCon (Thanks Dave!), and although I might not be able to go (money... time... etc...) I'll definitely try to make it. What's really interesting about this conference (if I got it right!) is that it's not for techies; rather, there will be technical people but the idea is that we can look at how weblogs are being used in different areas, and talk to users. Or rather, developers can listen to the users :-). Very cool. All too often we end up handing down pronouncements from our ivory tower that have little relation to what users actually want. Specifically, how is blogging affecting different activities? How are people using blogging tools, syndication, etc, in different environments? (And one question that they might probably answer as well, even partially, is: how can we, the developers, make things easier for them?). Lots of open discussion and similarly good stuff.

Blogging has already opened up avenues of communication between developers and users that simply didn't exist before, and I think it's given a new (more "real") sense to the idea of a "user community" or "developer community". BloggerCon will take that one step further.

Categories: technology
Posted by diego on August 11, 2003 at 4:52 PM

apple's direction

From BusinessWeek online:

Rather than accept being a niche PC maker, Steve Jobs is transforming his baby into a high-end consumer-electronics and services company
The "lifestyle" idea merges well with Apple's approach to product development, and it might be the right time. (Not that Apple invented this--Sony did--but Apple has been much better at getting its message out, while Sony's message has actually gotten a bit muddled for whatever reason. Jobs' strong direction probably plays a big role here.)

The idea of Apple as the "BMW" of computers goes back a long way (best described in Stephenson's great essay on the history of operating systems), but now, while retaining their "upscale" exposure, they are branching out a bit. What would be the comparison then? the Pottery Barn of tech? :-)

Categories: technology
Posted by diego on August 7, 2003 at 10:49 AM

google news search --in your email

This, fresh out of the oven: Google News Alerts. Cool!

Categories: technology
Posted by diego on August 6, 2003 at 12:08 PM

how to know when you support a format

I support RSS. I support Pie/Echo/Atom.

I wrote that, I re-read it, and then I thought: How is that possible?

This has always been my position, but, honestly: it sounds a bit naive, maybe dangerously unrealistic, no?

Somehow a perception has been created, that with RSS and Pie/Echo/Atom the choice is either-or.

Well. That, to me, sounds suspiciously like "You're either with us, or against us." I reject it almost instinctively.

And still I thought: "I support RSS. I support Pie/Echo/Atom."

Brent has his own (excellent) answer to this apparent dilemma, but I needed to find mine. So, in typical geek-obsessive fashion, I thought, ok, let's define "support", and see if the underlying semantics can explain what's going on here.

"Support" for a format in this context means (in my opinion):

  • Using it (in one or more ways) in the software I develop,
  • staying up to date in discussion of its specification, and/or its evolution, and (perhaps inevitably)
  • trying to contribute to those two areas as much as possible
Of course, there are other "levels" of support. One might only use the format (point one) but not have time, interest, or knowledge to contribute to it.

But I'm interested in the case where all three are present (for obvious reasons).

Making the list had the effect of removing whatever politics or perception of political problems there were. I've never been involved in all the acrimony, but I realize now that you'd have to be made of granite or some sort of expensive metal alloy not to be affected in some way. For me, it was creating a certain level of internal confusion because I supported both. This simple list helped me clearly separate what really matters from what doesn't.

Hopefully this is becoming less of an issue anyway. For example, yesterday Sam, who is largely responsible for keeping Pie/Echo/Atom on track, was also helping refine the RSS spec, and Dave was acknowledging it. The new RSS process is working!

Which is great news, because even though Pie/Echo/Atom is important, RSS is not going away. It is already widely deployed, and stable (even if there are some arguments about some finer points in the spec). It is simple. There are all sorts of toolkits available for using it. And, finally, the Pie/Echo/Atom spec is not stable yet: if, in the near future, I had to deploy a new application that uses syndication <wink> it wouldn't even be a question. It'd use RSS 2.0.
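Part of why "all sorts of toolkits" is barely even necessary: a minimal RSS 2.0 feed can be consumed with nothing but a standard-library XML parser. A sketch in Python (the feed below is made up for illustration):

```python
# Parse a minimal RSS 2.0 feed using only the standard library.
import xml.etree.ElementTree as ET

feed = """<rss version="2.0">
  <channel>
    <title>an example channel</title>
    <link>http://example.com/</link>
    <description>a made-up feed for illustration</description>
    <item><title>hello rss</title><link>http://example.com/1</link></item>
  </channel>
</rss>"""

root = ET.fromstring(feed)
# Collect the title of every <item> in the channel.
titles = [item.findtext("title") for item in root.iter("item")]
print(titles)  # -> ['hello rss']
```

That simplicity--required elements you can count on one hand, plain XML--is a big part of why RSS 2.0 is the safe deployment choice today.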

To quote Brent: "My focus remains making software that people like." A-men to that.

I support RSS. I support Pie/Echo/Atom.

And it doesn't have to be a conflict.

Categories: technology
Posted by diego on August 6, 2003 at 10:14 AM

(shudder) of cool

U2Log reports that

Holosonic Research Labs, a Boston area audio technology company, is working with U2 to design a unique audio spotlight system for the band’s next tour. Audio spotlight is a technology that allows sound to be directed at or projected against a particular location. Specifically, Holosonics is working to create a system in which the sound generated from Edge’s guitar can be “flown over” to swoop an audience.
(shudder) (eyes roll) (faints)

Categories: technology
Posted by diego on August 4, 2003 at 7:47 PM

resurrecting windows

Have a few minutes to kill? Go read Mark's how to install Windows XP in 5 hours or less. Funny, because it's true. (Btw, reading it I realized that the half-life of WinXP on my notebook is now 1.6 years. Amazing.)

Categories: technology
Posted by diego on August 4, 2003 at 7:37 PM

rss and pie/echo/atom news

CNET's Paul Festa reports on the (not)echo/pie/atom v. RSS argument. Long, long article. Little of substance, and lots of focus on the "personality issues", which is too bad. But that's how it is. (And, strange that neither Blogger nor MovableType was quoted in the article even once, when as far as I'm concerned their initial support for pie/atom/whatever was a big factor in getting it started.)

I've been silent lately on the topic of PAW (Pie/Atom/Whatever)--hey, PAW sounds like an interesting name! :). Mainly, I guess that I've felt, as others have expressed, a bit of frustration with the overall process--and that together with the recent release of clevercactus beta2 had the result of me stepping back a bit from public discussion on the topic, and from deeper engagement on the Wiki (Btw, maybe Shelley's comment on the 'consortium' refers to this?). Then a posting on Sam's blog (on which I commented) made me realize that I had ended up not commenting on this issue at all.

At the moment I can say this: I still contribute to the Wiki in what I can, make my concerns known either through comments or through email, and try to propose solutions for the problems I see--which is all I can do, even though I constantly feel that I am being "passed by". I've received some replies; in other (only a few) cases I've been either ignored or shot down for no good reason. I'm still uneasy about some things, not least of which is that I can't readily identify who is responsible for the decisions that are being made (and make no mistake about it, decisions are being made). This is not to say that the Wiki shouldn't be used. It should. I know that's the nature of the Wiki, but it needs (I'll say this for the bazillionth time) a steering committee of sorts, a group that we can identify decisions with, and that will take responsibility for how and why decisions were made. It's [Wiki]+[clear direction] that's needed, not an entirely new process. Having some idea of when discussion on particular topics will begin to be wrapped up would also help it gain focus, in my opinion.

Sam replied to my comments on his entry saying that all of this was very much on his mind, so not only am I not saying anything new here, but Sam (who deserves some kind of medal for his stewardship of the process so far) will probably be formalizing the process incrementally as time goes by. That aside, my feeling is that stable drafts are imminent in a number of areas, and that is a Good Thing. :-)

Since RSS is also part of the topic of this entry, in the middle of the pre-release rush there was a big piece of news I didn't comment on: Dave's announcement of the move of the RSS spec to the Berkman Center for Internet & Society at Harvard Law School. Whenever control of a de-facto standard moves openly and clearly to a not-for-profit organization, it's a good thing. Any qualms? Not really; to be honest, I was a bit confused about the choice of "location" for the spec, and whether it meant something or not. I've reached the conclusion that it doesn't seem to be of consequence at the moment and that it's probably good in that it keeps the bureaucratic load to a minimum (I suspect, though, that the decision could have long-term impact if the P/A/W spec moves to the IETF or W3C). The move is unequivocally a step forward, as evidenced by recent examples of work done towards clarifying the spec. I look forward to seeing new processes emerge around it.

Categories: technology
Posted by diego on August 4, 2003 at 5:07 PM

novell buys ximian

I go out for an hour, and when I come back Novell has announced the acquisition of Ximian. Wow. Why would Novell buy Ximian at all? Why would Ximian agree to be purchased by Novell? Is this a repeat of the Wordperfect/Quattro Pro situation? Or is this something else? Oriented towards services?

In other words: What does this mean? Too early to tell. Nevertheless, interesting, no?

Categories: technology
Posted by diego on August 4, 2003 at 3:52 PM

the microcontent client

An excellent article from Anil from nearly a year ago:

The microcontent client is an extensible desktop application based around standard Internet protocols that leverages existing web technologies to find, navigate, collect, and author chunks of content for consumption by either the microcontent browser or a standard web browser. The primary advantage of the microcontent client over existing Internet technologies is that it will enable the sharing of meme-sized chunks of information using a consistent set of navigation, user interface, storage, and networking technologies. In short, a better user interface for task-based activities, and a more powerful system for reading, searching, annotating, reviewing, and other information-based activities on the Internet.
Hm. Sounds familiar... :-)

Categories: technology
Posted by diego on August 2, 2003 at 11:32 AM

typepad's featureset

Rockin'!

Maybe it's the clean, elegant design of the page, but the featureset of TypePad is quite something to look at, particularly the lists features (e.g., ISBN lookup, FOAF...). It even has "post scheduling", which I assume (hope?) means that you can schedule a post to happen later, just like I wanted (although I'm a little confused about "scheduling" a post for the past, as the feature mentions). Hopefully some of these features will end up appearing in MovableType itself, or in the upcoming MovableType Pro.

Categories: technology
Posted by diego on August 2, 2003 at 11:22 AM

fixing SMTP

An article on the problems of SMTP, and some of the proposals to fix it.

Categories: technology
Posted by diego on August 1, 2003 at 11:52 PM

more on MS

News.com's Charles Cooper agrees with me. (Well, okay, who agrees with whom depends on your viewpoint, but I did post it first, didn't I? :-))

Categories: technology
Posted by diego on August 1, 2003 at 7:28 PM

you know you've got a minimalist UI when...

...people don't do anything after the site has loaded, thinking that the page is too empty, so something must be missing. According to this news item, this is what happened to Google:

In its early days, the company asked some focus group participants to search for information using its site. But many people, when they went to Google, did nothing for a minute or two.

When asked why, these apparent procrastinators said they were waiting for the rest of the site to load.

The solution? Put a copyright notice on the page, which people associate with "content loaded". Now that will get 'em surfin'!

Categories: technology
Posted by diego on August 1, 2003 at 10:58 AM

early-morning linking

I hadn't seen this before, and I found it while looking at referer stats (Awstats is great, but for tracking daily referers it's awful--everything gets lost in the noise). Scott, creator of the excellent Feedster (which for a very short time was called "roogle"--feedster is better :)), likes clevercactus! Thanks!

Scott also recently wrote about the increasingly ludicrous memory requirements of MS software in this entry. In summary: MSN Messenger, 30 MB; FrontPage, 280 megabytes... well, you get the idea. Jeez.

And, Gary has some interesting thoughts on different IMAP clients.

Categories: technology
Posted by diego on August 1, 2003 at 10:24 AM

wrong paradigm, michael

CNET's Michael Kanellos has an opinion piece on News.com about what will, in his mind, define the next decade (technologically speaking, that is):

The '60s and '70s were the decades of the mainframe. The '80s made up the decade of client-server computing. The '90s were the Internet years. Now we're entering the decade of the electronic butler.
This is complete, utter nonsense. Note that the other three "trends" are networking trends, rather than whatever trend the "electronic butler" falls in (no, not agents). Clearly there are other things going on, aside from networking, and they have their own trends. In networking, the next trend is going to be true peer-to-peer... self-organization at every level (client side, server side, middleware, semantic--yes, semantic--etc.). The networking layer of autonomous computing, so to speak. Even if self-organization weren't the big trend of this decade, as I think it is, "electronic butlers" still wouldn't be it. They won't be deployed in time. Let's be realistic here. We've been talking about Bluetooth since 1997 or whenever, and only this year did it start to get reasonably deployed. Desktop PCs don't even come with Bluetooth built in yet. "The Age of Automation" will certainly come, but not within the next five years. People will be too concerned with things like the PlayStation 3 (est. release date: 2005) to pay mind to that. I wouldn't mind, however, a cheaper, upgraded version of AIBO.

Categories: technology
Posted by diego on July 31, 2003 at 5:54 PM

browser news

[Both items originally via Erik] First, Cheah notes that Firebird 0.6.1 has been released, fixing the infamous autocomplete bug. That bug (which crashes Firebird 0.6 after a number of uses of autocomplete on web forms) was incredibly annoying, and it got me to the point where I was seriously considering ditching Firebird. Glad to dump those thoughts. Bad, bad thoughts. Bad. (Slaps thoughts).

Ok. Breathe deep. Then go download.

Then Dion talks about projects to combine Mozilla and Java. In particular the idea of using Java as a backend for XUL sounds interesting (as a cool combination at least) but the bridge between them would have to be rock solid to be useful. Something to follow anyway.

Categories: technology
Posted by diego on July 29, 2003 at 6:40 PM

cool apple-related blogs

[via Erik] Ben posted a list of some of his favorite Apple-related weblogs. Good linking!

Categories: technology
Posted by diego on July 28, 2003 at 2:11 PM

the stupid meme that wouldn't die

So MS does apparently stupid things, their software isn't up to par, and the recurring Microsoft-doesn't-get-it theme is on the rebound. Google, in the meantime, is being hailed as the savior for all kinds of things, and is basking in glory, giving tours of the company to actors and politicians (!?) and moving to a fancy new location. Meanwhile, the competition massing against Google on other fronts is looked at as a curiosity.

Similar claims can be made for other "next-big-thing" markets, such as mobile devices.

Ohmigod, could it be the end of Microsoft?

Any of this rings any bells?

As far as I can see, this is as much a repeat of the situation in 1996 as we could get. Back then, pundits of all stripes all but declared Microsoft dead: a huge, inefficient, PC-bound company that couldn't adapt to the brave new world of the Internet. Around that time, MS scrambled to create the IE team and go after developers to react to the competitive threat created by Netscape (and how's this and this for deja vu?). But back then, Netscape's advantage (like Google's or Symbian's today) seemed unassailable. Other companies with little expertise in the area were then looking at the browser market (and who isn't looking at search and advertising today?).

The browser wars were not the first time Microsoft demonstrated that it could fight back. The previous ten years were littered with once-powerful companies that had been literally squashed: Ashton-Tate. Borland (never mind its recent resurrection). Lotus itself (in the spreadsheet arena). WordPerfect. The difference with Netscape was that it all happened so publicly and visibly. Software markets had started to operate differently. But it didn't make much difference, not in the end, except for one thing: no one can claim not to have seen that Microsoft won't give up easily, or that, having missed a trend, it can still adapt.

Or that they can't move beyond their core markets; the idea that Microsoft can't move "beyond the PC" is ludicrous. For starters, in the space of only a few years they have carved out a decent portion of the server market. They have carved out another portion of the market for Internet applications. (Let's disregard how they did it for a moment.) Right now everyone seems to have forgotten about Palm for some reason, but PocketPC-based handhelds have been steadily growing while Palm is considered a has-been. In only a couple of years, the Xbox went from being a bunch of marketing documents to the second gaming platform on the planet. And Office, which everyone just seems to ignore, is a monopoly that Microsoft did not control at all as recently as eight years ago, and is currently the biggest source of revenue (and profits) for the company.

As I was saying a year ago, in a different context: they are the biggest software company in the world. They have managed software projects bigger than anyone else's, with bigger deployments than anyone else's, and pulled it off. They have, time and again, moved successfully into new markets, even as many, many of their attempts have failed. They have tens of thousands of really smart employees, thousands of whom are millionaires and who, strangely enough, keep working 12- or 14-hour days for the company. They have excellent management, and good software development processes. Somehow they maintain an internal image of themselves in which they are always the underdog, which allows them to react fiercely to threats. They have two strong monopolies, and a few weaker ones (multimedia encyclopedias, anyone?). They are pulling in more than $30 billion in annual revenue, with ten billion in net income.

Oh, right, and they have fifty billion dollars of cash and short-term investments in the bank.

It doesn't mean that Microsoft is unbeatable, or anything like that. Just look at Intuit, which has successfully held off MS for years. But early-mover advantage is not enough. The cool factor is not enough. Being profitable isn't enough either.

So why this recurrent delusion that they have gotten slow in their old age? Who knows. Let's skip the psychobabble.

I guess that what I'm trying to say is: those who underestimate Microsoft do so at their own peril.

Categories: technology
Posted by diego on July 28, 2003 at 12:21 PM

wired's search-- in rss

I was just doing a search on Wired News' archive, and I noticed that at the top and bottom of the search results, where you get the navigation links ("next"... "back"), there was an "(rss)" link that went to an RSS version of the search (here is a slightly recursive example :)), and it even matches the position in the search. Very cool! (Btw, since Wired search is powered by Lycos I went there to see if they were doing the same thing, but no. Too bad.)

Now, when is Google going to do something like that for Google News? Or for Google searches?

Categories: technology
Posted by diego on July 28, 2003 at 10:27 AM

.com viruses?

Dylan has a good question about the use of the .com extension as a Windows virus-propagation system. Considering all the stuff that's been done by virus makers just to get you to open the payload, it is a bit surprising that this hasn't been used extensively.

Categories: technology
Posted by diego on July 26, 2003 at 10:14 AM

the new HP

An interesting interview with Carly Fiorina with the News.com staff. Quote:

One of the things we have to let go of is this notion that growth in the technology industry will be driven by the next big thing, by the next "killer app" or hot box...The real big thing in technology is that all this stuff has to work together. And when all this works together, what do you have to think about? Security, liability, mobility, rich media, total cost of ownership--all these things matter.
I think that both things coexist. On the other hand, what might be the new "killer app" is something less obvious, less "clean" (i.e., "a web browser") and more reliant on interconnections between different tools, usage patterns, etc. When a number of things are put together (security, mobility, low TCO, ease of use), the quality of what can be done with the tools changes.

Categories: technology
Posted by diego on July 24, 2003 at 2:01 AM

sony's new pda

Expensive, yes, very much so, but loaded with features. Very cool that it comes with both WiFi and Bluetooth built in--definitely the way of the future. (Although I think that one of the high-end HP iPaq devices was the first to do it). If Palm/Handspring can also release something along these lines (and if price drops below that of comparable PocketPC devices), maybe PalmOS won't sink into irrelevance after all.

Categories: technology
Posted by diego on July 19, 2003 at 1:15 PM

the limits of MIDP

Jamie talks about one of the great disadvantages of MIDP: accessing information present on the device itself. MIDP-based applications are completely restricted to a sandbox, both in terms of execution and storage, which is bad (the restriction by default is ok for security reasons, but there should be a way to go beyond it): after all, part of the point of building an application for a mobile phone is that you can interact with its core functions, and those are largely centered around data. Since MIDP doesn't allow access to JNI either, there is zero possibility for interaction.

And, as Jamie says, it is indeed strange that no JSRs are in place to deal with this. However, I remember reading somewhere (recently) that once the rollout of MIDP 2.0 is done the focus would soon shift to providing APIs to access the device's information.

Btw, my previous post on Symbian was just the first on several things that I wanted to comment on regarding mobile development. Next subject: J2ME. :)

Categories: technology
Posted by diego on July 16, 2003 at 7:15 PM

symbian's achilles' heel

Mobitopia

Recently I've been diving more into mobile development, and I've been experiencing firsthand the issues that a lot of developers are going to start facing more often as mobile development in general (and development for next-generation mobile phones in particular) becomes more pervasive.

For any new platform, in the end, that's all there is to it, isn't it? Developers. Even though platforms can rise (and have risen) to prominence on their strength alone, sooner or later it's third-party developers that carry them forward. Look at Palm, which sparked a development movement that fizzled out a while ago as the platform itself (and its support for third-party add-ons) stagnated.

Developers. Developers. Developers. I remember seeing that Steve Ballmer video that made the rounds a few months ago, in which he jumped around a stage, dark circles of sweat under his armpits, screaming that word over and over into the microphone. It was funny, in a way. But it also showed that Microsoft doesn't just understand this: developers, or more accurately, support for third-party development on their platform, are a big priority for them. In fact, Microsoft has repeatedly and actively leveraged its developer community to achieve dominance in new markets and platforms. The best example (and the most successful case so far) was how they used the Win32s/Win32c transition to move developers into the new environment that would be presented by Windows NT. They tried this with web-downloadable applications (remember when ActiveX controls were supposed to kill Java applets?), but that didn't quite work, in large part because downloadable applets (of any kind) weren't really as important as they seemed, thus forcing them to conquer the browser market by leveraging their Windows monopoly directly.

I remember reading that when the first project started at Microsoft to develop an answer to PalmOS, it wasn't based on Win32. On learning this, one of the holy trinity (I can't remember if it was Gates, Ballmer, or Allchin) sent a clear message to the people who were working on the software: What do you mean it isn't based on Win32? And that was it. A competitor project, one that was based on Win32 but was too big and bloated to gain serious traction on small devices at that point, took over.

That project became Windows CE.

And so began a long curve in which the WinCE base adapted a bit to the devices, and the devices grew powerful enough to support it. Up to now, when it can run on phones.

Some time ago I had the opportunity to play with the phone that comes in the Windows Smartphone developer kit. This is a pre-release device, mind you. Not deployed. Compared to Symbian phones (supposedly) in the same category, such as the SonyEricsson P800 or the Nokia 3650, this phone is downright pathetic. It doesn't have Bluetooth. It doesn't have Infrared. It doesn't have a camera. And so on.

But what it does have is a single, relatively simple developer kit, which integrates nicely both with other tools currently in wide use (Visual Studio) and, more importantly, with the knowledge that many developers have today. By the end of the year, most of the technologies that you can use on a PC will be available, albeit in a limited fashion, for Smartphone, including the .Net runtime. The phone comes with Pocket MSN Messenger, Pocket IE, and a bunch of other things built in. Its user interface is not easy to use, but it looks nice. It looks familiar.

For all practical purposes, the shift required in a developer's head to start developing for this platform is very small.

Developers who have been building applications for Windows CE/Pocket PC will have little trouble moving over to it. There will be problems, as usual: porting issues, incompatibilities, functions that are not available on the platform, etc. Sure. And, as more operators/device makers use the phone, the need for customization will grow, creating some of the problems that Symbian faces today. But they will be contained, because Microsoft exerts more control over its platform. Certainly more than Symbian does. Which is, precisely, what I wanted to talk about.

Symbian devices are already being deployed by the millions, by several operators. By all accounts, they are outselling, outmarketing, and out-everything MS Smartphones by a mile.

But development for Symbian is incredibly difficult to get started on. Why?

One word: Heterogeneity.

Each device maker that uses Symbian provides its own development tools. Nokia has one toolset. SonyEricsson has another. Motorola has another. And so on. Even for Java, where it would be expected that there would be more homogeneity.

True, it is possible to develop on one platform and then deploy on many, but testing is a nightmare. You actually have to test on different emulators, and then the devices differ from the emulators (which in some cases makes the emulator moot). Some phones have different capabilities. It's not a coincidence that Symbian is so hot on J2ME, since Java makes it easier to develop portable Symbian apps by pushing them to a portable, lowest common denominator. But if you want to develop Java applications for Symbian's J2ME MIDP 1.0 implementation, you've got to get toolkits from Sun, then from the manufacturers, and then finally test against each phone, because, hey, they don't even implement things in the same way (such as network socket behavior). What's worse, a J2SE application will not port over in any way to J2ME. The only commonality is syntax, and some of the basic classes. This is a minor point though, since writing an application for a radically different UI (like that on smartphones) requires a major redesign anyway.

Exacerbating these problems is the fact that there isn't a single information clearinghouse for Symbian development. Information is scattered all over the place, while Microsoft's is centered around a single site for all mobile devices (although device makers will probably add their own information as well for device-specific features). What's more, in the case of Symbian the most valuable information is not found on Symbian's site; it's found on the device makers' sites, such as Nokia's developer site. But Nokia has not been, historically, an organization that understands developers, or development toolkits. It understands consumers. Symbian understands developers, but Symbian can't help you if you want to develop for a Nokia 3650. Symbian provides pointers to an incredibly confusing array of choices: toolkits, IDEs, language-specific articles, etc. Here, click on this link, they say, and all will be well. And then it isn't.

This is, in my opinion, the greatest threat that Symbian faces from Microsoft. MS's ability to move developers over from other Win32 platforms, particularly PocketPC, and its understanding of developer tools and the developer community, has to be countered by Symbian in some form, or it will end up suffering the fate of UNIX in the '80s. There is time, because Symbian enjoys a huge lead in deployment, along with the goodwill of developers.

How to fix it? One way would be for Symbian to create a single SDK, built on a single IDE base (such as Eclipse), as a reference development environment that includes not only their own tools but also Sun's tools for J2ME, along with plugins for each type of phone provided by the manufacturers.

One download. One system.

That's it. The barrier would be immediately lowered.

This SDK with IDE included would come with a plugin architecture (which Eclipse already has) so that every time a manufacturer deploys a new Symbian-based device, they also provide a new plug-in with an emulator for it, as well as the additional libraries that can be used on the device. Sure, if Metrowerks or Borland want to improve on that, they can. But the basic system would exist, and Symbian development would flourish.

Then everything should be wrapped up to be accessed from a single website (including developer forums) that deals with issues for the common platform as well as device-specific forums. All managed by Symbian. The device manufacturers' sites would refer developers back to it. And so knowledge could be easily obtained, and shared, with the added advantage that companies like Nokia and SonyEricsson could go back to focusing on what they do best, which is developing consumer devices, not managing developer programs.

Categories: technology
Posted by diego on July 16, 2003 at 3:50 PM

comments: to close or not to close?

A strange blog-effect I've noticed happening more often recently is that, if comments are left open on archived entries, people who arrive at the entry much, much later (either through a search engine, or through an old link on another weblog) will add their comments as if the topic were still ongoing. This is a problem, not only because they might check back waiting for a reply that will almost certainly never come, but also because as the number of entries with topics that could elicit discussion grows, it can easily become a burden for me to monitor them and, for example, make sure that commenters are not posting advertising or something like that.

Sometimes the comments are a note of appreciation for information given in an entry, or clearly made by people who know they probably won't get an answer since the discussion is not ongoing, but just want to add some more information. Other times, though, those who post the comments almost certainly don't understand that this is a weblog, and as such it's personal. Some comments have asked more questions about the topic, others have criticized that more information isn't available on a given topic (!), others have at times asked for pricing information or wanted to purchase a device outright (!!), and so on.

One thing is for sure though: I have to think about a "comments policy", particularly for those who arrive here and don't know what a weblog is.

But policy aside, what to do? Close the comments after a period of, say, seven days? Stop using comments altogether and rely on trackbacks? Any semi-automated scripts that anyone knows about for MovableType that can deal with this (since I'd have to begin closing the comments on more than a thousand entries)?
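For what it's worth, the batch-closing part could probably be scripted directly against the weblog's database. Here's a rough sketch of the idea in Python, run against an in-memory SQLite stand-in; the table and column names (mt_entry, entry_allow_comments, entry_created_on) are my assumption based on MovableType's naming conventions and would need to be checked against the real schema before touching a live install:

```python
import sqlite3
import time

# Hypothetical sketch: close comments on entries older than seven days
# by flipping the allow-comments flag directly in the database.
# Schema names below are assumptions, not verified against a real MT install.

SEVEN_DAYS = 7 * 24 * 60 * 60  # seconds

def close_stale_comments(conn, now=None):
    """Close comments (flag -> 0) on open entries older than seven days.

    Returns the number of entries whose comments were closed.
    """
    now = now if now is not None else time.time()
    cur = conn.execute(
        "UPDATE mt_entry SET entry_allow_comments = 0 "
        "WHERE entry_allow_comments = 1 AND entry_created_on < ?",
        (now - SEVEN_DAYS,),
    )
    return cur.rowcount

if __name__ == "__main__":
    # Demo against an in-memory stand-in for the real database.
    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE mt_entry (entry_id INTEGER PRIMARY KEY, "
        "entry_created_on REAL, entry_allow_comments INTEGER)"
    )
    now = time.time()
    conn.executemany(
        "INSERT INTO mt_entry VALUES (?, ?, ?)",
        [
            (1, now - 30 * 86400, 1),  # old and open -> should be closed
            (2, now - 3600, 1),        # recent and open -> stays open
            (3, now - 30 * 86400, 0),  # old but already closed -> untouched
        ],
    )
    print(close_stale_comments(conn, now))  # prints 1
```

Run as a nightly cron job, something like this would keep old entries closed automatically instead of requiring a one-time sweep through a thousand entries by hand.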

...

And, yes, yes: comments welcome :)

Categories: technology
Posted by diego on July 16, 2003 at 11:11 AM

and so it begins

Wow. Yahoo! has acquired Overture. Back in February, when Google bought Pyra, I speculated (not that it was terribly original of me, but anyway...) that Google and the portals (Yahoo!, MSN, etc.) were on a collision course. Certainly the Pyra deal seemed to be something that would affect Yahoo!, and their purchase of Overture seems to me to be a declaration that the niceties are over. Things are really going to get interesting now...

Categories: technology
Posted by diego on July 14, 2003 at 4:19 PM

erik's favorite java bloggers

Erik posted a list of his top-ten favorite Java bloggers for his latest JDJ column. And what do you know... I'm one of them! Thanks Erik!

He should have counted himself too :) -- I doubt there's a better place to get news on almost anything tech (and especially Java and mobiles).

Categories: technology
Posted by diego on July 12, 2003 at 4:31 PM

the SCO-Sun deal

Last week I was wondering why SCO seemed to be going easy on Sun re: its ongoing "let's get everyone who uses Linux/UNIX" saga.

Today, I got my answer. Hm.

Categories: technology
Posted by diego on July 11, 2003 at 12:15 AM

google takes over SGI campus

And this from the let's-go-shopping dept.: SGI announced a deal with Google for a lease of their Mountain View campus. Cool!

I used to live right across Highway 101 from the SGI campus, and generally went rollerblading there on the weekend. It's a really nice place, next to the Amphitheater, a small lake (windsurf!) and good trails.

By the way, aren't we supposed to be in a recession or something? Tsk, tsk, tsk... :-)

Categories: technology
Posted by diego on July 10, 2003 at 11:04 PM

redefining the term "computer virus"

In a comment to my previous entry on linux/unix viruses, Jim said:

Well that's what you get for mixing up your terminology. It used to be that "computer virus" referred to something that spread due to user action (as opposed to "worm" which does not require human intervention). Now it's just a catch-all for anything nasty that happens to a computer.

A virus, in the classic sense, cannot affect unix-like systems very well, since it's extremely rare for executables to be writable by normal users or transmitted between machines.

Worms, on the other hand, are typically spread through network services, and are not bound by normal restrictions or usual use patterns of users.

Yes, it's possible to create ELF viruses. No, it's not a problem in practice. It's valid to say that unix-like systems are resistant to viruses in the extreme, whilst acknowledging that they are still susceptible to worms. Anybody who conflates the two issues, as the author of this article did, needs to learn a thing or two.

I realized when I wrote the post that I should clarify this--that is, that I think what we call a "computer virus" has evolved--and then I simply forgot. Well, now it deserves its own entry. :)

Jim is technically correct as far as current terminology is concerned, but in practice I disagree: the differences we used to give to "worms", "trojans", and "viruses" no longer apply. They're all viruses. Let me explain.

I was thinking of "virus" as the word is used in biology: quite simply, any organism that can self-replicate but requires a host (host in the biological sense) to survive, as well as some "function" of the host to self-replicate. The fact that we called "viruses" those that self-replicated through (say) EXE infection, and "worms" those that self-replicate through, say, an Apache bug, is simply a historical quirk. Mostly, in general terms, we were distinguishing infection that required humans (i.e., X sending Y an infected file, and Y executing the file, thus infecting the system) from infection that didn't (like most internet worms these days, e.g., SQL Slammer). One problem is probably that we tend to associate viruses with sickness (worms have so far been bothersome but not overly destructive), but not all viruses create problems; in fact, it's been speculated that they are an important element in allowing information flow within the gene pool of a species, and even across species. (Not that computer viruses are useful for this, but wouldn't that be nice... :))

In reality, if we are going to borrow the term "virus" from biology, it's the worms that should be called viruses, since they can self-replicate across hosts; in any case, most if not all viruses these days have "worm" qualities mixed in. A good example is Outlook viruses: they can transfer themselves autonomously, but require human activation (running the executable file).

The article's author probably should have made this clarification, but I think it's about time we put all these different categories together.

So, in my opinion: worms, viruses, trojans... They're all computer viruses, if we understand viruses as akin to their biological cousins. Some are more effective than others at self-replicating and transfer across hosts. But they all belong to the same "family" of "organisms". :)

Categories: technology
Posted by diego on July 9, 2003 at 11:46 AM

viruses in linux/unix

Interesting article on viruses in Linux/UNIX. It reminded me of something I had forgotten: the first worm (a.k.a. a "mostly harmless"--to quote Douglas Adams--networked virus) ever launched was a UNIX worm, in 1988, and it could also be argued that it was the first virus to be globally effective.

Categories: technology
Posted by diego on July 9, 2003 at 8:14 AM

on wired

A Salon article on the new book about Wired. Gotta read it. I can still remember getting the issue with a drawing of Earth and "The Long Boom" on its cover. Sure, it was hype. It was still a rush.

Categories: technology
Posted by diego on July 7, 2003 at 10:16 PM

why (not)echo is important -- part 2

Last week I posted a comment with my views on why necho is important. It was written from a more technical point of view, but the arguments ended up at what's really important anyway: the users, because in the end it's about creating useful applications, interoperability, etc.

I was reading Steve Kirk's open letter to the RSS community and I thought I'd elaborate on one of the points I have mentioned before (and, as I've also noted earlier, Jon Udell made similar comments recently), one that I think is getting lost in the discussion and is thus muddling things a little. For the rest of my argument, please refer to the previous entry I mentioned above.

Before going on: Steve puts forward the idea of creating a standards body for the current formats. I agree that would be ideal, but it's plainly clear that at this moment it can't happen. I think everyone, from their side of the fence, would agree with that as well. As I said in one of the posts I referenced above:

I think there's no doubt that Echo's happening, which is good. Also, there's no doubt that, ideally, it would be better if the process involved less infighting and was more evolutionary.
So, the reality is that we were not going anywhere before. The reasons aren't important at this point, and I don't want to dwell on a discussion that has gone on for long enough, and one in which, plainly, there can be no "winners". What matters is where we are today, and how to go forward from here.

On to what I was really going after, what I consider a point that has been largely ignored in this "compatibility" discussion.

The point is this: creating a new weblog syndication/API format will not be disruptive for users.

Why do I say this?

Consider the state of things today: weblog tools generate feeds in any of several formats (sometimes in more than one). The BBC uses one format. The New York Times uses another. When tools or sites provide information in different formats, they are, in fact, incompatible.

Do users know this? Do they care?

No.

Why?

Because tool providers have evolved to deal with a situation of multiple formats, and support all of them transparently. I know, because I've written software that works that way. All other aggregators do the same.
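As an illustrative sketch (not code from any actual aggregator), supporting all the formats transparently boils down to sniffing the feed type and dispatching; something like this function, which guesses the format from the root element of a feed:

```python
import xml.etree.ElementTree as ET

def detect_feed_format(xml_text):
    """Guess the syndication format from the root element of a feed."""
    root = ET.fromstring(xml_text)
    tag = root.tag.lower()
    if tag.endswith("rdf"):  # RSS 1.0 wraps everything in an rdf:RDF envelope
        return "rss-1.0"
    if tag == "rss":  # RSS 0.91/0.92/2.0 declare a version attribute
        return "rss-" + root.get("version", "0.91")
    raise ValueError("unknown feed format: " + root.tag)

rss2 = '<rss version="2.0"><channel><title>t</title></channel></rss>'
rdf = '<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"/>'
print(detect_feed_format(rss2))  # rss-2.0
print(detect_feed_format(rdf))   # rss-1.0
```

From here each format gets its own parser, and the rest of the tool never knows which one ran--which is exactly why users never notice the splintering.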

So the fact is that, today, users are not being directly affected by the multiplicity of formats, because we, the developers, have evolved to support a splintered market in a consistent way.

In the previous paragraph I said "users are not being directly affected" because they are being affected indirectly. How? Mainly, through the extended development time that supporting multiple formats means for developers, and consequently less time available to do new things.

So how does this reality affect the necho/RSS argument?

In my opinion, it gives us a good indication of what will happen when necho is "released". Tools will start to support necho as well as RSS. The formats will coexist, just as RSS 0.91, RSS 1.0 (RDF), and RSS 2.0 coexist today. Furthermore, this coexistence will be transparent, just like today. Over time, necho will, hopefully, become the standard. In the meantime, there will not be a major catastrophe of incompatibility (although we can't rule out minor problems). Eventually, some of the other formats might become less used, and will be phased out (something that is already happening, for example, with the transition from RSS 0.91 to RSS 2.0). And because RSS is currently used almost exclusively for updates, and regenerated constantly at each endpoint, there will be little if any switchover cost; again, as evidence I put forward the transition from RSS 0.91 to RSS 2.0 that happened late last year. (This is a point on which I disagree with Steve, who makes comparisons to Linux and Windows, which I think is inaccurate. The cost of switching binary formats is of a completely different order than the cost of switching RSS, as I've mentioned here, and as clearly shown by the RSS 0.91 to RSS 2.0 switch.)

Obviously, it's up to us, the developer community, to add necho support without disruption, and that's not a problem. After all, we are already doing it today, and moving most (hopefully all) tools to necho will eventually reduce work for developers, allowing us to, finally, concentrate on improving the tools rather than on how to let them connect to each other.

Note: As I said in the previous entry: this is an emotional subject for many people, so I'd appreciate it if the comments, if any :), remain on-topic, that is, they talk about the text itself, or the ideas, rather than about the people that stand for/against an idea, both for comments on comments, or comments on the entry. Thanks.

Categories: technology
Posted by diego on July 6, 2003 at 5:47 PM

addicted to data?

From the New York Times: The lure of data. A flood of comments about this came to mind: the consequences of technology, or not, but more importantly feedback loops created by extra tasks put on workers, which then increases waiting times for people that need their results, which then leads to people wanting to do other things while they wait... anyway. Maybe later. Back to work. No. I wasn't multitasking. Seriously. Oops, phone rings...

Categories: technology
Posted by diego on July 6, 2003 at 11:19 AM

ozzie on mobility

[via Francois]: Ray Ozzie on Extreme Mobility:

I believe we're currently in a transition period for personal computing: from a tethered, desk-bound, personal productivity view, to one of highly mobile interpersonal productivity and collaboration, communications, coordination. We're focused right now on devices and networks because we're coming at the problem bottom-up: preoccupied by gizmos and technologies' capabilities rather than focusing on how our lives and businesses and economies and societies will be fundamentally altered.
The article also includes an interesting summary of the evolution from Notes to Groove.

Categories: technology
Posted by diego on July 4, 2003 at 9:08 AM

the problems with extreme openness

The last couple of days I've been slightly frustrated with the NotEcho process, and I've identified two problems that exist in my opinion, and I'm proposing my solutions.

Problem One: the discussion has no direction. There are multiple elements discussed by multiple groups simultaneously, and key decisions are being made by people who have never written a tool, or who have their own agenda for echo, precluding input from more relevant people, such as those from Blogger or MT. Take for example the decision on how to treat comments (as entities of their own, or as another type of entry). This discussion was dead for a while until I added pointers to it in a couple of more "travelled" pages. Then a few people (about 12 through the whole process) started talking about it. Then a conclusion was reached. I tried to bring out more discussion, but kept getting the same replies (things like "I don't understand why you oppose this") when I had made clear that while I understood the theoretical reason, I couldn't easily see the practical application for weblog tools today. Somehow the idea of "supporting social software" got in the way (since everything is possible for software that doesn't exist), and from there productive discussion was more difficult, although in the end I did agree that it was an acceptable solution. But even though I accepted the solution chosen, it still bothers me. Why?

A "consensus" has been announced, with a grand total of eight (yes, that's 1000 in binary) votes cast. None of the votes came from a major weblog tool maker. (And, as far as I could see, I was the only one in that discussion who had implemented an aggregator. Not that this makes me special or anything, but I do think it changes my perspective, since I have a bigger stake in a useful solution--one that might sacrifice generality but be easier to implement and maintain, and that will actually be used by weblog tool makers.)

Am I the only one who thinks this is a bad way of making decisions?

Proposed solution for problem one: discussion will take place in a single place each day, or for a period of time (clearly announced on the front page); other parts of the wiki remain open and people can comment on them, but no conclusions can be reached on their topics until they are "lit up". Preferably, someone (I propose Sam Ruby) should be voted "Benevolent Dictator for Life" for the project ("Life" here meaning v1.0), and the BDFL will be the one who calls when consensus is reached on something. Also, the BDFL can declare an issue closed if it seems that no consensus can be reached (see Problem Two), and restart the discussion at a later date.

Problem Two: many votes, in many situations, are coming from people who, while smart and well informed, have no "investment" in how practical Notecho becomes to use, or in whether something makes sense from a practical, rather than a theoretical, point of view. Some vote but give no identity, which makes voting ridiculous (after all, with hundreds of updates a day, who can keep track of what's real and what's not in the Wiki?). In at least one case that I identified, a person making "serious" arguments, voting, and arguing forcefully about direction had actually started a weblog only a few days earlier. Consider the current discussion (if it can be considered a discussion and not a fistfight) over XML-RPC. It quickly became "an issue" on which people who had never before participated in nEcho suddenly turned up to vote, which is ridiculous. Many of those people have not contributed anywhere else.

Proposed solution for problem two: we need a mechanism similar to what open-source groups use for granting checkin privileges. Is this unfair? I don't think so. Apache seems to work remarkably well, doesn't it? And besides, not everyone is "equal". Can we seriously think that the votes of, say, Tim Bray, Sam Ruby, Dave Winer, or Mark Pilgrim count exactly as much as that of a bozo who doesn't even want to identify him/herself? I don't think so.

The NotEcho process is open--probably a bit too open. A bit less speed on decisions, fewer simultaneous discussions, better control of how "consensus" is defined, and making sure that those who vote have earned it will be a plus for everyone, and will improve the final result.

Categories: technology
Posted by diego on July 3, 2003 at 9:34 PM

say again?

Trevor Marshall talks to an SCO VP about the trial, and about what SCO is really after. A weird-feeling, unsettling article. The evasive responses of the VP in question don't just show what other things SCO has in mind, but also how ridiculous it is to try to establish derived-work rights of the kind SCO is talking about. As Marshall keeps asking ("So, is BSD a problem too?" "Is this clean?" "Is that clean?") he keeps getting answers such as "Well..."--except, amazingly, in the case of Sun. "Sun is clean," he says. Sun, whose software includes elements originally done by Bill Joy, who in turn did that work in (and was, in fact, responsible for) the original BSD distribution. Sun, which also sells Linux machines.

I don't understand. Is this because of the way Sun handled its licenses with AT&T or something? Or because they didn't? Mystery.

Categories: technology
Posted by diego on July 3, 2003 at 2:43 PM

why (not)echo is important

Daniel was commenting on yesterday's surprise announcement by Blogger that they are dropping their 2.0 API in favor of NotEcho, and among other things he said:

No one has pointed to any benefit of all this work, something that can't currently be done, that will make for a compelling new feature for my/our users. Anyone? I'm not asking you to reveal your new killer feature, but to remain at essentially the same place I'm at now, I'm being asked to rewrite to support "notEcho". Where's the upside for me? I'd like to understand. No politics. Just tech. Thanks!

There have been other arguments about this elsewhere, but I thought it would be useful to give my point of view. So here go my reasons for supporting NotEcho, and endorsing it for use within the software on which I work.

No politics. Just tech.

My view is simply that of a developer, and my reasons for thinking that NotEcho is good are simple (but I'll try to be thorough in laying them out, so the explanation might not be as simple as the reasons :)); here are the most important.

clevercactus allows both reading RSS feeds and posting to weblogs, so I've experienced both sides (i.e., "reading" and "writing") of this situation over the last few months.

Right now, to support reading/writing, a tool has to read three different weblog syndication formats, and even within the same format some components vary in how they are represented (e.g., dates). Furthermore, since the specs are ambiguous about what certain things mean (e.g., link vs. guid), the parsing has to be far more permissive than it should be.
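The link-vs-guid ambiguity can be made concrete. A hedged sketch (the fallback rule below is my reading of the RSS 2.0 spec; real feeds are messier, and plenty of them put opaque ids in guid anyway):

```python
import xml.etree.ElementTree as ET

ITEM = """<item>
  <title>a post</title>
  <link>http://example.org/archive.html</link>
  <guid isPermaLink="true">http://example.org/posts/42</guid>
</item>"""

def item_permalink(item_xml):
    """Pick a permalink: prefer a guid marked as a permalink, else fall back to link."""
    item = ET.fromstring(item_xml)
    guid = item.find("guid")
    # Per the RSS 2.0 spec, isPermaLink defaults to true -- but since feeds
    # in the wild put opaque ids in guid, treating it as a URL can be wrong.
    if guid is not None and guid.get("isPermaLink", "true") == "true":
        return guid.text
    link = item.find("link")
    return link.text if link is not None else ""

print(item_permalink(ITEM))  # http://example.org/posts/42
```

Every aggregator ends up with some variant of this heuristic, and no two variants agree exactly--which is the support problem in miniature.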

The blog API situation is even worse. Theoretically Blogger, MT, and Radio support the Blogger API, and both MT and Radio support metaWeblog. You'd think that at most I'd have to support the Blogger API and metaWeblog. But that's not the case. Again, differences in spec, ambiguities, etc., mean that the MT implementation differs from the Radio implementation. Actual code examples of the differences can be seen in the blogging APIs mini how-to I wrote up back in May (at Sam's suggestion, actually, in the comments to my original review of blogging APIs :)). The reasons for the discrepancies may be technical, or they may very well be political; I don't know, and I don't care either. At this point, this is the situation, and it's what we have to live with.

So, three tools, three implementations.

LiveJournal has its own API. Another implementation.

And so on.

Not good.

Now, according to the RoadMap, NotEcho will be supported by all weblogging tools. Blogger dropping 2.0 is excellent news: it shows their commitment to the new spec, which includes assigning people to work on it. SixApart is also involved, and will include NotEcho support in their upcoming TypePad service as well as in MovableType. Dave has (tentatively) stated that he will recommend that UserLand support the format. LiveJournal, which currently has its own (more powerful) API, will also support it. And although the process through which NotEcho is being developed at times seems a bit chaotic, it is clearly open to anyone (this is not a comment on whether other processes were/are open; it's simply an important quality of NotEcho), all of the information is in one place, and additional discussion happens on Sam's weblog. This helps immensely in involving other tool developers and providing a channel through which they can participate, further increasing the likelihood that support will grow in the future. The high level of (promised) support for echo today, the good foundation on which it stands for the future, and the clear "balkanization" of the current implementations lead me to my first reason.

As an example: clevercactus already has RSS 2.0 and metaWeblog support, which allows it to interoperate with several, but not all, tools (LiveJournal, for example, is not among them). But by adding NotEcho support (when it's done), clevercactus will be able to interoperate with all tools using a single API, both for reading and writing. That means a single system that works with all the others, across the board. This is an eminently good reason as far as I'm concerned.

Tech reason #1: Use of a single format across multiple tools means less code to develop, debug, and support, both for content syndication and for creation. This benefits users directly: we can spend less time implementing the same newPost() method in different ways and more time providing good products for them, or working on new features.
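To make this concrete, here's a sketch of what the same "new post" looks like on the wire under two of the APIs, using Python's xmlrpc.client to marshal the calls (the app key, blog id, and credentials are made up for illustration):

```python
import xmlrpc.client

# Blogger API 1.0: the post body is one flat string, plus an application key.
blogger_call = xmlrpc.client.dumps(
    ("0123456789ABCDEF", "1", "diego", "secret", "Hello, world", True),
    methodname="blogger.newPost",
)

# metaWeblog API: the post is a struct, so the title travels separately.
meta_call = xmlrpc.client.dumps(
    ("1", "diego", "secret",
     {"title": "Hello", "description": "Hello, world"}, True),
    methodname="metaWeblog.newPost",
)

print("blogger.newPost" in blogger_call)  # True
print("<struct>" in meta_call)            # True
```

Same intent, two incompatible shapes--and in practice each tool's server interprets even these differently, so the client code multiplies from there.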

I mentioned earlier the problems with spec ambiguities. These are found both in the RSS spec and in the Blogger API/metaWeblog specs. Before anyone goes ballistic, I'll acknowledge that there is debate on this point as well (as we can see in all the acrimony in the comments to Mark's post 'leave RSS alone' today). But the debate only illustrates my point: whether it's actually properly spec'ed or not, whether it's because of political problems, or a bit of both, the end result--the plain fact--is that there are different views on how things should work. This means support problems.

This point, summarized, is: even if everyone was using only a single set of specs, say, metaweblog/RSS 2.0 or even Blogger API 1.0/RDF 1.0, there would still be incompatibilities due to actual or perceived ambiguities in the specifications.

So:

Tech reason #2: A full, unambiguous specification of all components of weblogs (syndication formats, APIs, etc.) will also improve interoperability; it's not enough for everyone to use the same spec. Just as today you can look at a call to a POP server and decide whether it's standards-compliant, we should be able to look at a feed, or at the functions provided by a tool, and decide whether it's compliant. This will also save time for developers and improve the user experience.

A final tech reason for me is to properly codify (as per Tech Reasons #1 and #2) emerging practices that simply can't be supported today. A specific example has to do with comments and trackbacks.

Both comments and trackbacks are growing in use as a way of enriching interaction on sites. Neither is properly defined in RSS 2.0 (for reading) or in the metaWeblog/Blogger APIs (for writing). This means I simply have no standard way of providing client-side functionality for comments, since I can't support each person's own implementation of a comments feed. And since the RSS 2.0 spec is frozen (as are the other specs), there is no way this can be supported with the current specs.

Tech reason #3: An update of the specs to include concepts currently in wide use (that is, a codification of current practice, allowing room for extensions based on it) is both important in its own right and eminently useful technically, providing new functionality and taking creator/consumer tools to the next level.

As a postscript: reasons #1 and #2 seem to depend on everyone adopting the format, but that's not the case. Even if half of the tools mentioned in the roadmap adopt NotEcho initially, it would still be a win, since a larger number of products would be able to interoperate.

Note: This has proven to be a bit of an emotional subject lately (for a section of the blogsphere at least...), so I'd appreciate it if the comments, if any :), remain on-topic, that is, they talk about the text itself, or the ideas, rather than about the people that stand for/against an idea, both for comments on comments, or comments on the entry. Thanks.

Categories: technology
Posted by diego on July 2, 2003 at 8:44 PM

the standards debate

Jon has a summary of the state of the discussion, and I mostly agree with his conclusions. I think there's no doubt that Echo's happening, which is good. Also, there's no doubt that, ideally, it would be better if the process involved less infighting and was more evolutionary. But as Jon says (and as I've said before), the cost of switching RSS formats won't be as high as in other cases.

Categories: technology
Posted by diego on June 30, 2003 at 8:28 PM

the theology of google

An op-ed by Thomas Friedman in today's New York Times wonders: Is Google God?

That's right, boys and girls: all of you who had, up to this point, found solace in the quiet reverie of a church, or a synagogue, or a mosque; those of you who had hoped to reach enlightenment and be up there with Buddha; those of you who thought that God couldn't exist, because if he did he would be a masochistic, spoiled brat; those of you who sat squarely in the agnostic camp and said, "There's no data either way"; those of you who worship or live by any other beliefs; and those of you who just have no idea what's going on on this planet--well, it seems it's all over.

Kneel before your monitor, pray to the spirit of TCP/IP to deliver you from Evil, because Google is here.

What a load of crap.

How does he reach that conclusion, you ask? Well, apparently Google+WiFi means access to "all information", anywhere, anytime. This would seem to equal God. Never mind that "Google+WiFi" is the least of what's happening today; never mind that the pervasive decentralization happening at all levels signals a shift in how we understand (and use) computers, networks, and the Internet. Never mind that Google could easily be obsolete in five years (and Google knows that, too).

It doesn't matter that google doesn't show you "all information" but rather what it thinks is most relevant. It doesn't matter that google's index can easily get swamped by new information, and that it therefore sucks at finding old information that is quickly overrun. Google indexes the web as it is today, not as it was last year; it's impermanent, real-time if you will, with an attention span equal to that of the people who put their information on the web, which isn't saying much. That is good for many reasons. It's useful. Sure.

Google is a good company, it takes good care of its employees and it truly cares about what it does. Its main product is enormously useful. The company excels at what it does.

Does that turn google into a deity? Nah. But it does make it a good lever on which journalists and other hypemeisters can lean to extol whatever idea they have in their heads--all the better if it comes after a complimentary tour of the company, which is of course no coincidence. In the past few weeks I've read of several people who have "taken a tour" of the company's headquarters, as if it were a modern Mecca of some kind. Make no mistake, Google's appearance is carefully crafted, from its minimalist website design to its low profile in general (and its resistance to going public). I'm sure the PR guys at google work hard to create good publicity, but this is too much (and, almost certainly, unintended: after all, how do you control delusional journalists who happen to have a big bullhorn?). Googlers reading the article must surely feel happy and proud (and rightly so), but I imagine a tinge of uncertainty and fear must be mixed in as well.

After all, if people talk of you as God, is there anywhere to go but down? (Since you're not, and you know it).

I'll go along with the idiotic google-is-god meme just for a second, to add this: No one likes a Messiah that gets to live to old age and retire peacefully. And martyrs are always popular, even in the tech industry.

Just ask Netscape.

Categories: technology
Posted by diego on June 29, 2003 at 4:42 PM

so that's why...

A couple of days ago I linked to and commented on a Wall Street Journal article on RSS. In my comment I noted that the article was low on hype (and high on substance). That sounded kind of weird: after all, while accounts of weblogs have been showing up more and more in the mainstream media, it's still common to see the usual mantras repeated (you know... "just personal diaries"... "journalism is dead"... and so on), so it did strike me as interesting that this account was well written and well informed.

There's a reason: Jeremy Wagstaff, who wrote the article, has a weblog. Although it seems to have been set up relatively recently (the archives go way back, but the old posts contain--as far as I can see--only versions of his newspaper columns), he actually knows what he's talking about from first-hand experience.

And here's a link to an entry he wrote related to the article, where he mentioned cactus. :)

Categories: technology
Posted by diego on June 28, 2003 at 5:08 PM

cracking the code

I was thinking about WASTE and went to the nullsoft homepage to see what else was new. They had ripped everything out and left something that looked strangely like a code (here is a screenshot, for posterity). I looked at it for a couple of minutes, and a code it was indeed. Not hard to crack (the program ended up being quite short), and there's a message inside. Very cool. The things that we find entertaining...
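I won't spoil the actual message, and the cipher nullsoft used isn't described here, but for flavor, this is the classic first thing to try on a short ciphertext--brute-forcing a Caesar shift (purely illustrative, not the real puzzle):

```python
def caesar_candidates(ciphertext):
    """Yield all 26 shift-decodings of a Caesar-ciphered string."""
    for shift in range(26):
        yield "".join(
            chr((ord(c) - ord("a") - shift) % 26 + ord("a")) if c.islower() else c
            for c in ciphertext
        )

# "khoor zruog" is "hello world" shifted by 3; one candidate recovers it.
print("hello world" in list(caesar_candidates("khoor zruog")))  # True
```

With only 26 candidates you just eyeball the output for English--which is why a program like this "ends up being quite short".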

Categories: technology
Posted by diego on June 27, 2003 at 8:06 PM

echo and RSS

Sam responds to Dave's comments on Echo in this entry. The discussion has been quite civilized up to this point (although there were some close calls in the last few days) and it's clear where everyone stands. With Echo endorsed at this point by most of the developers in the space (with the caveat that Dave's endorsement is tentative), I think it's clear that it will be widely adopted, which is great.

Jon Udell gives a good summary of the Echo/RSS situation in his Conversation with Mr. Safe. He sticks pretty much to the line that it is a political problem, which I think is only partly true. If people can't agree on moving a spec forward technically, it is indeed political, but most of those involved claim technical reasons for it. So it's a muddle. While Jon makes a good point about the simplicity of RSS, he doesn't go further in noting that, unlike other formats, RSS is mostly used for "impermanent" things--not for archives (at least not publicly). So a migration won't be like, say, changing Ethernet for Token Ring or whatever. This is key, since it changes the cost of evolving the standard in a non-compatible way. Additionally, in the RSS world "non-compatible" already has no meaning: since almost all tools have to support both RSS 0.91/2.0 and RSS 1.0 (RDF), they are already prepared to deal with another similar, yet incompatible, format. This affects the timeframe required to support the standard across the board, which will be measured in months, not years.

The other important element is that Echo will also provide, first, a properly specified system (which doesn't exist today) and, second, a common weblog API based on that standard specification, which is extremely important for the future evolution of distributed tools based on the evolving read/write web.

Categories: technology
Posted by diego on June 27, 2003 at 4:16 PM

unmetered internet access in Ireland? Not really

Yeah, right.

Karlin mentioned this a couple of days ago, and today the BBC is running with it as well. Note the Jupiter numbers in the article: 1% of Irish homes have broadband. That has to be on par with various groups of fish that live in the deep trenches of the Atlantic ocean. Or maybe they can get broadband from submarines...

This is a PR stunt, plain and simple. UTV's "unmetered" access is actually limited to 30 hours a month (for the cheapest option; the other one is 180 hours). So they are only changing the "meter" from bandwidth to time. The result of the equation remains the same: X gigabytes for Y euro. How is that different from Eircom giving you DSL but limiting transfer to 4 GB a month? It's not. Unmetered access usually means unlimited transfer in a month for a flat fee. This is a flat fee, but not unlimited. So the typical result of unlimited access (bringing the cost of transfers down to a number low enough that internet use becomes pervasive) doesn't happen here. The point of unlimited access is precisely that you leave the machine connected all day and the internet becomes woven into the fabric of life. UTV's offer, just like any other offer for access in Ireland today, only lets you knit the Internet to one of your coats. Shame on the operators, and shame on the government that doesn't fix it.
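To see why a time cap is just a transfer cap in disguise, a back-of-the-envelope calculation (the line speed is my assumption, since UTV's offer doesn't enter into it; entry-level "broadband" around then was roughly 512 kbit/s):

```python
KBIT_PER_SEC = 512    # assumed line speed -- not a figure from UTV's offer
HOURS_PER_MONTH = 30  # the cap on the cheapest option

bits = KBIT_PER_SEC * 1000 * HOURS_PER_MONTH * 3600
gigabytes = bits / 8 / 1e9  # effective monthly transfer cap
print(round(gigabytes, 1))  # 6.9
```

So "30 unmetered hours" at that speed works out to roughly a 7 GB monthly cap: the meter is still there, it's just denominated in hours.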

Categories: technology
Posted by diego on June 27, 2003 at 12:25 PM

better than a pyramid: a pyramac!

[via Wired News] A Seventh Desktop Wonder:

It took the Egyptians hundreds of years to build the pyramids, but Kent Salas built his in six months. Plus, his glows in the dark.

Salas' Pyramac is a unique case modification -- a hand-made, translucent, pyramid-shaped Mac that glows vividly under ultraviolet light.

Cool.

Categories: technology
Posted by diego on June 26, 2003 at 6:31 PM

apple and microsoft

This Businessweek article measures Apple's future by MS's commitment to keeping Office for the Mac alive. I think this is a non-issue: Apple already has Safari, and the presentation software they announced a few months ago. You've got OpenOffice for the Mac and other alternatives, and, hey, clevercactus runs on OS/X too. :) Office vanishing from the Mac would be a blow, but not the killer blow the article makes it out to be. The Web (including weblogs) and messaging of all kinds are becoming more important than office apps. And then there's Apple recreating itself as a "digital lifestyle" company....

Then there's another take on what was mentioned in the Google/Microsoft article I pointed to yesterday: this article from Fortune, by someone who is completely unimpressed by the idea of Longhorn. It depends, really. If Longhorn is well done, a certain inflection point could be passed, and the computer will suddenly look very different. Whether that is enough not to be "bored" (considering that we can already "see" that kind of information space on the web) is another matter. But the effect on Google--and all other search engines, for that matter--shouldn't be underestimated.

Categories: technology
Posted by diego on June 26, 2003 at 6:26 PM

not email: RSS!

This article in the WSJ (registration required) talks about RSS as an alternative to email. Quote:

[...] Look at it like this: E-mail is our default window on the Internet. It's where pretty much everything ends up. I have received more than 1,000 e-mails in the past week. The vast bulk of that is automated--newsletters, newsgroup messages, dispatches from databases, press releases and whatnot. The rest is personal e-mail (a pathetically small amount, I admit), readers' mail (which I love, keep sending it) and junk. While it makes some sense to have all this stuff in one place, it's hard to find what I need, and it makes my inbox a honey pot for spammers. And when I go on holiday, it all piles up. Now, what if all that automated stuff was somewhere else, delivered through a different mechanism you could tweak, search through easily, and which wasn't laced with spam? Your inbox would just be what is e-mail, from your boss or Auntie Lola.

Enter the RSS feed. RSS stands for Really Simple Syndication, Rich Site Summary or variations of the two, depending on who you talk to. It's a format that allows folk to feed globs of information -- updates to a Web site, an online journal (a Weblog, or blog), news -- to others. These feeds appear in programs called news readers, which look a bit like e-mail programs.

This also makes sense for those folk who may not subscribe to e-mail alerts, but who regularly visit any number of Web sites for news, weather, movies, village jamborees, books, garden furniture, or whatever. Instead of having to trawl through those Web sites each morning, or each week, or whenever you remember, you can add their RSS feeds to your list and monitor them all from one place.

[...]

Part of it means throwing away what we traditionally think of as "news." Corporations are beginning to sense that blogs make an excellent in-house forum for employees. Small companies have found that running a blog for their customers -- say a real-estate agent sharing news and opinions about the neighbourhood property market -- pays better than any newspaper ad. Individuals -- consultants, columnists, one-man bands -- have, through well-designed, well-maintained blogs, built a critical mass of readers, some of whom become paying customers or subscribers. Teachers are finding RSS feeds useful for channelling subject matter to classrooms and sharing material with other teachers.

Nothing new, really, but interesting in that the summary is quite on the mark, and with little hype. RSS is definitely a candidate to replace email for some uses, but email is not going to go away anytime soon. We can chip away at its edges though, particularly for one-to-many (not one-to-one, or many-to-many) communication.
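For anyone curious about what a news reader actually does under the hood, here's a minimal sketch (mine, not from the article, and the feed content is made up for illustration) of pulling items out of an RSS 2.0 document with Python's standard library:

```python
import xml.etree.ElementTree as ET

# A made-up RSS 2.0 fragment, standing in for a feed fetched over HTTP.
FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>example weblog</title>
    <item>
      <title>not email: RSS!</title>
      <link>http://example.org/posts/1</link>
      <description>RSS as an alternative to email.</description>
    </item>
    <item>
      <title>standards at internet speed</title>
      <link>http://example.org/posts/2</link>
      <description>The Echo project gathers speed.</description>
    </item>
  </channel>
</rss>"""

def read_items(feed_xml):
    """Return (title, link) pairs for every item in an RSS 2.0 feed."""
    root = ET.fromstring(feed_xml)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

for title, link in read_items(FEED):
    print(title, "->", link)
```

A real aggregator would fetch the XML over HTTP on a schedule and remember which items it has already shown you, but the core is just this: parse, extract, display.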

Later: I guess my brain blocked it out at first, but I just noticed that clevercactus gets mentioned in the article, in the "for more information" sidebar, as one of the tools to try to get started with RSS. Cool!

Categories: technology
Posted by diego on June 26, 2003 at 10:56 AM

google's news

Several google-related items of note. Where to begin...

First, an interesting article from Salon: The google backlash.

Second, Aaron talks about the google AdSense program here and finds it interesting.

Third, News.com reports on the recent moves by Microsoft and concludes that Microsoft and Google may go head to head. Well, now, that's an original conclusion. MS goes after anything that is a big end-user market with low "revenue scalability requirements", and, more importantly, anything that threatens the Windows platform. As Google does. (Anything on which you spend too much of your time does.) Aside from the ultra-obvious conclusion, the article has some interesting information in it.

And, last but definitely not least, Google has released the new Google Toolbar 2.0 beta with --surprise, surprise-- a quick "BlogThis!" link. I'm sure it's only the first of many features that will tie blogs more deeply into the fabric of Google. It's Blogger-only though, which isn't so great. Sidenote: the "autofill" feature sounds simple, like something browsers already do. It's not. I'll talk more about its ramifications later...

Categories: technology
Posted by diego on June 25, 2003 at 8:47 PM

standards at internet speed

On the Echo Project (which I mentioned earlier -- that's right, the consensus for the moment seems to be that Echo is the way to go): I've been checking in consistently through the day (and commenting when I can add something I think is useful) and tons of things are getting done. For example, check out this minimal entry of Echo, which, as of this morning, didn't exist. :) Of course, a lot of things remain to be defined, even some items in that entry will probably change, and eventually we'll have to move on to other "missing pieces" (e.g., the EchoAPI). But it's happening.

Btw, Jon Udell has something to say about it, too.

Categories: technology
Posted by diego on June 25, 2003 at 4:42 PM

ashton tate and SCO v. IBM

Regarding the SCO/IBM lawsuit (previous comments here and here), Robert X. Cringely has (as usual) a few interesting and well informed things to add. Quote:

Ashton was a macaw that lived in the lunch room at George Tate's software company, Ashton-Tate, home of dBase II, the first successful microcomputer database. There is a lot about that long-gone company that was unusual. There was the macaw, of course, which was named for the company, not the other way around. There was George Tate, himself, who died at his desk when he was only 40, but still managed to get married two weeks later (by proxy -- please explain that one to me). And later there was Ashton-Tate's copyright infringement lawsuit against Fox Software that pretty much destroyed the company when it became clear that Ashton-Tate didn't really own its database. NASA did, which meant that Fox had as much right to dBase as did Ashton-Tate. All this came to mind this week while I was thinking (still thinking -- this story seems to never end) about the SCO versus IBM lawsuit over bits of UNIX inside Linux. There is a lot SCO could learn from the experience of Ashton-Tate.
Great article.

Categories: technology
Posted by diego on June 25, 2003 at 1:10 PM

echo or pie?

The Wiki for the "conceptual model of a log entry" that Sam Ruby started a few days ago (as I was mentioning here) has been gathering speed. After reading through most of the material I started contributing some of my thoughts today (after all, this is exactly what I wanted).

Here's an article by Tim Bray in which he talks about the effort, why it's needed, and where he'd like it to go. Another interesting piece of information in it is that Sam told him that he'd gotten permission from IBM to work on it full time, which is great news.

And, both Six Apart and Blogger support the effort.

Sam has said that he'll be discussing one topic a day on his weblog in detail. Current topic is linkage.

Finally, the name: there's an ongoing poll in the wiki. The options for the moment are Pie and Echo. I like Echo better; plus, eventually we can use something like "EchoFeed" for, well, feeds, "EchoAPI" for the API, and so on. (Echo is for now just the name of the project, nothing else.)

Categories: technology
Posted by diego on June 25, 2003 at 8:34 AM

IEEE article on overlay networks

Here's an article (PDF, 190KB) I wrote which will appear in the July/August issue of IEEE Internet Computing. (IEEE Copyright Notice: Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.)

As usual, comments welcome!

It's a short introduction to overlay networks and how they compare to "standard" flooding-type P2P networks (i.e., Gnutella-type). Overlays are also discussed in the literature as distributed hash tables, because of the way they allow exact key/value pair mappings to be done over a network, and because they support the basic hashtable operations: put/get/remove. It's written more for developers (or, if you will, for a general audience with technical proficiency) than for researchers (there isn't enough space to go into the subject in depth for that). It's quite something to write with limited space on a subject like this one, which tends to be, err... "mathematical". I end up feeling that not all the possibilities/ambiguities are explained, that in simplifying you risk giving people the wrong idea, and so on. This always happens, on any topic, in any magazine or journal, or even when presenting at a conference (the typical 12- or 15-page limit sounds like a lot -- it isn't). In the end the only way to scratch this particular "completeness" itch is to write a book.
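As a rough illustration of the hashtable analogy (a toy sketch of my own, not code from the article -- it ignores routing hops, replication, and node churn): each node owns a slice of a circular hash space, and put/get/remove reduce to "hash the key, find its owner":

```python
import bisect
import hashlib

def point(s):
    """Map a string onto a circular 32-bit hash space."""
    return int(hashlib.sha1(s.encode("utf-8")).hexdigest(), 16) % (2 ** 32)

class ToyOverlay:
    """Toy DHT: each node owns the arc of the ring that ends at its point."""

    def __init__(self, node_names):
        self.ring = sorted((point(n), n) for n in node_names)
        self.store = {name: {} for name in node_names}

    def owner(self, key):
        # First node whose point is >= the key's point, wrapping around.
        points = [p for p, _ in self.ring]
        i = bisect.bisect_left(points, point(key)) % len(self.ring)
        return self.ring[i][1]

    def put(self, key, value):
        self.store[self.owner(key)][key] = value

    def get(self, key):
        return self.store[self.owner(key)].get(key)

    def remove(self, key):
        self.store[self.owner(key)].pop(key, None)

overlay = ToyOverlay(["node-a", "node-b", "node-c"])
overlay.put("overlay-article.pdf", "IEEE Internet Computing, July/August")
print(overlay.get("overlay-article.pdf"))
```

The contrast with flooding is that nobody broadcasts a query to everyone: any node can compute which node is responsible for a key, so lookups take a bounded number of steps instead of swamping the network.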

It was an interesting experience, spanning several months: initial draft, review, approval... a short period of quiet and then a flurry of activity in the last week or so, when we went from an ugly zero-format Word document (using Word's Track Changes feature to collaborate with the editor) to a nicely finished, final layout version, ready for inclusion in the magazine. The article's editor, Keri Schreiner, was great to work with, and I learned a lot from the process. There are several "editors" involved in the magazine of course -- lead editor, department editor, and so on. When you think about it, it's fascinating: a process that might take, say, six months in total (from idea to camera-ready copy), happening in parallel for a set of articles that will appear in a single issue. It's a mostly top-down process, though, and it got me thinking about how it could be done in a less centralized way, which would improve feedback for all parties. I won't go into this in more detail now, though; I still have to post a follow-up on the decentralized media discussion. :-)

Categories: soft.dev, technology
Posted by diego on June 24, 2003 at 8:40 PM

So long, PocketPC, hello WM2003SPocketPC

Mobitopia

So Microsoft released a new version of PocketPC and changed its name to (gulp!) "Windows Mobile 2003 Software for Pocket PC". Holy cow. Sounds like someone wanted this one to be a Really Important Upgrade. Also, considering that only two revs ago PocketPC was Windows CE, we could conclude that their branding wasn't working. Mobile devices are proliferating in type, and this new rev of MS's handheld OS aims to combine the PocketPC and Smartphone OSes under one brand, among other things. What's notable about this is the implied convergence that Microsoft sees as handhelds acquire phone functions, and vice versa. Note, however, that only the "Windows Mobile 2003 Software" part is generic to both PocketPC and Smartphone. I assume that the new Smartphone release due later this year (which will include support for the .NET Compact Framework, MS's clone of J2ME) will be "Windows Mobile 2003 Software for Smartphone". Nothing new as a concept, of course, but interesting since, for example, Palm/Handspring seem more intent on adding phone features to Palms than on supporting the whole range of form factors under a single "brand umbrella".

Regarding the Microsoft news, however, it's also interesting to note who MS's partners are on this launch, and what products they are announcing. Basically PC makers, and basically handhelds, except for Panasonic, which isn't really announcing anything.

A final note: Dell's announcement (in the article) is scary. Quote:

Dell doesn't plan to introduce a new model, but does plan to upgrade its existing Axim X5 product with the new operating system as well as offer a custom version of McAfee's VirusScan PDA (personal digital assistant) software
We've heard of a few occurrences of TXT viruses and the like before, but until now I hadn't realized the havoc that could be created if you have, say, a hundred million MS-powered phones out there, loaded with personal information, maybe even e-payment information, and... full of security holes. Keep in mind that C#, which Microsoft says is as secure as Java, actually lets you access low-level routines if you want, thereby breaching the protection the VM would normally give you. This article does a good job of explaining the security model of Smartphone, and it can be summarized in two words: Code Signing. Which brings up two other words, the name of other widely deployed software whose security depends on code signing: Internet Explorer. Enough said. It's true that operator-controlled code signing has the potential to be more secure, since the operator could monitor in real time the appearance of malicious software on the network and revoke execute privileges network-wide when that happens (as the article mentions). That assumes, however, that (a) the execution environment itself has no bugs (right...) and (b) the operators will invest in the infrastructure this requires, both in terms of equipment and manpower. Don't hold your breath for that either. Conclusion: unless security is improved, as soon as a critical mass of Smartphones is deployed we can expect the appearance of W32.Klez.Smartphone and similarly named e-vermin.
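To be clear about what code signing buys (and what it doesn't), here's a toy sketch of the execute-only-signed-code check plus network-wide revocation -- my own illustration, using an HMAC as a stand-in for the public-key certificates real systems use:

```python
import hashlib
import hmac

SIGNING_KEY = b"operator-secret"   # stand-in for the operator's signing authority
REVOKED = set()                    # signatures the operator has pulled network-wide

def sign(code):
    """Operator side: produce a signature for a binary."""
    return hmac.new(SIGNING_KEY, code, hashlib.sha256).hexdigest()

def may_execute(code, signature):
    """Device side: allow only properly signed, non-revoked code to run."""
    valid = hmac.compare_digest(sign(code), signature)
    return valid and signature not in REVOKED

app = b"a harmless application"
sig = sign(app)
assert may_execute(app, sig)                  # signed code runs
assert not may_execute(b"tampered app", sig)  # modified code does not

REVOKED.add(sig)                              # operator spots malice...
assert not may_execute(app, sig)              # ...and blocks it everywhere
```

Notice that revocation only helps after malicious code has been spotted, and the whole scheme collapses if the checker itself (the "execution environment" above) has bugs -- which is exactly the worry.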

Categories: technology
Posted by diego on June 24, 2003 at 5:49 PM

decentralize media, but how?

In reference to my post a few days ago on "politics, media... and war", Grant said:

This is kinda where I'm at, with one important question - how does decentralised media get to the broader audience, or the "mob" as Diego refers to them? That's where it gets kinda tricky IMO.
One clarification: when I said "the mob" I wasn't talking about "the broader audience", which usually carries implications of social layers -- class, if you will. Put another way, I wasn't talking about intellectuals and non-intellectuals, involved or not involved, concerned or apathetic, etc. Not that Grant said this, but I think it's within a hair of going in that direction.

In the post I said:

Individuals want depth and objectivity, and if necessary complexity. The mob prefers shallow, subjective, easy-to-understand soundbites.
I didn't say it explicitly, but the mob is composed of exactly those individuals. The mob is dynamic; we are all part of it at some point. We can't really escape it, in a sense. There are mobs of various sizes -- world-sized, country-sized, city-sized, and smaller too. But it's the large ones (those that comprise the elusive "markets") that influence media into pushing A or B or C until we can't take it anymore, because "the public likes it". The mob has a mind of its own, composed of individual trails, actions and interactions. The mob cannot be "reached" by media in the way we understand it; media can only reach the individuals, in ways that make each individual push the mob in a particular direction. "Making the mob move" is what can create a super-hit in movies or music, but it's an indirect process: it implies creating something that affects individuals in a way in which their subsequent actions say something both individually and aligned with the mob. In a sense this mob-concept has more to do with "public opinion" than with "audiences". Mobs are dynamic opinion groups, decentralized and self-organizing, although they can and do react strongly to centralized influences when the conditions are right.

Enough on the mob-concept for the moment. I'm beginning to realize that maybe it's less obvious and more abstract than I thought. I'm sure I'll come back to it at some other point. I think I didn't do a good job of explaining what I mean.

Regarding Grant's specific question ("how does decentralised media get to the broader audience"). If by "broader audience" we mean a particular category of people that rely on mass media and nothing else (which is, note, different from my mob-concept), then it's unlikely that it will happen. But if by "broader audience" we mean mob-concept-units, then my take is that it will happen as media merges, and the web, tv, etc, can be seen through one channel. Google on your TV. Your TV on your PC. The web on your fridge, and so on. So then channel-surfing is web-surfing as well, and game-surfing, and any other kind of info-navigation you could think of. Once that exists people will be exposed to new ideas, since subscribing to an RSS feed will be like switching to channel 34. (the first kind of people I mentioned in this paragraph wouldn't, though, because they rely on particular media channels and they wouldn't watch, say, anything other than NBC even if they were offered the choice, and that is usually mixed with the idea of the "broader audience" IMO). I am a bit optimistic in that I think that given choice people will embrace it, comparatively reducing the power of big media outlets.

If that ever happens though, we'll have to deal with "community loopback" effects in which people only listen to specific information streams from like-minded people, and all society ends up as islands of disconnected ideas and opinion. The myth of the Tower of Babel, Reloaded.

I'm rereading what I just wrote and I'm wondering if it's at all understandable. So many ideas at once. That's another advantage of weblogs: the whole point is that I can try again tomorrow. :)

Categories: technology
Posted by diego on June 22, 2003 at 9:52 PM

IBM, SCO, and UNIX

A couple of interesting articles on IBM/SCO/Linux-UNIX. The first one looks at how competitors (IBM's competitors, that is; SCO barely has any market share, let alone competitors) are trying to leverage the lawsuit to scare customers away from Big Blue. Also on IBM/SCO is this opinion piece from News.com, interesting mostly in that it raises some good questions.

Finally, another Economist article talks about the current IBM strategy, and how it's leveraging its vast R&D to deliver integrated solutions. I saw this first-hand a few years ago when I worked for IBM Research, and even then (when the strategy had been in place for only a couple of years) it was quite impressive to see how current advanced technology, ongoing research, and "old" (i.e., mature) technologies were combined to deliver new solutions. The only problem one could see with this strategy is that it requires an ever-growing number of employees, since it's human-intensive (at least for the moment), something that Bill Gates famously abhors; but as long as IBM maintains its lead in this area this "problem" should remain contained. Besides, with IBM Research being one of the leading nanotech research facilities in the world, it wouldn't be surprising if they ended up owning a very large, very lucrative market in, say, 10 years' time. And then, who'd care about the consultants?

Categories: technology
Posted by diego on June 22, 2003 at 3:12 PM

well formed log entries

A few days ago Sam started a wiki to discuss what constitutes a well-formed log entry. Already lots of good information and discussion in it.

Categories: technology
Posted by diego on June 21, 2003 at 3:12 PM

Google, Rules, and IPOs

Some of the changes that a Google IPO will have to adjust to, when it happens. Interesting, but I'm skeptical that suddenly everyone is going to be playing by "the rules". Looks good on paper though. :)

Categories: technology
Posted by diego on June 21, 2003 at 1:27 PM

rogue code, or just sloppy programming?

News.com reports: Mysterious Net traffic spurs code hunt:

Worm? Trojan? Attack tool? Network administrators and security experts continue to search for the cause of an increasing amount of odd data that has been detected on the Internet.

Categories: technology
Posted by diego on June 21, 2003 at 1:20 PM

waste and encryption

Regarding WASTE (weird -- it happened two weeks ago and it feels like last year's news, doesn't it?), Ray Ozzie has an intriguing take (different from, say, my name-related machinations) on why it had to be pulled: encryption controls, and the license needed for them (or the lack thereof). Entries here and here. Personally, I still think it had more to do with Nullsoft's rebelliousness and with Nullsoft releasing something that AOL didn't want to let out (for whatever reason), but there might be some of those legal considerations involved as well. In any case, something to keep in mind as encryption becomes more pervasive.

Categories: technology
Posted by diego on June 20, 2003 at 8:52 PM

downloading, but not music

Dylan has an interesting take on the whole 'we'll-destroy-your-PC-if-you-download-music' theme. :)

Categories: technology
Posted by diego on June 19, 2003 at 8:27 PM

java moves - an analysis

From Fortune magazine: Sun seeks a boost from stronger Java. Interesting.

Categories: technology
Posted by diego on June 19, 2003 at 7:42 PM

what the...?

I should have a category called "flights of idiocy" for this kind of thing: Senator endorses destroying computers of illegal downloaders:

The chairman of the Senate Judiciary Committee said Tuesday he favors developing new technology to remotely destroy the computers of people who illegally download music from the Internet.

[...]

"I'm interested," Hatch interrupted. He said damaging someone's computer "may be the only way you can teach somebody about copyrights."

The senator acknowledged Congress would have to enact an exemption for copyright owners from liability for damaging computers. He endorsed technology that would twice warn a computer user about illegal online behavior, "then destroy their computer."

"If we can find some way to do this without destroying their machines, we'd be interested in hearing about that," Hatch said. "If that's the only way, then I'm all for destroying their machines. If you have a few hundred thousand of those, I think people would realize" the seriousness of their actions, he said.

Do these "lawmakers" even use computers? Do they realize the kind of damage that "destroying the computer" could do to a person's life, just to "teach them about copyrights"? Assuming it was a "teaching" problem (which it isn't), why not improve the famously flawed education system instead? Okay, I'll stop. This kind of pronouncement doesn't even qualify for a rebuttal. (On the other hand, it should be rebutted; otherwise they will pass this legislation.) Hopefully Europe's lawmakers won't get any brilliant ideas like this any time soon.

Categories: technology
Posted by diego on June 18, 2003 at 11:27 AM

blogtalk photos

Haiko has posted some really cool pictures of the Blogtalk conference -- and cool pictures of Vienna too!

Categories: technology
Posted by diego on June 18, 2003 at 1:06 AM

steve gillmor on convergence... and windows

[via Scripting News] Steve Gillmor: "The Jim Allchin Tax", or how the Windows franchise pulls back development of other products, in this case the next generation of communication/collaboration tools (this article is part three, parts one and two are here and here). Good summary of a subject that has taken more than one book (of which "Breaking Windows" is one of the best I've read). A number of familiar ideas in the article, too. :-)

Categories: technology
Posted by diego on June 16, 2003 at 10:05 AM

private space travel

A new way to escape the gloom of the IT industry. Go into space. Tiny requirement (at the moment at least): a bank account with a lot of zeros west of the decimal point. :-)

Categories: technology
Posted by diego on June 15, 2003 at 11:43 PM

javaone wrapup

News.com: an article on Scott McNealy's keynote and some of the announcements made at the conference. Especially interesting is the mobile stuff, and the HP/Dell announcement that they will ship updated JVMs with all their machines. Finally!

Categories: technology
Posted by diego on June 15, 2003 at 2:24 AM

surprise, surprise

Microsoft has announced that it will halt development of IE for the Mac. Yikes. Who knows how many thousands of developers are left out in the cold by this move. I first read it in a WSJ article today, but here's a News.com article on it from yesterday, and the predictable Slashdot thread. Interestingly, Microsoft justifies the move by pointing to Apple's Safari browser, but when Apple announced Safari it was clear that they had started working on it precisely in case IE for Mac was killed (something that became a more concrete prospect with the "unofficial" news that Microsoft would stop developing IE as a separate product for Windows).

Categories: technology
Posted by diego on June 14, 2003 at 2:25 PM

sun's challenges

A good Wired article on the challenges facing Sun.

Categories: technology
Posted by diego on June 13, 2003 at 8:52 PM

oracle, peoplesoft... and microsoft

An analysis and opinion piece on the Oracle bid for Peoplesoft, and why this is more about Oracle v. Microsoft than about Oracle v. JD Edwards:

The headlines from the last week shouting about the rivalry between Larry Ellison and Craig Conway overshadowed the real subtext to the struggle for PeopleSoft: the coming competition between Oracle and Microsoft for who's going to be No. 1 with enterprise customers.
In the Wall Street Journal there has been some interesting analysis (some links here and here, reg. required) of the PeopleSoft-Oracle deal, particularly in how hostile takeovers have, until this case, been avoided in Silicon Valley for fear of the technical talent "fleeing" the company afterwards. That this takeover offer has been made at all signals a shift in perception, at least inside Oracle: either the talent won't flee, or, if it does, it doesn't really matter -- in both cases because the (at least enterprise) tech market is reaching a different stage. Interesting.

Categories: technology
Posted by diego on June 13, 2003 at 5:59 PM

easyjournal

EasyJournal, another weblog tool, with a seemingly mysterious "viral" feature, which seems to be the relations between you (the weblog publisher) and your friends. As far as I can see, it's similar to what LiveJournal does with its "friends" feature, albeit done in a slightly different way...
Categories: technology
Posted by diego on June 12, 2003 at 11:17 AM

moore's law: the facts, not the legend

A News.com article on Moore's law, and how it is often misunderstood.

Categories: technology
Posted by diego on June 12, 2003 at 10:34 AM

blogstreet post analysis

Veer sent me a link today over IM: a new feature they've launched for Blogstreet called Blogstreet Post Analysis. Blogstreet is already a "blog crawler" that finds correlations between blogs, but BPA breaks down those connections to the level of individual posts, and that finer-grained view will clearly be more useful when doing analysis (and then, potentially, search). Very cool.

Categories: technology
Posted by diego on June 10, 2003 at 7:39 PM

new java logo

Russ posted a screenshot of the new Java logo. Nice!

Categories: technology
Posted by diego on June 9, 2003 at 2:10 AM

the rise of dell

An interesting story from News.com on Dell, and how it became the biggest vendor of PCs. Quote:

In 1994, Dell was a struggling second-tier PC maker. Like other PC makers, Dell ordered its components in advance and carried a large amount of component inventory. If its forecasts were wrong, Dell had major write-downs.

Then Dell began to implement a new business model. Its operations had always featured a build-to-order process with direct sales to customers, but Dell took a series of ingenious steps to eliminate its inventories. The results were spectacular.

The story doesn't completely oversimplify what happened, which is good. (However, in these accounts, I'm always amazed that they completely ignore the errors of a company's competitors, which play a key role in winning anything.) Interesting.

Categories: technology
Posted by diego on June 9, 2003 at 1:59 AM

the strange complaint that wouldn't die

Andrew Orlowski seems to be bent on "proving" that weblogs are bad for Google and that they should somehow be removed from the main index. Ridiculous. Forget about the obvious technical problems in defining what a weblog is from the POV of a bot (for example, if a company updates information frequently on a 'news' page, is that a weblog? And if so, how would Google know? Because it has an RSS feed? Would that remove the NYT or News.com too?)

Weblogs are what makes the web what it is today. If you remove weblogs from Google, you remove the web. Think about it. We'd end up with the same static environment we had five years ago. Google indexes the web as it is today; if the web today is dynamic because of weblogs, then that's what the index should reflect. The "fact" that weblogs are "polluting" the rest of the results owes a lot to "respected" news sources keeping their archives closed. Furthermore, webloggers know how to get around that when they have to; and besides, who's to say when a result is the right result? Isn't that completely context-dependent? What the query "mobile phone" means for me might be diametrically different from what it means for someone else. Who's to say who is an authority on something, after all, particularly on topics that are new and on which there isn't a lot of information? Weblogs give you timely, updated information on things that (often) mainstream media doesn't care about. Why is that a bad thing again? And why shouldn't it be at the top of Google? Isn't the context of the search what defines when a result is correct? If anything, we have to go further in providing contextual, "semantic" search, and I'm sure Google and the other search engines are working on that.

Dave said: if you want to be on Google, you gotta be on the web (more info here). Dave's opinions are sometimes contentious, but I think everyone would agree that on this one he is absolutely, 100% on the mark. Weblogs are part of the web. There isn't any way around that. And if you don't like it, well, fine, get a weblog and complain about it. :-)

Categories: technology
Posted by diego on June 8, 2003 at 11:46 PM

an interview with bruce sterling

Sterling's fiction sometimes doesn't quite convince me, but his commentary is always excellent. Here is an interview with him about his new book "Tomorrow Now: Envisioning the Next Fifty Years". Quote:

Q: But you're not anticipating what David Brin would call a transparent society?
A: David thinks this is great. David is a technological determinist. He thinks that we understand the trend and we need to hop on it. I don't have any such illusions. Just because it's the space age, it doesn't mean we're all going to end up in space.

Just because it's the atom age, it doesn't mean we'll all have a private atom-powered helicopter. Just because it's the information age, it doesn't mean we're all going to profit or be made happier. It has secondary and tertiary effects that cannot be predicted. You don't envision a phone answering machine and predict the Lewinsky scandal--even though one is impossible without the other.

Categories: technology
Posted by diego on June 7, 2003 at 1:49 PM

blogging APIs update

It's been a little over a month since I wrote up my review of blogging APIs, which caused a bit of a stir in blogging circles (not necessarily because of its contents, but because it revived a subject that has been contentious for a while). I wrote a couple of followups (here, here, here and here) which contain links to comments and information from others in the community.

I decided to wait for a bit before commenting further, and now it's time :).

A couple of weeks ago Timothy posted some good comments on his O'Reilly weblog and that resulted in further discussion with Dave.

The discussion expanded to include some of the limitations of the XML-RPC spec, in particular those that deal with internationalization. Dave's contention is that because the API is so widely deployed it can't be changed, and that people have found ways of doing i18n on their own anyway. In my opinion, however, it would be extremely useful to formalize those workarounds, for the simple reason that if anyone who wants to use i18n has to find a workaround on their own, it is extremely likely that they will end up with similar but slightly incompatible versions of the same thing (pretty much the state of blogging APIs at the moment). The same applies to any limitation, not just i18n, but i18n is a clear problem that will grow as people from other parts of the world start using these tools (and eventually they will outnumber English-speaking (or should I say ASCII-speaking?) users).

Case in point: at the moment the weblog posting feature of clevercactus implements the Blogger and MetaWeblog APIs, along with a couple of MovableType API extensions for dealing with categories. For both Radio and MT I can use the same MetaWeblog method call to create posts. But I'm not dealing properly with i18n, which MT supports. So now I will be forced to completely splinter the implementations, and the slight advantage I could see in implementing MetaWeblog for both MT and Radio vanishes; cactus will be better off using each tool's specific API. Which brings us back to the need for a single API that is extensible and, preferably, transport-independent (i.e., an API defined without relying on XML-RPC, SOAP, REST, or anything else, with several "transport specifications" written for it). After all, we are trying to define the what that is transferred, not the how.
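The appeal of that shared method call is easy to show. A rough sketch using Python's standard XML-RPC library (the blog ID, credentials, and struct fields here are placeholders; the exact struct members a given server honors vary):

```python
# Build a metaWeblog.newPost request body without actually sending it,
# to show that the same call works against any MetaWeblog-speaking server.
import xmlrpc.client

post = {"title": "hello", "description": "first post"}

payload = xmlrpc.client.dumps(
    ("1", "user", "secret", post, True),  # blogid, username, password, struct, publish
    methodname="metaWeblog.newPost",
)

print("metaWeblog.newPost" in payload)  # True
# Sending it would be one line per server, whatever the tool behind the URL:
#   xmlrpc.client.ServerProxy(url).metaWeblog.newPost("1", "user", "secret", post, True)
```

One request format, N servers — until a server-specific concern like i18n forces per-tool code anyway, which is the whole point above.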

The same applies to RSS, on which there has been some movement. Dave proposed that work should start on a common profile, Sam agreed, good discussion followed on his post, and then it went on from there. (Timothy has a good summary of the RSS core profile here.) It seems to be easier to reach agreement on RSS than on the APIs, which gets back to my point: RSS is a data-format specification, while the APIs mix both data and transport. If data and transport could be separated in the API spec, I think it would be easier to move it forward.
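A minimal sketch of the separation argued for above: define the *what* (the post) once, independent of transport, and keep each *how* as a thin serialization layer. All the names here (`Post`, `weblog.createPost`, the field names) are illustrative inventions, not part of any real spec:

```python
# Separate the data definition from its transport bindings.
import json
import xmlrpc.client
from dataclasses import dataclass

@dataclass
class Post:
    """The transport-independent data definition: the 'what'."""
    title: str
    body: str

def as_xmlrpc(post: Post) -> str:
    # One 'how': an XML-RPC method call (hypothetical method name).
    return xmlrpc.client.dumps(
        ({"title": post.title, "body": post.body},),
        methodname="weblog.createPost",
    )

def as_json(post: Post) -> str:
    # Another 'how': a JSON body for a REST-style POST.
    return json.dumps({"title": post.title, "body": post.body})

p = Post("hello", "first post")
# Same data, two wire formats; the definition of a post never changed.
```

Add a new transport and nothing about the data model moves — which is why agreeing on the data part alone (as with RSS) is the easier negotiation.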

Categories: technology
Posted by diego on June 5, 2003 at 3:09 PM

microsoft moves

A couple of interesting news items: a more detailed analysis of the recent "announcement" (in quotes because it was entirely unofficial) that Microsoft is dumping IE as a standalone product, and increased noises of "cooperation" between AOL and Microsoft on the IM side of things after their recent settlement.

Categories: technology
Posted by diego on June 4, 2003 at 7:49 PM

palm buys handspring!

Mobitopia

Wow! Surprising in its timing (if not in content; I guess it had always been a possibility) because of the relatively weak strategic positions of both companies at the moment. It was announced early today, before the beginning of trading. Coverage from News.com and CNNmoney. It makes sense: Handspring has been moving further out than Palm into the area of "communicators", leaving "pure" handheld organizers behind, so at the moment the product lines of both companies complement each other quite well. Whether the handheld-organizer-only side will survive the current trend of integrating communication (data+voice) functionality with organizer functionality is another matter.

Categories: technology
Posted by diego on June 4, 2003 at 2:19 PM

grade school CMS lessons

A presentation by Jon Udell given at OSCOM 2003 (here is his entry linking to it, and a follow up). Great stuff.

Categories: technology
Posted by diego on June 4, 2003 at 1:31 AM

SCO v. Linux cont'd

This Salon article has some good comment and analysis on the SCO-Linux lawsuit. More speculation about why SCO would do this (along the much-discussed lines of "they're desperate" and/or "they want to force IBM into acquiring them"), but mainly background on how the lawsuit came about and opinions from others in the community.

Categories: technology
Posted by diego on June 3, 2003 at 1:47 PM

4G coming up

Mobitopia

The Economist has an interesting article this week on "4G":

Even as “third-generation” (3G) mobile networks are being switched on around the world, a couple of years later than planned, attention is shifting to what comes next: a group of newer technologies that are, inevitably, being called 4G. More hubris from the technology-obsessed industry? Not exactly. Some 4G networks are operating already, with more on the way. A technology once expected to appear around 2005 is here now.
While, as opposed to 3G, there is no clear definition of what 4G is (as the article correctly notes), in general I've always heard 4G described in the context of mobile service (mobile in general, not just mobile phones) that runs on a data backbone, with voice treated as just another kind of data and transported as data end-to-end. Essentially, an IP backbone replacing the operators' voice backbone, which, besides making a lot of sense (especially considering that after the bubble burst there is a lot of backbone capacity just waiting to be used), should also prove more cost-effective, hopefully resulting in lower charges to end users (note the mention in the article of flat-rate pricing).

All things considered, it makes a lot of sense for '4G' to win over 3G, particularly if the operators can use the same spectrum licenses that they obtained for 3G. Eventually there should be enough choice that we can use cheaper (and possibly faster) connections when possible and let connection speed/quality degrade gracefully as the infrastructure thins out (say, as you move out of the city).

It's happening!

Categories: technology
Posted by diego on June 1, 2003 at 12:38 AM

nullsoft does it again

Just as with Gnutella, Nullsoft released something that AOL pulled, but not before the code got into the wild. Heh. I think now I understand the name. I can imagine the Nullsoft guys with this thing working and AOL totally oblivious to it, maybe even letting it go to...

yes, exactly. :)

And the name has the added advantage of having a literary reference: WASTE was the name of the secret communications network (based on the US postal system) in Pynchon's classic The Crying of Lot 49. This might be the main reason for the name, but the other one (keeping with Pynchon's conspiracy-theory style) sounds more appealing. :-)

Categories: technology
Posted by diego on May 31, 2003 at 1:26 AM

more on Microsoft-AOL

Regarding Thursday's news on the Microsoft-AOL settlement, here's an interesting article from the Washington Post with more background and opinion on the deal.

I wonder: could it be that Microsoft will attempt to buy AOL? Of course, most in the industry would oppose it, but at the moment both the FCC and the Justice Department would most likely be unconcerned (the European Antitrust Commission would be another matter). Steve Case is all but gone, and those in charge of AOL-Time Warner have been itching to "undo" the merger. Mmmm...

Categories: technology
Posted by diego on May 31, 2003 at 1:17 AM

microsoft competes... with itself

Why would Microsoft lower the price of Office? How about: to compete with its own offerings. In other MS-related news, Microsoft and AOL have settled their antitrust suit for $750 million (that's what cash hoards are good for). Of course, Netscape is likely to be the first loser in all this, just as MS sets its sights on Apple (again).

This is, IMO, an incredibly short-sighted move on the part of AOL. They get IE free for seven years. And after that? What if uncle Bill cuts off the IE/Media Player spigot? Microsoft knows how to think long term. And they can afford to wait.

Not smart, AOL. Not smart at all.

Categories: technology
Posted by diego on May 30, 2003 at 12:40 AM

waste: new P2P collaboration software

Jim IM'ed me a link to this software: waste, from Nullsoft (the creators of Gnutella). Not a new concept, and the setup is a bit more tricky than it should be, but interesting nevertheless. There's also a slashdot thread on it.

Categories: technology
Posted by diego on May 29, 2003 at 2:21 PM

little poor AOL

AOL is lobbying to be exempted from the Instant Messaging restrictions placed on it back when it merged with Time Warner. It'll probably win: Michael Powell, currently FCC chairman, wrote in support of AOL when the initial decision was handed down (talk about good friends on the inside). AOL, meanwhile, is allowed to keep repeating ridiculous statements like "interoperability is not possible" when there are programs like Trillian that (somehow, probably through dark magic) connect not just to AIM, but to MSN and Yahoo too.

Categories: technology
Posted by diego on May 27, 2003 at 11:24 PM

digitalIDworld

I found this really cool site today: DigitalIDWorld (one of the main contributors is Eric, unless I'm mistaken). It has a ton of interesting articles/posts on digital identity, such as this one, discussing Bill Gates' comments on the subject.

Categories: technology
Posted by diego on May 27, 2003 at 2:38 PM

gibson on media and technology

If you have a few minutes check out this incredibly good speech by William Gibson to the Directors Guild of America. Quote:

What we call “media” were originally called “mass media”: technologies allowing the replication of passive experience. As a novelist, I work in the oldest mass medium, the printed word. The book has been largely unchanged for centuries. Working in language expressed as a system of marks on a surface, I can induce extremely complex experiences, but only in an audience elaborately educated to experience this. This platform still possesses certain inherent advantages. I can, for instance, render interiority of character with an ease and specificity denied to a screenwriter. But my audience must be literate, must know what prose fiction is and understand how one accesses it. This requires a complexly cultural education, and a certain socio-economic basis. Not everyone is afforded the luxury of such an education.

But I remember being taken to my first film, either a Disney animation or a Disney nature documentary (I can’t recall which I saw first) and being overwhelmed by the steep yet almost instantaneous learning curve: in that hour, I learned to watch film. Was taught, in effect, by the film itself. I was years away from being able to read my first novel, and would need a lot of pedagogy, to do that. But film itself taught me, in the dark, to view it. I remember it as a sort of violence done to me, as full of terror as it was of delight. But when I emerged from that theater, I knew how to watch film.

What had happened to me was historically the result of an immensely complex technological evolution, encompassing optics, mechanics, photography, audio recording, and much else. Whatever film it was that I first watched, other people around the world were also watching, having approximately the same experience in terms of sensory input. And that film no doubt survives today, in Disney’s back-catalog, as an experience that can still be accessed.

[...]

Much of history has been, often to an unrecognized degree, technologically driven. From the extinction of North America’s mega-fauna to the current geopolitical significance of the Middle East, technology has driven change. (That’s spear-hunting technology for the mega-fauna and the internal-combustion engine for the Middle East, by the way.) Very seldom do nations legislate the emergence of new technologies.

The Internet, an unprecedented driver of change, was a complete accident, and that seems more often the way of things. The Internet is the result of the unlikely marriage of a DARPA project and the nascent industry of desktop computing. Had nations better understood the potential of the Internet, I suspect they might well have strangled it in its cradle. Emergent technology is, by its very nature, out of control, and leads to unpredictable outcomes.

As indeed does the emergent realm of the digital. I prefer to view this not as the advent of some new and extraordinary weirdness, but as part of the ongoing manifestation of some very ancient and extraordinary weirdness: our gradual spinning of a sort of extended prosthetic mass nervous-system, out of some urge that was present around the cooking-fires of our earliest human ancestors.

He makes a point near the end that I (surprisingly) disagree with:
I imagine that one of the things our great-grandchildren will find quaintest about us is how we had all these different, function-specific devices. Their fridges will remind them of appointments and the trunks of their cars will, if need be, keep the groceries from thawing. The environment itself will be smart, rather than various function-specific nodes scattered through it. Genuinely ubiquitous computing spreads like warm Vaseline. Genuinely evolved interfaces are transparent, so transparent as to be invisible.
I don't think we are entering an era of fewer function-specific devices. Rather, we are entering an era of even more "specificity", but with greater interoperability. The thing on the fridge's door that reminds you of an appointment doesn't come built into the fridge; it's a PDA fridge magnet (let's ignore the problems of magnetic fields on electronics for a minute). And you can take it off and carry it around. Or put it in the car, and it always stays synced with your main data store, which is a black box hidden somewhere, subwoofer-style.

Function-specific, transparent, pervasive, energy-efficient, data-aware, people-aware, context-aware, portable (if possible), fully interoperable. That's, IMO, where things are going.

Categories: technology
Posted by diego on May 27, 2003 at 1:01 PM

it's the people, stupid

Clay Shirky on Social Structure in Social Software. Really interesting discussion. Quote:

Group rules like Robert's Rules of Order protect groups from falling into these patterns and thus protect the group from itself. Clay gives an example of BBS systems in the 1970's that started out as "open access" and "freedom of speech" were "overrun" by teenage boys who wanted to talk about bathroom jokes, sex, etc. The group didn't have enough structure to fend off these "attacks" on the group. This was a social issue, not a technological issue. "An attack from within is the pattern that matters."
If 'social software' is to be anything except a buzzword, we'll need to create tools that let the group "defend" against these kinds of "attacks" on its social structure. And easily, not by requiring the user to press Ctrl+Alt+W+X+F1 while standing on one foot and looking west. Spam is the same kind of threat to e-mail (except that email is a world-wide, decentralized social network).

And speaking of spam: the amount of spam I've been getting this past week or so has been astonishing, which has led me to give even more thought to good spam filtering in clevercactus. I'm getting something like 150 messages a day. Okay, maybe half of those are worms (that new "support@microsoft.com" worm that I started seeing about a week ago). Since I'm not the only one, there must be something else going on... some kind of quarterly release of new spam lists to these lowlifes (or is that "lowlives"?) that have nothing better to do than destroy our ability to work by hijacking our communication channels for idiotic offers for viagra and such crap. Scott had a good entry on the subject last month. And make no mistake: as with social software, this is not a technology problem, it's a people problem. Or a people "feature," depending on your viewpoint. :)

Categories: technology
Posted by diego on May 27, 2003 at 12:56 PM

bloggers meeting in dublin

Karlin organized an Irish bloggers meeting:

anyone in Ireland who blogs, any bloggers from elsewhere who happen to be around, and anyone who is interested in blogging but doesn't do it yet is welcome to join an informal (can you say, 'pub'?) gathering this coming Tuesday, June 3rd. We will meet from 8pm upstairs in the Library Bar in the Central Hotel on Exchequer Street (yes, you know it, the hotel just up from the Sth. Georges Street arcade, as you turn off down Exchequer Street).

Come earlier rather than later -- try not to do that Irish thing of arriving half an hour before closing time! And please pass the word.

'Pass the word' in this case could also be called 'spread the link' :).

Categories: technology
Posted by diego on May 27, 2003 at 10:19 AM

from playstation to supercomputer

A group at the NCSA has built a supercomputer using PlayStation 2s and Linux clustering technology for $50,000. Cool.

Categories: technology
Posted by diego on May 26, 2003 at 11:08 PM

the Economist on Ethernet

The Economist this week has a good article on the history of Ethernet and what made it outlast "contenders" such as Token Ring, or even ATM:

The first reason is simplicity. Ethernet never presupposed what sort of medium the data would travel over, be it coaxial cable or radio waves (hence the term “ether” to describe some undefined path). That made it flexible, able to incorporate improvements without challenging its fundamental design.

Second, it rapidly became an open standard at a time when most data-networking protocols were proprietary. That openness has made for a better business model. It enabled a horde of engineers from around the world to improve the technology as they competed to build inter-operable products. That competition lowered the price. What is more, the open standard meant that engineers in different organisations had to agree with each other on revised specifications, in order to avoid being cut out of the game. This ensured that the technology never became too complex or over-designed. [...] That, coupled with the economies of scale that come from being the entrenched technology, meant that Ethernet was faster, less expensive and less complicated to deploy than rival systems.

Third, Ethernet is based on decentralisation. It lets smart “end-devices”, such as PCs, do the work of plucking the data out of the ether, rather than relying on a central unit to control the way those data are routed. In this way, Ethernet evolved in tandem with improvements in computing power—a factor that was largely overlooked by both critics and proponents when Ethernet was being pooh-poohed in the 1980s and early 1990s.

Beyond the technology, there is even a lesson for companies investing in research, albeit one learned through tears rather than triumph. Xerox failed to commercialise Ethernet, as it similarly missed exploiting other inventions created at PARC, such as the mouse and the graphical user interface. To develop Ethernet fully, Dr Metcalfe had to leave PARC and found 3Com, now a big telecommunications-component firm. The lesson may have sunk in. In January 2002 PARC was carved out as an independent subsidiary of Xerox. That allows it to explore partnerships, spin-offs and licensing agreements without having to get its parent's permission.

So true. I have the feeling the lessons of Ethernet will become more relevant as time passes and we move more and more into a fully decentralized world.

Categories: technology
Posted by diego on May 26, 2003 at 12:17 AM

moving mountains for fun and profit

Ole reviews How Would You Move Mount Fuji?, a book on interview techniques that focuses in particular (although not exclusively) on Microsoft and its famed interview questions. Great review.

Categories: technology
Posted by diego on May 26, 2003 at 12:09 AM

weblogs and web services (or blogging APIs cont.)

Timothy Appnel on Weblogs, Web Services and the Future: a piece on blogging APIs, RSS, and web services that touches on some issues that were not considered at length in the original thread.

Categories: technology
Posted by diego on May 25, 2003 at 12:12 PM

autoblogging?

Remember CueCat? Microsoft Research is using it for something they call autoblogging. Interesting.

Categories: technology
Posted by diego on May 24, 2003 at 3:22 PM

more about Nextel

In a comment to my previous post on recent moves by Nextel, Juan Cruz corrected me regarding the workings of the Nextel system. He said:

Diego, in Nextel's PTT system, the message is compressed, assembled as a series of packets, and then sent to the (cell) network, that routes it to the proper handheld.

The trick here (from the cost perspective) is that when you send a message it usually uses the network for a couple of seconds only, and because the message goes one way, there is no need to ACK that the final recipient got it; the terminal must only make sure that the local cell has it.

It's basically a store-and-forward system.

In Argentina (and several other countries as well) it's already a cell-phone operator. The only country where it does not have a decent market as a cell-phone op is the US.

The company is split in two branches (in the US): Nextel and Nextel International. The former handles local business and the latter only international branches.

And Jim added:
Juan Cruz is spot on, I really didn't grasp the idea of PTT at first, but if you think of it as a form of VOIP which seems to be what FasTxt are doing with their app on Symbian phones and using GSM/GPRS as the carrier, you can see some of the advantages.

This way you only have GPRS costs for the data your phone sends and receives rather than full minute by minute pricing for a long call.

Thanks for the clarification! Obviously I got carried away by my own ideas about how these things should work :) Interesting to know that what I'm talking about (direct phone-to-phone connections when possible, something akin to WiFi in Ad Hoc mode between two nodes) doesn't even exist (at least not for cell phones :)).
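For my own benefit, here is a toy sketch of the store-and-forward property Juan Cruz describes (my reading of his comment, not Nextel's actual protocol): the sender's job ends once the local cell has the message, and delivery happens later, with no end-to-end ACK back to the sender.

```python
# Toy store-and-forward model: local ACK only, delivery deferred.
from collections import deque

class Cell:
    """The local cell: accepts messages immediately, forwards them later."""
    def __init__(self):
        self.queue = deque()

    def accept(self, msg):
        self.queue.append(msg)
        return "ack"  # local ACK -- this is all the sender ever waits for

    def forward_all(self, network):
        # Out-of-band: push queued messages toward their recipients.
        while self.queue:
            network.deliver(self.queue.popleft())

class Network:
    """Stand-in for the rest of the routing path."""
    def __init__(self):
        self.delivered = []

    def deliver(self, msg):
        self.delivered.append(msg)

cell, net = Cell(), Network()
cell.accept({"to": "handset-42", "audio": b"..."})  # sender is done here
cell.forward_all(net)  # delivery happens later; no ACK reaches the sender
```

The cost advantage falls out of the model: the sender's radio is busy only for the `accept` step, not for the whole round trip to the recipient.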

Categories: technology
Posted by diego on May 23, 2003 at 5:29 PM

studying for profit

From the New York Times: Computing's lost allure.

<rant>
Why is this a problem? Were these people getting in just because there was the promise of easy money? You need a financial bubble to like computer science? You should go to college to learn, and to do what you like. If the situation in the stock market (or even the job market) makes you change your mind, I say: good riddance.
</rant>

Categories: technology
Posted by diego on May 23, 2003 at 12:12 PM

looking back to see ahead

Kevin Werbach on the Post-PC, Post-Web world. Quote:

Sure, computers will become more powerful, data networks will become faster, and storage devices will become capable of holding infinitely more data. That's just the background story. Getting vast numbers of individuals and businesses online was an extraordinary achievement, but except for certain underserved communities, it's largely done.

What matters today is using all that connected power, and the standards-based software environment that rides on it, in productive ways. From enabling distributed work teams in companies to collaborate on projects to giving people rich interactive experiences that travel across different hardware and connections, these are tasks we could only think about tackling once the foundations were laid.

While we were all focused on the dot-com bubble and the subsequent bust, "yesterday" shifted.
Smart companies understand this change. IBM, you will notice, is no longer touting e-business, its code word for the Web. It has shifted its energy to next-generation developments such as Linux, grid computing and autonomic computing. Microsoft is pouring resources into post-PC and post-Web businesses, understanding that it must make significant long-term bets to prepare for the day when its traditional Windows cash cow disappears. Dell Computer is even going so far as to remove "Computer" from its name. Apple Computer is rapidly moving from an emphasis on easy Internet access to "digital lifestyle" offerings such as photo sharing and music downloads. And America Online, though struggling, knows that it needs to change from the company that gets you on the Web to the company that gets you beyond the Web.

Excellent article!

And speaking of looking back, Bob Metcalfe on 30 years of Ethernet, and Philip Greenspun comparing the evolution of wireless in the US and Europe to the development of trade barriers.

Categories: technology
Posted by diego on May 23, 2003 at 10:16 AM

more on SCO v. Linux

A couple of days ago Microsoft announced that it was going to license SCO's intellectual property. Given that Microsoft has as much use for UNIX intellectual property as an airplane has for an extra set of tires in the trunk, it was clear that they were doing it simply to "support" SCO's lawsuit against IBM (and its threat of lawsuits against Linux users). Then open-source advocate Bruce Perens replied in this opinion piece on News.com. A good read overall. It doesn't seem that SCO can go anywhere with this (especially considering Perens' many arguments against the case), but you never know... it's crazy that this is happening, but I guess that if people can try to sue McDonald's for making them overweight, anything's possible...

Categories: technology
Posted by diego on May 21, 2003 at 11:34 PM

'push to talk' goes global

Mobitopia

News.com has an article on the latest moves by Nextel in introducing a more "global" version of its popular "push-to-talk" feature:

Nextel Communications is exploring whether to offer a global version of its popular "push to talk" walkie-talkie feature for cell phones, as rivals work on their own U.S.-centric versions of the technology.
Now, if the call is suddenly being relayed through a satellite or a cell, that makes it less of a direct connection between phones, no? It seems that Nextel is quietly moving into the territory of cell-phone operators. In the end, hybrid systems like these are what will win in the marketplace, IMO. If I can make a direct call to a person in another building around the corner, why shouldn't I? And then, when the receiver is out of direct reach, the infrastructure should be used. It's just common sense.

Categories: technology
Posted by diego on May 21, 2003 at 5:00 PM

browsers galore

I switched to Mozilla in August last year and never looked back. (Although, rarely, I have to use IE to look at some site that doesn't behave properly in Mozilla.) Between tabbed browsing and the ability to control what JavaScript does on your machine, using Mozilla is a no-brainer. It is just better. I've recently tried out Opera 7.x and Mozilla Firebird as well. Opera is very good, but a few things are a problem. For one, its management of tabs feels a bit strange (at least to me, being used to the Mozilla way), but more importantly it has serious issues when scaling an HTML page (changing the text size, I mean). Both Mozilla and Firebird handle that perfectly, but Opera seems to keep some of the original CSS settings, and the letters overlap each other, making the text unreadable and the feature unusable. Too bad.

Firebird, on the other hand, is great. It's what I'm using right now; it seems stable and lighter than Mozilla, with a less cluttered interface (although we'll see how long that lasts). And it's fast! XUL is very good, apparently. Firebird seems like a good candidate to replace Mozilla at the moment.

I've also tried out IE on the Mac (nice, but a resource hog) and Safari, which I like quite a lot. Very simple and fast. The Mac I have is not fast enough to do work on (except finishing clevercactus builds--and believe me, I've tried). Not enough memory or speed. But Safari would probably be my choice if I were using the Mac all the time (maybe I'll give Firebird on the Mac a try later, even though Mac support for Firebird is preliminary).

Since IE won "the browser wars" there has been zero innovation, particularly in IE itself (for example, how come IE doesn't have tabs yet?). But that's to be expected, I guess. It's great to see that things are still happening (and getting better) in the browser world, even if it's at the edges.

Categories: technology
Posted by diego on May 21, 2003 at 2:49 PM

big brother's big brother

So what else is new...

[via Wired News]:

It's a memory aid! A robotic assistant! An epidemic detector! An all-seeing, ultra-intrusive spying program!

The Pentagon is about to embark on a stunningly ambitious research project designed to gather every conceivable bit of information about a person's life, index all the information and make it searchable.

[...]

While the parameters of the project have not yet been determined, [...] LifeLog could go far beyond TIA's scope, adding physical information (like how we feel) and media data (like what we read) to this transactional data.

Later: Another piece of surveillance news, courtesy of Maureen Dowd at the New York Times.

Categories: technology
Posted by diego on May 21, 2003 at 1:08 AM

googling your weblog

Dave Winer has integrated a demo of his new software into his weblog. The software integrates weblogs and Google search in a new way. Nice!

Categories: technology
Posted by diego on May 20, 2003 at 11:08 PM

mailing list migration and worms

Two seemingly unrelated topics? Yes. Definitely. Anyway... :)

I'm now in the process of migrating the old spaces-info and spaces-dev lists to clevercactus-info and clevercactus-dev respectively... I've found this document that explains how to do it, but it's a major pain. It's not necessarily mailman's fault; seamless migration is always an issue (although mailman could make it easier...). Anyway, in the meantime I keep receiving copies of the new Outlook worm that is making the rounds. Good thing cactus is immune to it :). But it's really annoying: I've received more than a hundred copies of it over the weekend. Hopefully it will die down soon.

Categories: technology
Posted by diego on May 19, 2003 at 1:36 PM

from startup to crime... in internet time

A strange story involving the Mafia, with those old-fashioned squeezes for "protection" (which sounds really strange for an Internet business, but anyway...), and an entrepreneur who apparently turned to doing the same to other companies over the Internet to solve his own problems.

Categories: technology
Posted by diego on May 19, 2003 at 1:04 AM

more on social software

An interesting article by Stowe Boyd: Are you ready for social software?:

It's the opposite of project-oriented collaboration tools that place people into groups. Social software supports the desire of individuals to be pulled into groups to achieve goals. And it's coming your way.

Categories: technology
Posted by diego on May 19, 2003 at 12:58 AM

T-Mobile drops MS phone software

Mobitopia

Another setback for Microsoft: T-Mobile has announced it is dropping plans for an MS-based phone:

Europe's second largest mobile phone operator T-Mobile International said on Thursday it had shelved plans to introduce a mobile phone powered by Microsoft software. "We have decided not to introduce this phone," a spokesman for T-Mobile told Reuters at the fringes of a Deutsche Telekom news conference. "For the time being, we are not pursuing this project further."

T-Mobile announced in February that it planned to introduce the Microsoft phone this summer in a move analysts saw as a blow to mobile handset industry leader Nokia. But industry sources said the phone software of the world's largest software maker still had "fundamental problems" leading to high failure rates.

Wow. That's worded quite strongly, isn't it? However, we shouldn't forget that Microsoft usually doesn't come up with a good platform product until the third try (there are many examples of this: Windows, PocketPC...). Still, I have to wonder how much of this is due to "fundamental problems" and how much is due to the phone operators trying to keep Microsoft at bay...

Update: Martin clued me in a bit after I posted this that T-Mobile has apparently reversed this stance. (Here is a post from Gizmodo, where they went through the same report-clarification cycle.) Hm. Strange...

Categories: technology
Posted by diego on May 18, 2003 at 3:41 PM

second acts

Charles Cooper on strategic makeovers by technology companies, with two main examples: Borland and Handspring. Interesting read.

Categories: technology
Posted by diego on May 18, 2003 at 1:31 AM

blogging APIs - part 2

Or is this part 3, or four, or five?

Anyway, over the past few days there have been some more comments and discussion on the state of weblog software APIs in my earlier review (comments/discussion at the bottom). The most significant comment outside of the post was a recent one from Evan in which he clarifies Blogger's side, explains the reasons for the differences between the APIs, and lays out his personal position on the issue. Evan's post is excellent, and a must-read.

And here go my comments...

Before commenting about the actual issue there's one thing I'd like to clarify: Evan said:

Mr. Doval's research was also poor regarding Blogger API 2.0. His assessment of it seemed to be based entirely on Ben Trott's original analysis and suggestions, and he ignored Ben's update, where he acknowledged that most of his concerns were addressed. Diego's main gripe about 2.0—"In particular, the new Blogger API, while being incompatible with 1.0, retains the method call names..."—is not true as of about five months ago.
Two things regarding this: first, I did look for the API but couldn't even find it. The link I could find in the Blogger discussion group was broken (this was confirmed by Martin in one of the first comments), and I noted this in the post. I did read more comments on it, but found Ben's the most complete (most of the comments were of the type "blogger released a new API" and little else). Second, however, and more important, is the following comment I made in the review:
The Blogger 2.0 API is also out of the race for the simple reason that access to it is for developers only; it's still in flux, not deployed, and not widely supported.
I think this is important; regardless of how good something is, if it's not available then it's not fair to compare it with things that are, and this was also a reason why I wasn't more worried about digging into the 2.0 API in a lot more detail. After something is released, however, it's another matter entirely.

But the flip side is that if something is not yet available, then it's a good time to comment on it. While that probably doesn't have a place in a review of "available APIs" aside from the comments I made, it's probably more important to consider because Blogger is still looking for input. So not digging deeper for Blogger 2.0 info was a mistake in that sense, and I want to clarify my opinion comparing Blogger 2.0 with MetaWeblog, with the important caveat that Blogger 2.0 is not yet available, though obviously it soon will be.

Anyway.

Okay, back to the specifics of Evan's post.

First point. Evan said:

Because MetaWeblog treats a post as a struct instead of a string, Diego's opinion was, "The Blogger API has an incredibly simple-minded view of what a weblog post is, and it is completely, and I mean completely inappropriate for what people do with weblogs today, from posting images to using audio and so on."

This assessment is interesting, given that, when it comes to posting, you can do everything through the Blogger API that you can do with the free version of Blogger, which is what a heck of a lot of people use to blog every day. Also, it is unclear to me what in MetaWeblog makes it superior for posting images and audio. (In fact, the Blogger API is what AudBlog uses to post audio to blogs.)

Evan, I can only comment on what I see in the spec and what I can test against. If there's an application out there that supports audio posting (i.e., a "media object") through the API, I was clearly flat-out wrong. But (there's always a but) there's nothing in the spec to indicate how to do this. In this sense, the MetaWeblog spec talks about the struct (although I must say the specifics of how to use it are not well exposed) and about the metaweblog.newMediaObject call (also supported by MovableType). This makes the syntax/semantics of the call clear and avoids confusion. If I want to post a picture, and Blogger allows me to do it in some way, I'd certainly like to know about it, but the spec doesn't mention it.
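For the record, here's roughly what such a call looks like on the wire. This is just a sketch in Python using the standard library's xmlrpc.client module; the blog id, credentials, and file content are placeholders, and the actual HTTP POST to the endpoint is omitted:

```python
import xmlrpc.client

def new_media_object_request(blogid, username, password, name, mime_type, data):
    """Serialize a metaWeblog.newMediaObject call to its XML-RPC request body.
    The media struct carries the filename, the MIME type, and the raw bytes,
    which travel base64-encoded inside a <base64> element."""
    media = {
        "name": name,
        "type": mime_type,
        "bits": xmlrpc.client.Binary(data),  # encoded as base64 on the wire
    }
    return xmlrpc.client.dumps(
        (blogid, username, password, media),
        methodname="metaWeblog.newMediaObject",
    )

# Placeholder values for illustration only.
payload = new_media_object_request(
    "1", "user", "secret", "photo.jpg", "image/jpeg", b"\xff\xd8\xff"
)
```

The response, per the spec, is a struct containing the URL where the file was stored, which the tool can then reference from the post body.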

Evan also said:

[...] it's ridiculous to say there's nothing in Blogger API 2.0 that MetaWeblog can't do. What Diego actually said, by the way, was: "To me, it looks as if Metaweblog and Blogger 2.0 are almost the same, except for naming conventions, which essentially prevents interoperability (of course, I might be missing something here)."

A closer look would make it clear that there are many more differences in the API than naming conventions. Just looking at the three methods MWAPI has, there are significant differences.

For one thing, putting the login in a struct allows for more flexible authentication mechanisms and potentially better security. It also gives a place to put the appKey, which, as I've said, is important for us (and probably any other hosted service).

The actions struct allows for more flexibility in what the blogging system does with a post, beyond publishing it or not—for example, sending an email or pinging another server. (These aren't all reflected in the current documentation. It's still in the works. But the struct provides the framework for it.)

I agree with this. Regarding the first point, I mentioned the example of the login struct in my earlier update as being clearly a good thing. Structs are good: they make the API forward-extensible without affecting the main syntax. As for Evan's second comment on "actions": it sounds really interesting, but again, if I look at the documentation that's available today regarding the actions, it says:
doPublish -> Boolean
makeDraft -> Boolean
syndicate -> Boolean
allowComments -> Boolean (not implemented yet)
I don't get the sense that the actions struct is intended to be used for much more. A good clarification of how it is meant to be used would help enormously; just conveying the idea behind 'actions' would be enough for people to understand that the possibilities are far greater than they seem (and I mean a comment beyond what it says in "future of the API", which states only: "New filters and actions for more powerful interactions between external clients and Blogger.").
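To make the struct argument concrete, here's a sketch (Python, stdlib xmlrpc.client) of what struct-shaped parameters serialize to. The actions fields are the ones from the docs quoted above; the method name blogger2.newPost and the exact struct layouts are my own guesses for illustration, not the released spec:

```python
import xmlrpc.client

# Struct-shaped parameters in the style of the draft Blogger 2.0 API.
# NOTE: the method name "blogger2.newPost" and these exact struct layouts
# are assumptions for illustration; only the actions field names come from
# the draft docs.
login = {"username": "user", "password": "secret", "appKey": "my-app-key"}
post = {"body": "Hello, world"}
actions = {"doPublish": True, "makeDraft": False, "syndicate": True}

payload = xmlrpc.client.dumps((login, post, actions), methodname="blogger2.newPost")
```

The point is that a new member (say, a security token in the login struct, or a new action) can be added later without changing the method signature at all.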

So, how do MetaWeblog and Blogger 2.0 compare? They are very, very similar. Blogger 2.0 is better in several senses: more extensible (because it uses structs for everything), with more options (filtering--which is, by the way, really cool--and actions). Regarding the posting of media objects, Evan's comment on AudBlog implies that there is indeed a way of posting media (maybe setting it as the body?); if that's the case, then both APIs support the same functionality. I would argue, however, that MetaWeblog's newMediaObject call is clearer and allows a tool to post media and then include references to that media's URL within the text, something I think is really important.

So if the names of the calls were the same, Blogger supported newMediaObject calls, and MetaWeblog supported structs for parameters, I'd be happy. :-)

Finally: Evan's post makes it clear (if there were any doubts) that they do want to create a standard if possible while moving forward. He mentions standards bodies (much as I was doing a couple of days ago), which makes me hopeful. His discussion of the history of how the APIs evolved is also really interesting and further clarifies how things ended up where they are today.

What gives me even more hope that a solution could be found is the latest comment by Dave on the review:

My wish -- a meeting with Evan, the Trotts and myself, keep it simple. We're not really that far apart. We could get unified, I think. I have keynotes at OSCOM and the Jupiter weblogs conferences coming up in the next three weeks, perfect times to make announcements. I know I must be dreaming.
That would be great. They could make it happen!

This would be good for the weblog-tool makers, not just for us developers that use the APIs. Sure, I want to use these APIs to make life easier for users and to explore the full potential of weblogs; but I will support any number of APIs I have to--within my ability to do it (as clevercactus does today, supporting Blogger, Radio and MovableType). If they are compatible, all the better for me. But a more important point, as I've said earlier, is that standardization also forces new entrants (such as Microsoft or AOL) into the field, and reduces the likelihood that they will get away with the usual "embrace, extend and extinguish" routine, which in the end creates problems for all of us.

Categories: technology
Posted by diego on May 17, 2003 at 11:43 PM

the lawsuit-fest at SCO

SCO has apparently reached the conclusion that spending money on lawyers is better than spending it on development. They started with IBM, now they're going after people that use Linux. Yuck.

Categories: technology
Posted by diego on May 16, 2003 at 9:17 PM

I'm a machead

apple.jpgI just got an old G3 PowerMac running (from the time when PowerMacs came in purple, like the original iMacs), with OS X, to do some clevercactus testing and debugging and, more importantly, to improve its L&F in the Mac environment. I've got a flat panel with both digital and analog inputs; the digital input is connected to the PC and the analog to the Mac. A press of a button changes the selected input source.

Within a few minutes of playing around with the Mac, I was just amazed at how much I didn't want to go back to Windows. When I did switch back, Windows just felt ... clunky.

I had a Mac a few years ago (a PowerBook G3) and it was nice, but it had OS 9. OS X is just amazing. The UI, both in L&F and behavior, is incredible. Maybe this is an effect that will wear off, but I hope not. :) I could live all day with a UNIX like this. Too bad the machine is not powerful enough to do development on, but I'll certainly give it a try when I've got more time. In the meantime, I am just enjoying the experience.

OS X rocks.

Categories: clevercactus, technology
Posted by diego on May 14, 2003 at 5:54 PM

"superconductive relationships"

Ray Ozzie on social software and its consequences. Whether you agree with the buzzword or not, the trend is real, and it's here to stay.
Categories: technology
Posted by diego on May 14, 2003 at 1:42 AM

open standards -- or just open?

Note: this argument has its fair bit of circular logic built in, since I am "just talking" about how we find it hard to move quickly beyond the point of "just talking". Also, it's easy for me to say "oh, people should do this and that." Pontification without responsibility for implementation is easy, and that's pretty much what I'm doing. The least I can do is be aware of that, and make clear that if anyone thinks this is worthwhile I'm willing to put my money where my mouth is and help in any way I can.

In the last couple of weeks three discussions have emerged in blogland that go to the core of the future shape of the newly emerging widespread publish/subscribe, decentralized nature of the web. It seems to me that this is, in part, a consequence of the rapid changes happening in the weblog space (which most readily identifies this new "evolution" of the web, though it's not the only example): Google buying Blogger, Microsoft and AOL rumored to be about to jump in, and so on. We knew that the current patchwork of loosely compatible implementations for things we use every day (RSS, weblog APIs, etc.) is not going to be enough to maintain an open environment once the new entrants come in, but now consensus is actually becoming critical. An agreement has to be reached on these things, and fast. Cases in point: the discussion that followed my review of weblog APIs, the recent calls from Dave to create a web-wide "RSS Profile" (with great comments by Ben from SixApart, Sam, and Don), and the latest: another discussion over REST/XML-RPC and SOAP (original posting from Dave here and here) and which of them should or should not be used.

The last item, REST v. XML-RPC/SOAP, is less of an issue, I think. I am not saying it's not important; I'm saying it's a matter of transport rather than high-level formats or APIs. APIs should be transport-agnostic, IMO. However, I think this is a symptom of a larger problem we have: continued discussion with slow resolution of the underlying issues.

So how do we get past recognizing the problem and start coming up with a solution quickly? Everyone agrees, for the most part, that some solution is desirable in all of these cases. This could be done in different ways (for example, Mark Nottingham created a wiki to collect information on the RSS Profile discussion), but mostly it requires, to me, that those creating the tools and/or the originators of the standards "sit down" for a little bit and talk it over. In private. With public input, but in private. Weblogs are great for discussion, but because they're personal it's more difficult to reach consensus, IMO, which requires everyone giving in a little; being as personal as they are, weblogs make it difficult to give in (it's my weblog after all!).

I've been thinking that what's needed is a sort of "standards group" along the lines of ISO or whatever, but ad hoc, based on work done and the ability to make change happen once some agreement has been reached. The big corporations settled on standards bodies as neutral ground to reach agreements of sorts (not that they always work, but they're better than the alternative). We need something similar.

Dave was saying that this is an inflection point, and I think he's right. The problem is how we move forward in a unified way so that when Microsoft, AOL/Time Warner and others come into the space we can present a unified front that maintains compatibility. They should be welcome as fair competitors, not feared as monopolists, and having stable, widely used standards will help.

Anyway, just an idea. :-)

Categories: technology
Posted by diego on May 14, 2003 at 1:30 AM

those good old clunky keyboards

Scott Rosenberg recently posted an entry about the old IBM PC keyboards:
If you used an IBM PC in the 1980s -- if you used one a lot -- you came to know, and perhaps love, the feel of the old IBM keyboards. They were solid. The keys moved. They clicked. Over time, as every aspect of PC manufacturing faced the grim reaper of cost-cutting, keyboards became flimsy and disposable pieces of plastic. The touch and feel of the old IBMs became a lost artifact of the early PC era.

So I was thrilled to read (on MSNBC, via Gizmodo) that somebody is still making a contemporary equivalent of those old keyboards. They cost about $50, or five to ten times the price of today's junky keyboards, but boy, I think it's probably worth it.

I must admit I am a complete sucker for old-style "solid" IBM keyboards. IBM in general builds (IMO) some of the best keyboards in the industry, particularly for laptops: the ThinkPad line is just amazing, for example. I still own a modified "original-style" IBM keyboard with a TrackPoint in the center; sometimes I use it, although my working "everyday" keyboard is a Microsoft Natural Keyboard Pro, with a Microsoft Explorer optical mouse (before that I was a big fan of the Microsoft "teardrop" mouse). MS in general makes excellent input devices. Back to the IBM keyboards, though: IBM is still manufacturing keyboards and some of them are quite good, although they don't have the same "feel" as the older versions. Sometimes I wish we had a consistent way of maintaining and keeping old technology in a working state, just for historical purposes. By losing these things we are losing part of the history of how we got to where we are. A weird side effect of the digital age.
Categories: technology
Posted by diego on May 13, 2003 at 6:10 PM

the technorati API

[via Erik]: David Sifry has released the Technorati API (using REST). Very cool. Just from a research point of view the Technorati database could be a great source of information for studying network effects, structure, and so on.

Categories: technology
Posted by diego on May 12, 2003 at 11:58 AM

why the new install system for JDK 1.4.2 is bad

I've been using the JDK 1.4.2 beta for several weeks now. On one hand, I like the new Windows XP look and feel, which, some glitches aside, seems to work relatively well.

But there is one piece of the JDK package that gives me shivers. I'm talking about the new installation system that comes built in with the Java Plugin.

The new system is eerily similar to Microsoft's Windows Update: Java installs a daemon program that not only runs constantly but also installs a quick-access icon, ostensibly for "status". If you click on it you get a horribly designed screen with a million options, and, surprise surprise, the Java Plugin is set to update itself automatically by default. This is awful. (Here is an old Slashdot thread on these changes, which apparently have been brewing for a while.)

I've argued before about the need for a consistent, simple way to manage Java installations on the client, and I can only hope this is the work of some misguided intern and that it will be quickly squashed. Not only are automatic updates bad: the configuration screens are not appropriate for end users, and it's a Java application running on the Metal L&F, which is a ridiculous default for a control-panel console on Windows. When I say that automatic updates are bad, I'm not even talking about the possible privacy/security concerns: I'm just talking about the problems that automatic updates will create for developers and their deployments. I don't want to imagine what will happen when users can suddenly update automatically to the latest JRE, sometimes without even knowing that they have. Applications will stop running, bugs that you thought you were avoiding will crop up again, new "platform" bugs will appear, and so on, and you won't have a clue why. It will put much more strain on developers to test against everything that's out there, instead of against a well-known target.

What we need is a good way to deploy a JRE with our application invisibly, alternatively using one of the choices from the user's system. Nothing more. Nothing less.

Okay, rant finished. While on the subject, here's an interesting article over at Sun's JDC on the new features of JDK 1.5 (which I talked about a while ago).

Categories: technology
Posted by diego on May 11, 2003 at 2:54 PM

and the winner for 'idiotic experiment of the year' goes to...

... these fellows who tried to "prove" that monkeys typing at a computer would not produce coherent output, going after the oft-mentioned claim that "given an infinite number of monkeys typing on an infinite number of typewriters, they will eventually write Shakespeare." So, what did they do? They gave computers to six monkeys, for a month. But the monkeys didn't write anything. Well, colour me dazed, what a surprise. I mean, "six monkeys" is just the same as an "infinite number of monkeys", and "one month" is just like "eventually", right? This reminds me of another "experiment" performed a while ago, when researchers studied the claim that you get wetter running in the rain than walking.

Some people just have too much time, and money, on their hands.

To their credit (well.. partly) one of the organizers of this "experiment" said that

[...] the project, funded by England's Arts Council rather than by scientific bodies, was intended more as performance art than scientific experiment.
What riles me up is that they don't say "this was NOT an experiment," but rather imply that it was "just partially" one. In fact, the last quote is ridiculously stupid:
Phillips said the experiment showed that monkeys "are not random generators. They're more complex than that.

"They were quite interested in the screen, and they saw that when they typed a letter, something happened. There was a level of intention there."

He says that as if it were actually implied that monkeys were "random generators", or as if anyone seriously ever thought that the gedankenexperiment (as Einstein liked to call them) mandated the use of real monkeys rather than just an ideal of randomness and lack of conscious direction. I mean, the mention of an infinite number of monkeys should be enough to tip off any rational person that you're not discussing the "literary capabilities of monkeys" per se, but rather something deeper...

Anyway, I'm probably taking this too seriously. Maybe it was an article written for The Onion and by mistake it ended up in Wired.

Categories: technology
Posted by diego on May 10, 2003 at 9:46 PM

google to separate blogs from the web -- not

Evan posted a comment to my previous entry, google to separate blogs from the web, squashing the article from The Guardian with a simple but effective: "It's B.S.". He also has a great entry on his weblog titled "Register to fix Orlowski noise problem". LOL. He notes the claim in the article (which also sounded strange to me) that the number of readers of weblogs is "statistically insignificant". I could only make sense of that if they consider the whole population of the Internet as the basis for the measure; so, several million against half a billion or something... Anyway, Evan's statements are quite definitive (e.g., "as far as I know, Orlowski is full of crap"), here and on his blog, so I don't think there's much room for doubt. Hopefully the rumour-squashing will move as fast as the rumour itself!

Also in the comments to my entry, Don pointed out that one could use Bayesian filters and some simple rules to figure out "suspected blogs" (Ashcroft would like that phrase, I think), which is a good point. However, a filter like that is never going to be 100% accurate, and even if the error is less than 1%, when you're indexing 3 billion pages that's too much. Google wouldn't be able to cope with the requests from people that had been incorrectly marked, while blogs that hadn't been marked (and should have been) would ride it out, something that would probably make others in the blogging community a little "unhappy". In the end it would just start an "arms race" of sorts, with bloggers looking for ways to fool the rules... totally unproductive, and neither side would really win. Bloggers would stop using Google. Google would get pissed-off customers. Not a good deal for either.

Regardless: given Evan's comments, I think we can put the question to rest. At least until the next rumour :-)

Later (about 6 hours later :-)): Evan's entry in which he squashed the google-not-for-weblogs rumour has disappeared! In its place it says "</snip>". The conspiracy-theory wheels begin to turn... but wait, not so fast. If you look at the page source, you get the deal. The entry is still there, just commented out, with the following note: "you know, in order to spread more 'Google censors Evhead' suspicions". Heh.

Categories: technology
Posted by diego on May 9, 2003 at 9:45 PM

BROADBAND!!

You know, I had a feeling it would happen like this. Today I was thinking: "I'll probably get the DSL up when I don't really need it that much anymore." By which I meant: once I got the clevercactus weblog sync working, when one of the main reasons to be online for long stretches disappeared, that's when I'd get DSL. (I might write later about my experience with the DSL sales dept if I have time.)

So I get the config working, Blogger, Radio, MT, the whole deal, and what do you know, I get a call while I was on a break preparing dinner. From the phone exchange (yeah, the technician actually called me from there; THAT's how much I had been talking to Eircom. I had ordered DSL on April 11). Your DSL is connected, he says. I look at the modem, and the LED that had been dead for so many days is blinking a healthy green.

I couldn't believe it.

I hung up (after some pleasantries like "you don't know if you're using standard LLC on the line, do you?" No, he said. Okay, I said) and sat down to make it work. Screw dinner. The modem settings were all wrong, since at the beginning I had thought that the problem was there. I needed new settings. I called tech support (0.74 Euro/min, 6-min call). After the usual routine, in which you are treated as little more than a monkey pounding on the keyboard (being asked questions like "Is the light on? Is the ethernet cable connected?" while I'm like, "yeah, yeah, I'm telnetting into the modem right now, looking at the status!"), I got him to tell me the config they require for their setup (but he gave it to me only after the stern warning "You know, after you change the configuration yourself we can't support you". I wonder, if you never touch the configuration, HOW they give support). Anyway. He gave it to me. Fired it up.

Run!

And so, now I'm connected at 512 Kbps downstream, 128 Kbps upstream. 52 KB/sec downloads!!

It feels strange. I last had DSL almost two years ago, when I was living in California. What am I supposed to do with all this bandwidth now? Luckily, Eircom has already planned for that: transfer is limited to 4GB per month, so it's not really, really full-access, always-on broadband. But it's definitely a step up from the super-expensive 56 Kbps cage I was in before.

Anyway, this will really help for all the stuff I have to do for next week's release of clevercactus.

And I'll just have to learn all over again to stop myself from selecting "disconnect" every time I finish doing something online... :-)

Categories: technology
Posted by diego on May 9, 2003 at 8:59 PM

google to separate blogs from the web

The Guardian: Google to fix blog noise problem. How could they determine what's a blog and what's not? Looking for an RSS feed is not enough--lots of sites have RSS feeds these days. Looking at how often it's updated is not enough either. They could (if they wanted, probably followed by massive protests from the blogging community) put everything from www.blogger.com, radio.weblogs.com and livejournal.com into a different index, but that would leave out a large part of the weblogs out there. Hm.

Categories: technology
Posted by diego on May 9, 2003 at 2:42 PM

more metaweblog methods proposed

[via Dave]: Rogers Cadenhead proposes several changes to the MetaWeblog API. But...

I have read the document at least five times now, and I still don't understand it. For example, Rogers's proposal includes methods that already exist, such as metaWeblog.newMediaObject or metaWeblog.getCategories. Why propose something that already exists? The metaWeblog.deregisterUser method sounds strange: bloggers, especially new bloggers, who would be the main target of any tool that uses the API, will typically have one weblog. So what would be the use scenario for this function? If there's no use scenario, then why include it? The same goes for the "deleteMultipleFiles" method. How would you delete files if there's no way of listing them? Maybe you'd assume that you can only delete what you have created from the client, but what if you are using the same tool from two different machines? And what's the use case for this? Strange.

As for changes to the MetaWeblog API, I've been thinking about what I'd like to see, and a method like MovableType's mt.setPostCategories would be nice. The spec for the method can be found in MT's Programmatic Interface documentation. If MetaWeblog included the method name as-is, it would be great, similar to what MovableType does in implementing MetaWeblog. Otherwise it could just be called metaWeblog.setPostCategories and maintain the rest of the interface, so that only the method name differs between calls to the two systems.
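To show how little would actually change, here's a sketch of the call as it would go over the wire (Python, stdlib xmlrpc.client; the post id, credentials, and category ids are placeholders, and MT's documentation is the authoritative source for the signature):

```python
import xmlrpc.client

# mt.setPostCategories takes a post id, credentials, and an array of
# category structs; isPrimary marks the post's primary category.
# Values here are placeholders for illustration.
categories = [
    {"categoryId": "3", "isPrimary": True},
    {"categoryId": "7", "isPrimary": False},
]
payload = xmlrpc.client.dumps(
    ("42", "user", "secret", categories),
    methodname="mt.setPostCategories",
)
```

A renamed metaWeblog.setPostCategories would serialize identically except for the methodName element, which is exactly why the naming differences are so frustrating.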

Categories: technology
Posted by diego on May 9, 2003 at 10:37 AM

the future of handspring: "communicators"

Mobitopia

Here's an interesting interview with Jeff Hawkins, creator of the PalmPilot, co-founder of Palm, and then Handspring. He explains the difficult transition Handspring is going through in moving from "simple" handheld organizers to "communicators" (i.e., devices that integrate phones and organizers) and why he thinks there's no other option.

While sometimes the idea of "synergy" is overrated (tending to create bloatware), in the case of something that integrates the functions of an organizer and a mobile phone it's a no-brainer. Handspring is right to make the move. If they don't, phones like the Nokia 3650 (talked about at length in Mobitopia) and the SonyEricsson P800 will run over them like a freight train. A good read.

Categories: technology
Posted by diego on May 8, 2003 at 12:20 AM

metaweblog gets updated

Wow, that was fast! Dave commented again yesterday on my review of weblog APIs here and here (echoed by Karlin here). Thanks! And earlier today he "formally" released on Scripting News an update to the MetaWeblog API, with the methods it had been depending on from Blogger, and he's now asking for feedback. This is good, since now all the APIs will be complete on their own. I was wondering yesterday if, maybe (just maybe), there hadn't been a lot of movement towards a unified API because the Blogger API and the MetaWeblog API worked in tandem in most blogging tools. So (again, maybe) now that all the APIs will be complete (even the MovableType API has a set of methods that stand on their own), people can "sit down" and say, "okay, this is our functionality, the methods differ almost (note the almost) only by name, now let's choose names that we can all live with!" So here's hoping. :)

Later: Update on the new Blogger API. I wanted to update my comments on 2.0 now that Jason has posted (in the comments to my review) a link to the new API. First, Ben's concern from last December that the method names were the same as in 1.0 has been addressed; the methods now begin with "blogger2". The "login structure" looks interesting, particularly the "token" parameter that will be used "with the blogger2.secure.* module". I would imagine it's a one-way MD5 hash of some sort, either generated on the client or returned on first login, something akin to the APOP login method for POP3. Security is important, so that would be good. The idea of putting the login info in a struct makes sense, as it makes the API easier to extend while maintaining backward compatibility. The structure for posts now allows setting categories (which can be managed from the API as well) and titles. Still no API for comments, though, which would be good to have, nor one for uploading binary/media objects, which is important IMO. Once again: the more I read it, the more I wish everyone would agree on naming and on how to pass what are essentially the same parameters. The APIs are going to be really close once Blogger 2.0 is released.
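For reference, the APOP scheme (RFC 1939) that I'm comparing it to works like this: the server sends a unique timestamp banner, and the client responds with the MD5 of that banner concatenated with the shared secret, so the secret itself never crosses the wire. A minimal sketch (whether Blogger 2.0's token actually works this way is pure speculation on my part):

```python
import hashlib

def apop_digest(timestamp_banner: str, secret: str) -> str:
    """APOP-style one-way token (RFC 1939): the MD5 hex digest of the
    server's unique timestamp banner concatenated with the shared secret."""
    return hashlib.md5((timestamp_banner + secret).encode("utf-8")).hexdigest()

# Banner and secret below are the illustrative values from RFC 1939.
token = apop_digest("<1896.697170952@dbc.mtview.ca.us>", "tanstaaf")
```

A scheme like this avoids sending the password in the clear on every call, which is what the plain Blogger/MetaWeblog logins do today.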

Categories: technology
Posted by diego on May 7, 2003 at 3:21 PM

computers and phones... and microsoft

Back to my entry on monday about computers and phones, Bill Gates announced yesterday at the Windows Hardware Engineering Conference that:

In future versions of Windows, phone calls will be routed through the PC, while voice messages will be turned into e-mails that can be read. E-mail also will serve as an application for delivering voice messages
It's important to note that this is just an announcement, though... and not even of a formal product, but of something that will apply to "future versions of Windows". Even when Microsoft formally announces something, it typically still takes one or two years to ship. If they do deliver, though, Murph will be happy, I think. :)

Categories: technology
Posted by diego on May 7, 2003 at 3:10 PM

the identity wars...

...are coming. Jeremy has a short entry that explains why, with more links. I'm quite fascinated by the idea of pervasive digital identities and where it will lead us. More on this later (at some point... I've got too much on my hands right now).

Categories: technology
Posted by diego on May 6, 2003 at 9:11 PM

and speaking of blogging APIs

On the topic... Timothy pointed me to Sam Ruby's excellent "The Evolution of Weblog APIs" which contains a lot of useful information both in understanding how the APIs interrelate, and how they came to be.

Categories: technology
Posted by diego on May 6, 2003 at 4:21 PM

blogging APIs: a mini how-to

There were a lot of good comments on my previous review of blogging APIs. I was working today on this howto and I think it will make a nice follow-up. Writing this is as much a learning experience for me as it is sharing what I've learned, so I'm sure I will add to it over time. And as usual, comments and corrections are welcome.

One thing that makes weblog APIs hard to "grasp" is that, while the specifications are complete, there are few examples, and it's not clear outright how to make a call for the various things one might do with the API (Although the specs usually do provide examples). Sam Ruby suggested using a "test case" and comparing how the request would be made for it using the different APIs. I was going to give different examples but Sam's idea is better, so I've reworked the doc. And here it is! (posting split in two to keep the main body of it off the main page)

the tools: a short intro to XML-RPC

The most basic tool necessary to use a blogging API is an HTTP library. Blogging APIs use XML-RPC, which runs on top of HTTP, sending a "call" in a well-defined XML format and receiving the response in the same format. (The XML-RPC spec is here). In XML-RPC, the I/O of the function call is done through the I/O of the HTTP call, with the XML passed in the body of the page request, and the response in the "page" that the URL returns. Because of this, XML-RPC calls tend to be stateless, but they don't need to be.

If, for example, you are using Java, you might be tempted to use the XML-RPC package from Apache. But as far as I can see (although of course I might be wrong, having overlooked something) the Apache XML-RPC package for Java doesn't allow the use of XML-RPC structs, just "one-dimensional" lists of parameters. The point is that whatever XML-RPC package you choose, you'll end up having to debug by looking at the request you're making and the response you're receiving. If you don't have an XML parser/generator available (there are many) you will end up having to look at raw XML. Which isn't so terrible really. :) For simplicity, I will look at how calls are made (and responses received) using the full XML request/response. Having an XML or even XML-RPC tool available "only" means that you won't have to debug XML parse/generate code.

So keeping on with this, the basic parameters that the HTTP call needs are:

  • Content-Length: the length of the XML-RPC request in bytes.
  • Content-Type: text/xml
And that's it! After that the body of the request should contain the XML for the request (so the connection has to be set up for input as well as output).
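To make this concrete, here is a sketch in Python (I'll use Python for these sketches since its standard library includes an XML-RPC module; credentials are placeholders) of building the request body and the two headers above:

```python
import xmlrpc.client

# Build the XML body for a call; dumps() generates the <methodCall> XML.
# The credentials here are placeholders.
body = xmlrpc.client.dumps(
    ("bloggerAppKey", "yourUserName", "yourPassword"),
    methodname="blogger.getUsersBlogs",
)

# The two HTTP headers described above:
headers = {
    "Content-Type": "text/xml",
    "Content-Length": str(len(body.encode("utf-8"))),
}
print(headers)
```

An HTTP POST with that body and those headers to the tool's XML-RPC endpoint is the whole protocol.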

the test case

As a test case, let's consider the following: you want to create a post, assigning it to a category.

The post will have the following content:

Title: sunny days
Text: <a href="sunny.jpg">I don't like sunny days. Do you?</a>
Picture: sunny.jpg (A picture that is included in the post)
Category: personal

This of course assumes that there is a category "personal" on your weblog. :)

preparations

Before we begin, the list of blogging APIs can be found in the review, but here they are for completeness:

  • The Blogger API 1.0
  • The MetaWeblog API
  • The MovableType API extensions

My rationale for going with both Blogger and MetaWeblog only is explained in the review, and I've added the MovableType extensions since they come very close to using the MetaWeblog API for the same effect.

Posting to weblog software is usually easy: the major tools allow posting to your weblog without special modifications. Blogger, however, does require that you obtain an "application ID" before posting (it's free), which then has to be included in the request. As far as I know, other applications such as MovableType or Radio that implement the Blogger API simply ignore this value, so for testing with those you won't need it.

MovableType requires certain modules for remote posting to be activated. Since those modules are optional, you need to make sure they're installed before proceeding. The MovableType installation documentation has information on how to add those modules. It's about 5 minutes work.

making the call

So, once everything is set up, how is the call made to, first, create the post? I will show how the call is made for the Blogger API, the MetaWeblog API, and the MovableType API. Although the MT API is only implemented by MovableType, there's no other way to set a post's category remotely, so we'll have to use those extensions. I will give examples for the three best-known weblogging tools: Blogger, Radio and MovableType. Blogger implements the Blogger API only; Radio and MovableType implement both the Blogger API and the MetaWeblog API, and MT adds its own extensions as well.

The APIs are stateless; that is, they require validation on every request. Making them stateless is nice, but I think it would be better if the password was at least encrypted over the wire--for example, as a one-way MD5 hash. But that's a subject for a different post. ;)
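Just to illustrate the idea (a sketch only--none of the current APIs actually accept this):

```python
import hashlib

# Sketch only: the client would send a one-way MD5 digest instead of the
# cleartext password. APOP-style schemes also mix in a per-connection
# server challenge so the digest can't simply be replayed.
password = "yourPassword"
digest = hashlib.md5(password.encode("utf-8")).hexdigest()
print(digest)  # a 32-character hex string
```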

General note: the parameter "yourBlogID" is the weblog ID, and it is internal to the system. To obtain the blogID, you need to make a "blogger.getUsersBlogs" call and choose it from there (both the name for the blog and the id are returned). Since all tools implement the Blogger API, this call can be used across systems.
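For illustration, here's how a "blogger.getUsersBlogs" response can be decoded in Python--the response below is hand-written sample data, since the real one depends on your account:

```python
import xmlrpc.client

# Hand-written sample of a blogger.getUsersBlogs response: an array of
# structs, each with (at least) a blogid and a blogName.
sample_response = """<?xml version="1.0"?>
<methodResponse><params><param><value><array><data>
 <value><struct>
  <member><name>blogid</name><value>1234</value></member>
  <member><name>blogName</name><value>d2r</value></member>
 </struct></value>
</data></array></value></param></params></methodResponse>"""

# loads() returns (params, methodname); the single param is the array.
(blogs,), _ = xmlrpc.client.loads(sample_response)
print(blogs[0]["blogid"], blogs[0]["blogName"])
```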

Regarding the requests: I had to do some heavy editing to make it appear properly in HTML, please let me know if there's a typo somewhere.

Posting to Blogger

The Blogger API has no concept of categories or titles, so the end result will be less complete than for Radio or MovableType. Additionally, Blogger doesn't as yet allow uploading of images, so we'll have to strip the image out of the content. Here is the request:

<?xml version="1.0" ?>
<methodCall>
 <methodName>blogger.newPost</methodName>
  <params>
   <param><value>bloggerAppKey</value></param>
   <param><value>yourBlogID</value></param>
   <param><value>yourUserName</value></param>
   <param><value>yourPassword</value></param>
   <param><value>sunny days &lt;BR&gt;&lt;BR&gt; I don't like sunny days. Do you?</value></param>
   <param><value>true</value></param>
 </params>
</methodCall>


Note: the "<BR><BR>" text is added to give the impression that "sunny days" is the title of the post. The final parameter, "true", indicates that we want this posting to be published right away. Also, note that the HTML tags in the main body ("<BR>") have to be converted to "&lt;BR&gt;" to avoid confusing the XML parser.
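If you use an XML-RPC library rather than generating the XML by hand, the escaping (and the encoding of the final boolean parameter) is handled for you. A Python sketch, with placeholder credentials:

```python
import xmlrpc.client

content = "sunny days <BR><BR> I don't like sunny days. Do you?"

# dumps() escapes the embedded <BR> tags to &lt;BR&gt; automatically, and
# serializes the final parameter as a proper XML-RPC <boolean>.
body = xmlrpc.client.dumps(
    ("bloggerAppKey", "yourBlogID", "yourUserName", "yourPassword",
     content, True),
    methodname="blogger.newPost",
)
print("&lt;BR&gt;" in body)  # True
```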

Posting to Radio and MovableType

Most of the following code applies to both Radio and MovableType. There's a final step for dealing with categories in MovableType. I'll go over that after this main example.

Since the posting contains an image, we need to upload the image first and then use the URL returned by the blogging software in the posting--the reference "sunny.jpg" will work locally, but not remotely.

So, to upload the image we need to use the following call

<?xml version="1.0" ?>
<methodCall>
 <methodName>metaWeblog.newMediaObject</methodName>
  <params>
   <param><value>yourBlogID</value></param>
   <param><value>yourUserName</value></param>
   <param><value>yourPassword</value></param>
   <param>
   <value>
    <struct>
    <member>
     <name>name</name>
     <value>sunny.jpg</value>
    </member>
    <member>
     <name>bits</name>
     <value>[the contents of the file in Base 64 encoding]</value>
    </member>
    <member>
     <name>type</name>
     <value>image/jpeg</value>
    </member>
    </struct>
   </value>
  </param>
 </params>
</methodCall>

This call returns the URL through which the image can now be accessed. Note that the "type" value is currently (as of version 2.63) ignored by MovableType. The type is the standard MIME type of the file being uploaded; for binary files (such as a word processor document) it would be typical to use the "application/octet-stream" type.
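The Base64 encoding of the "bits" member is also something a library will handle for you; in Python, for example, wrapping the file contents in xmlrpc.client.Binary produces the base64 element (the file contents below are fake, and the credentials are placeholders):

```python
import xmlrpc.client

# Fake bytes; in a real call this would be open("sunny.jpg", "rb").read().
image_bytes = b"\xff\xd8\xff\xe0 not a real JPEG"

media = {
    "name": "sunny.jpg",
    "bits": xmlrpc.client.Binary(image_bytes),  # serialized as <base64>
    "type": "image/jpeg",
}
body = xmlrpc.client.dumps(
    ("yourBlogID", "yourUserName", "yourPassword", media),
    methodname="metaWeblog.newMediaObject",
)
print("<base64>" in body)  # True
```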

Now that the image is uploaded, we can create the post.

<?xml version="1.0" ?>
<methodCall>
 <methodName>metaWeblog.newPost</methodName>
  <params>
   <param><value>yourBlogID</value></param>
   <param><value>yourUserName</value></param>
   <param><value>yourPassword</value></param>
   <param>
    <value>
    <struct>
     <member>
      <name>title</name>
      <value>sunny days</value>
     </member>
     <member>
      <name>description</name>
       <value>&lt;a href="[url obtained through call to metaWeblog.newMediaObject]"&gt;I don't like sunny days. Do you?&lt;/a&gt;</value>
     </member>
     <member>
      <name>categories</name>
      <value><array><data>
       <value>personal</value>
      </data></array></value>
     </member>
    </struct>
    </value>
   </param>
   <param><value>true</value></param>
 </params>
</methodCall>

This call returns the "postID" of the post, which can later be used to edit it, as we'll see now.
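The struct above maps naturally to a dictionary in most XML-RPC libraries. The same request sketched in Python (the image URL is a placeholder for whatever metaWeblog.newMediaObject returned, and the credentials are placeholders too):

```python
import xmlrpc.client

# The post struct: title, description and categories, as in the XML above.
post = {
    "title": "sunny days",
    "description": ('<a href="http://example.com/sunny.jpg">'
                    "I don't like sunny days. Do you?</a>"),
    "categories": ["personal"],
}
body = xmlrpc.client.dumps(
    ("yourBlogID", "yourUserName", "yourPassword", post, True),
    methodname="metaWeblog.newPost",
)
print("<methodName>metaWeblog.newPost</methodName>" in body)
```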

The one difference between the Radio call and the MT call is the "categories" parameter--which MT doesn't support. MT supports only the setting of the title, description, and date created for the post, as well as a few MT extensions. However, there's a way in MT of setting a post's category, after it was published, with the following call:

<?xml version="1.0" ?>
<methodCall>
 <methodName>mt.setPostCategories</methodName>
 <params>
  <param><value>postID</value></param>
  <param><value>yourUserName</value></param>
  <param><value>yourPassword</value></param>
  <param>
   <value>
    <array><data>
     <value>
      <struct>
       <member>
        <name>categoryID</name>
        <value>1</value>
       </member>
       <member>
        <name>isPrimary</name>
        <value>true</value>
       </member>
      </struct>
     </value>
    </data></array>
   </value>
  </param>
 </params>
</methodCall>

The postID used is the postID received from the previous call. Note here that the category used is a "categoryID" rather than the name. Radio will ignore the category if it doesn't exist. To obtain the categoryIDs in MovableType, you can call the "mt.getCategoryList" function. Radio uses a MetaWeblog API call to return the categories, "metaWeblog.getCategories", which is the only function in the MetaWeblog API that MT doesn't implement (presumably because it already includes a method for that).
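The same call sketched in Python--the category ID would come from a prior mt.getCategoryList call; credentials and IDs are placeholders:

```python
import xmlrpc.client

# An array of structs, one per category; isPrimary marks the main category.
# The member names follow the request shown above.
categories = [{"categoryID": "1", "isPrimary": True}]
body = xmlrpc.client.dumps(
    ("postID", "yourUserName", "yourPassword", categories),
    methodname="mt.setPostCategories",
)
print("<methodName>mt.setPostCategories</methodName>" in body)
```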

conclusions

As we can see, both Radio and MovableType support more complete functionality than Blogger, although MT requires an additional step. The Blogger API 2.0, not yet deployed, provides more functionality but still doesn't provide a way to upload objects, such as images; since it appears that they are actively working on it, that might change.

Finally: as I've said before (and others, such as Marc, have expressed similar views), having a single API to support would be a godsend. Right now, if a tool has to support all three applications there's no way except to create three different implementations, as this example shows, even for something as simple as creating a post with categories and one image. Hopefully this will change in the future! As Dave was pointing out, the new Blogger API will not be backward compatible, and the document confirms that. Since so many things are going to break anyway, it would be worthwhile, IMO, to use this opportunity to get all the APIs to agree. Of course, this would mean other applications would have to update as well, but this would be a good time to make the change and create a common base for future development.

Categories: technology
Posted by diego on May 6, 2003 at 12:16 AM

blogging APIs - the comments

There's a cool discussion happening in the comments section of my review of blogging APIs. It sparked quite a lot of interest, and there's excellent feedback from most of the major weblog tool creators, both in comments on the entry and through links, from Dave, Anil, Jason, and other notables, such as Marc and Sam. It even hit the top-50 at blogdex! Very cool.

I am now working on a follow-up which, aside from incorporating new stuff from the comments (for example, Jason pointed to a live link of the Blogger API 2.0), will also include a mini how-to for using the APIs-- as I've said in one of the comments, while the specs are complete, it's difficult to get started without "live" examples. Sam proposed a good idea: come up with a "test case" and show how it would be done in each of the APIs. It should be up later today.

Categories: technology
Posted by diego on May 5, 2003 at 7:40 PM

computers and phones

Jon Udell talks about his first experience with SpiderPhone, a nice tool that integrates phone calls into your PC environment. I've always thought that eventually clevercactus would have to do something like this in some form (following the rule of "if it's something that has to do with communication, you should be able to track it, correlate it, organize it, etc."). Although the procedure to link the web-app and the phone call is a bit lopsided (as Jon describes it), at the moment there's little else that could be done. I remember when Microsoft came out with a phone of its own about 4 years ago (no link since I can't remember the product name, and looking for "microsoft phone" only brings up mobile-related stuff). Back then, I was hopeful that it would be the beginning of an integration that is long overdue, but it seems to have vanished without a trace. Good to see the idea lives on.

Categories: technology
Posted by diego on May 5, 2003 at 5:11 PM

esther dyson does blogging

A couple of weeks ago Esther Dyson started a weblog, Release 4.0. She is not only the daughter of Freeman Dyson, but she's been involved in tons of cool and/or influential things, particularly from the side of policy, for example, ICANN (which is influential, not cool :)) and the EFF (which is both). Great reading.

Categories: technology
Posted by diego on May 4, 2003 at 2:45 PM

daily news screenshots

I missed this last week somehow: Don Park has posted some screenshots and a list of planned features of new software he's starting to develop, Daily News:

Implementing just these features will be difficult, but it feels good to let some of them out. What I am striving for is a UI that is as easy to use as a real newspaper and allows people to create, modify, and share personal newspapers.
Looks excellent!

Categories: technology
Posted by diego on May 3, 2003 at 11:30 PM

a review of blogging APIs

As I was looking again at the space of remote-access APIs for weblog software (working on the XML-RPC Weblog Sync feature of clevercactus), I found that there was no side-by-side comparison of the main available APIs, or list of links of material to read. So here goes, in the hope that it will save time for others in the future. :)
Update: May 6 -- here is an introductory mini-howto I posted tonight on blogging APIs.
Another update: May 17 -- a follow up with a reply to some of Evan's comments, and some more discussion of the Blogger API 2.0.

As always, comments & corrections are most welcome!

Why an API?

First of all, why is an API necessary? The answer to this question is useful to me in evaluating how the APIs stack up against each other. In my view, the API should ideally provide programmatic access to all the functionality in the product. Weblog software is typically server side (even Radio, which is client-side, is actually running a web server in the client), and so the APIs are crucial to provide client-side access when needed. We don't live in an ideal world however, so at a minimum the API should provide:


  • The ability to create new posts (specifying not just content but dates, categories, etc.)
  • The ability to edit posts (again, including modifying not just content, but also dates, categories, etc.)
  • The ability to discover information about a user's accessible weblogs, information on how many posts there are, and so on.
  • The ability to retrieve any post, or a set of posts.

These are the main tests that an API has to pass to be truly useful as far as I'm concerned. Reading recent posts is not as important, IMO, since you can obtain those by reading the RSS feed of the blog anyway.

As a side note: these are my ideal features for a weblogging API for the usage I have in mind, which is letting cactus deal with weblogs as another data source/sink, which would in the end let weblogs replace emails in many situations, particularly for mailing lists. You would be able to create a full "backup" of your weblog in your local machine. You would be able to sync the contents of a space to a weblog, post new items, and read the feeds, creating a "roundtrip experience" on the client, while retaining the flexibility to use the weblog (both for reading and writing) for the situations when that's necessary. That is, let the user choose the appropriate tool for the appropriate time. Other applications will certainly have different needs, but I think that what I'm looking for is more or less the high end of what you'd want to do (at the moment--who knows where we'll be in six months!).

On to the APIs that are available.

What's out there: an overview

First, there seems to be a growing proliferation of blogging APIs. There's a ton of open-source blogging software coming out, and some of it implements its own remote-access API as well. This only creates confusion and problems for developers.

One bright spot is that, so far, all the APIs are being written on top of XML-RPC, although I've seen whiffs of SOAP implementations. The fact that XML-RPC appears to be the de facto "standard" feels strange: usually it's the high-level API that should be the standard, usable over any number of transports (e.g., XML-RPC, SOAP, BEEP, whatever). However, this is the situation at the moment, and it doesn't appear that there is a huge drive to change it for the foreseeable future. At a minimum, the fact that they are all written in XML-RPC means that there are fewer transport headaches.

As far as what I reviewed, there seemed to be three main API contenders: the Blogger API v1.0, the MetaWeblog API, and the LiveJournal API (those are the original source links; Userland maintains a main list of APIs here). Those three main APIs mirror three of the main blogging tools available: Blogger (now owned by Google, as we all know), Radio and LiveJournal. MovableType, also very popular, only adds extensions on top of the Blogger and MetaWeblog APIs, as I mention below. The main developers of these APIs have been Evan (Blogger) and Dave (MetaWeblog). LiveJournal doesn't seem to identify with any particular authors per se. Ben & Mena, as developers of MovableType, also hold some sway in the community (I would think).

A developer preview of the Blogger API 2.0 was released last December, but the location given for the document now responds 'file not found' (which might be related to the Google purchase of Pyra and their subsequent adjustment of plans--I don't know). Strangely, the release was not mentioned in Evan's site, but it was mentioned in Dave's and Ben & Mena's, as well as in a zillion other blogs. Evan did respond to comments from Dave, who was asking for a move towards uniformity so that only one API has to be supported.

Of those three, the APIs to focus on are, IMO, Blogger 1.0 and MetaWeblog. All major blogging software implements both (with the notable exception of Blogger), and other, less known but still popular packages such as blojsom do as well. LiveJournal supports their own API and nothing else, and no one else seems to support the LiveJournal API. As far as I'm concerned, that makes it a closed system and so third in interest by a mile. Why would you write software that works with a single vendor? Microsoft might be able to pull that off, but that has to do more with monopolies and bulging bank accounts than with developers liking the idea. That said, the LiveJournal API is actually quite complete. If it was more widely supported, or if you really, really needed interaction with LiveJournal, it would be worth taking a second look. The Blogger 2.0 API is also out of the race for the simple reason that access to it is for developers only; it's still in flux, not deployed, and not widely supported. It seems that the API is moving closer to the functionality of the metaWeblog API, but as Ben commented there is still room for improvement and a lot of confusion built in. In particular, the new Blogger API, while being incompatible with 1.0, retains the method call names, something that's not, shall we say, ideal. Finally, MovableType has its own extensions to the APIs, but since it implements both Blogger and MetaWeblog I won't go into those. ManilaRPC is probably the oldest blogging API there is, but again since it applies only to Manila, and no one else supports it, I won't go into it in detail.

The comparison: Blogger API vs. MetaWeblog API

Before we get into the specifics, I can give one general impression: the Blogger API is a joke, and a bad one at that. This is probably to be expected since it's about two years old, but it's really disappointing to see that one of the most used weblog tools has such crappy developer support. The MetaWeblog API is much better, although it has one or two shortcomings as well (measured by the "tests" I mentioned above). The Blogger API has an incredibly simple-minded view of what a weblog post is, and it is completely, and I mean completely, inappropriate for what people do with weblogs today, from posting images to using audio and so on.

To see why, look at the following lists of methods in each API:

Blogger API 1.0
blogger.newPost: Creates a new post.
blogger.editPost: Edits a given post.
blogger.getUsersBlogs: Returns information on all the blogs a given user is a member of.
blogger.getUserInfo: Authenticates a user and returns basic user information.
blogger.getTemplate: Obtains the main (or archive) index template of a given blog.
blogger.setTemplate: Sets the main (or archive) index template of a given blog.

MetaWeblog API
metaWeblog.newPost: Creates a new post.
metaWeblog.editPost: Edits a given post.
metaWeblog.getPost: Obtains a given post.
metaWeblog.newMediaObject: Uploads an image, movie, song, or other media object from a user's computer to the user's blog.

On the surface, the Blogger API appears to be more complete than MetaWeblog. This is not the case, however, and there's a simple reason: the Blogger API considers the content of a post to be simply a string, with no parameters whatsoever allowed. For example, when you editPost using the Blogger API, you can't change the date of the post, or do anything except modify its main text content. Yuck. Again, this comes from a simple-minded view of what a weblog post is, and one that is entirely inappropriate for the use most bloggers give to their blogging tools today.

The MetaWeblog API is, as I said, much, much better. It considers post content to be structures rather than simple strings, and the structures are elements of the item tag in RSS, so you can create essentially anything that you can read. Furthermore, it almost passes all of the "tests" I mentioned above. It doesn't have a way to query for a user's blogs or obtain information, and it doesn't let you edit templates etc. That said, all tools that I've seen implement the MetaWeblog API also implement the Blogger API, so you can use the getUserInfo call from Blogger and then use MetaWeblog for postings.

In summary: MetaWeblog API, good. Blogger API 1.0, bad. The Blogger API 2.0 apparently fixes many of these problems; let's hope it gets (re-)released soon.

In the end however, it's unlikely that any developer, particularly small developers, can choose one over the other. We will have to work with both for the foreseeable future, supporting other APIs like LiveJournal separately when necessary.

Final thoughts

A few things that will be important in the future, and are still missing: the ability to edit categories, and to specify things like RSS feed targets, among others. This of course falls into remote configuration of weblog software, rather than use of the software itself. This area hasn't been explored at all, but I think that as more people use blogging software the need to have simple, client-side configuration tools will grow.

Categories: technology
Posted by diego on May 3, 2003 at 2:29 PM

don't forget IrDA

Mobitopia

With all the hoopla surrounding bluetooth in the past few months, one could justifiably think that the idea of transparent, zero-configuration synchronization was invented a couple of years ago. But we shouldn't forget the trailblazer, and the technology that is essentially the "spiritual predecessor" to Bluetooth: IrDA.

IrDA started out with precisely the aims of Bluetooth: to provide the ability to easily connect devices in a short-range network, for syncing and basic communication. At the beginning (say, 10 years ago), there was much discussion of IrDA being used on everything from printers to portable devices, thus solving the "cable problem" once and for all. Sadly, IrDA had a couple of big problems: directionality (sender and receiver have to be more or less aligned) and its one-to-one nature.

At the beginning, configuring IrDA was a nightmare, but as with everything, it got better and better. Today, I can turn on the IrDA capability in my Nokia 6210 (yes, I own a 6210, please don't laugh) and align it with the IR receptor in my notebook (a Thinkpad T21), and immediately I get a popup identifying the connection to the phone. Windows (and, I assume, Linux as well) comes with the built-in capability to identify that connection as a "phone line" of sorts, allowing you to create an Internet dial-up connection just as you would with a landline. And Nokia, like other cellphone makers, provides a nice package of tools that you can install on your PC to sync the contents of the phone (including SMS), which is useful not only as backup but also to create content on the PC (say, memos, calendar entries...).

So I find it heartening that all the new phones that I've seen (Nokia's 7210, 3650, SonyEricsson's P800) come with Bluetooth and IrDA. This is brilliant: it's going to be tough for a while to find Bluetooth-enabled PCs and notebooks, while all notebooks have IrDA, and getting an IrDA receiver/transmitter for a PC is easy and, more importantly, cheap. Sure, it won't be as convenient as Bluetooth, but it's still useful.

So, here's hoping that new devices, even ones that are in the pipeline now, will still support IrDA. It will remain useful for the next couple of years, as Bluetooth becomes more widely deployed and a common built-in option in new systems.

Categories: technology
Posted by diego on May 2, 2003 at 7:58 PM

anonymous blogging

This is interesting: Invisiblog.

"invisiblog.com lets you publish a weblog using GPG and the Mixmaster anonymous remailer network. You don't ever have to reveal your identity - not even to us. You don't have to trust us, because we'll never know who you are."
As far as anonymity goes, this is probably the furthest you can go (okay, if, as they mention on the site, you were using Freenet, then it would be really, really difficult to trace the origin of a packet, let alone of a post).

That said, I have also been thinking about PKI in the context of blogging, and what it could do in terms of allowing or disallowing comments or even referrers that are not from sources that authenticate themselves properly. A few months ago there was a surge of blog-spam, both on comments and on referrers (I got hit by a few of both), but it has subsided. If it reemerges, we might have to find a way to deal with it, just as with email, and 'signed' posts, or 'signed links' might be one part of the solution.

Categories: technology
Posted by diego on May 2, 2003 at 4:10 PM

more on spam

Continuing yesterday's thread of links, an excellent article by Brad Templeton: Reflections on the 25th anniversary of spam.

Categories: technology
Posted by diego on May 1, 2003 at 11:45 PM

sms and mms--not

Jim has a great posting on Mobitopia today about SMS and MMS-- and why they might be going the way of the 8-track tape.

Since we're more or less on topic: I would have really liked to go to the Symbian exposium, not just for the conference itself, but to meet the other Mobitopians. Too much work on the cactus side of things for that though (a new rev of the beta is being released tonight), and on top of that the term is ending at Trinity and I couldn't just vanish. At least I could get a bit of it vicariously through weblogs. Oh well. Next time.

Categories: technology
Posted by diego on May 1, 2003 at 5:31 PM

risk? what risk?

An article by Andrew Leonard on two new books:

Complex financial instruments have made Wall Street incomprehensible to the average consumer -- and allowed "experts" to make fortunes. Two new books remind us that swindlers may have always been with us, but that today they are running the show.
This is not just the fault of those who peddle services that no one understands, it's also the fault of those who plunge in, dollar signs in their eyes (Granted, there are many people that are simply investing their life savings and should be secure, and in any case, that doesn't excuse reckless behavior on the part of "risk managers"). I am reminded of that episode of the Simpsons where Homer puts the family's life savings into a company called "Animotion". He goes to buy the stock and the stockbroker says: "Sir, are you sure you understand the risks of stock trading?". Cut to those priceless images of what goes on in Homer's brain and you see a display of dancers, Homer in the center, and a giant gorilla that appears behind curtains, growling and holding fistfuls of dollars, while Homer and the dancers sing: "Bring in the money... Bring in the money...". Cut back to the stockbroker, who is waiting for Homer's answer, and Homer says: "You heard the monkey. Make the trade."

LOL!

Since we're on Simpsons'-remembrance mode, that episode also has a great bit when Homer's checking the moves on the Animotion stock he just bought. He calls an automated phone service with voice recognition. On the other side there's one of those computer voices that speak-in-unnatural-breaks.

Computer: Please say the name of the stock you wish to check.
Homer: Animotion.
Computer: Animotion. Up-one. (or something of the sort).
Homer: (Celebrates) Yahoo!
Computer: Yahoo. Up-ten.
Homer: (looking puzzled at the phone) Eh? What is this crap?
Computer: FOX Networks. Down-eight.

Hilarious.

Categories: technology
Posted by diego on May 1, 2003 at 5:18 PM

waiting for DSL

Since this happened in the middle of the push for the release I haven't had time to comment on it. But here goes.

On April 11 Eircom (the local phone monopoly) started offering a lower-speed DSL service than the one they had before. This one is 256-512 Kb/s, max 4 GB transfer (don't get me started on the upper-bound of the transfers) and costing Euro 50 a month, while the previous service they had available was 512 Kb/s, max 6 GB transfer, at Euro 110 (!!) per month. I would have signed up for the other service, even at the outrageous price, but Eircom people said that "my line didn't qualify". What this meant was a mystery for months, until, when calling about the new service, I was told that apparently my line didn't qualify because of the distance to the exchange, that is, at the distance I was from the exchange they couldn't guarantee 512 Kbps and so they wouldn't give me service at all.

So now, with the new service, where the guarantee is minimum 256 Kbps, apparently they can offer the service. So far, so good. Okay, maybe not so good, but so far at least.

Now, I ordered the service on April 11th. That is, almost three weeks ago. I ordered a self-install package, since I figured that would be faster, and cheaper (although not much--just Euro 50 cheaper). Back then, they said that it would take 10 days for the package to arrive, and that in the meantime I'd get a call informing me that they had activated DSL at the exchange and I could begin to use it. I got a first call the next day, saturday, at 3 pm, giving me my username and password for PPPoE. I was slightly annoyed at the timing (I mean, they can call you whenever the hell they please, but you, oh no, you can call THEM only on business days, 9 to 5, yes sir), but at least things seemed to be moving.

No such luck.

The modem package arrived after two weeks, last friday. I connected it and had to spend an hour fiddling with it, since the default settings are not what the manual advertises. Even after repeated resets (holding the reset button while you turn on the modem and then keeping it pressed for one whole minute) it still had the wrong settings. So in the end I guessed an IP/netmask, with an empty gateway, and I could connect to the modem and reset the DHCP configuration in it. BTW, this modem is one of those dreaded modems that come predefined with a root password of "1234". This password lets you not only configure the modem whichever way you please, but also sniff packets, set up sites that can't be visited, configure filtering, and it gives you access to the PPPoE password if you had it configured in the modem. Nice, eh? Guess how many people will even know that password exists, let alone that they have to change it.

Anyway, so now I've got the modem connected to the PC, but there's no DSL signal. I've called Eircom twice since Friday and they always speak in the usual "astrologer's babble," as I call it: "We are not sure when, but it will start working. Soon." To which I say, "Soon? What do you mean, soon? Soon tomorrow? Soon Friday? Soon next month?" And the utterly friendly support person replies, "Impossible to know. They will call you."

I hate it when they do that. Apparently they can't give a straight answer. Of course, it's not believable that they don't know. They should just get on the cluetrain and stop treating users as if they were on a "need-to-know" basis.

So here I am, waiting for that magic call that will say that the modem has been activated. In the meantime, I have it turned on and I am actively polling the connection, so it's likely that I will know before they call. Hopefully it won't take forever.
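The "active polling" is nothing fancy; a minimal sketch of it in Python (the target host, port, and interval here are placeholders for illustration, not what I actually use):

```python
import socket
import time

def connection_up(host="www.example.com", port=80, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def wait_for_dsl(interval=60, max_checks=None):
    """Poll until the connection comes up; returns the number of checks made."""
    checks = 0
    while True:
        checks += 1
        if connection_up():
            return checks
        if max_checks is not None and checks >= max_checks:
            return checks
        time.sleep(interval)
```

A TCP connect is used instead of a ping so it works even where ICMP is blocked.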

Categories: technology
Posted by diego on April 30, 2003 at 9:49 PM

on spam

As I've been thinking about spam control with clevercactus, I noticed several spam-related news items in recent days: a couple of days ago the New York Times ran an editorial, Crack Down on Spam, that was good. Then on Monday AOL, Microsoft and Yahoo! announced they would be coordinating anti-spam efforts. And in Virginia, in the US, "fraudulent" spam was made illegal. Patience with spam seems to be past its limit. I know mine is.

Categories: technology
Posted by diego on April 30, 2003 at 8:00 PM

music moves

A couple of things of note have happened during the last week regarding digital music. First, late last week a judge delivered an impressive victory for Grokster and Streamcast (developers of Morpheus). Whether it is upheld on appeal or not is another matter. And then Jobs made Apple music-store rumors history by actually coming out with the announcement as expected (without his usual "and-here's-something-you-didn't-really-expect" routine), although apparently the product is very good--too bad it's available only for Mac. Regarding the Apple announcement, Arcterex posted a rant that, although a bit harshly worded :-) is quite appropriate. Whether it was Ogg Vorbis or MP3, Apple choosing yet another digital format wasn't so cool--although it might be understandable from the business side of things, because it's not as if you're going to control where or how MP3s are copied, right? Karling gave it a try and posted her impressions, with an update later.

I'd say that, if anyone can pull this off, it's Apple. It would be ironic if, in a few years, Apple ends up being a more successful music distributor than the record companies themselves. Already Universal Music is shopping itself around. Others could follow if the current slump extends. If the device makers end up owning or controlling the music distribution channels, the world of "music for free" will be pushed underground. Sony already distributes products that exert some level of control over what you can do with the digital media, which is understandable from their POV since they also own Sony Music. Maybe this is what it had to come down to: the record industry couldn't do it by itself. The technology companies were just the next in line.

Categories: technology
Posted by diego on April 29, 2003 at 9:27 PM

udell on weblogs

Jon Udell has a cool entry today on his latest take on weblogs. Right on the mark, as usual.

There are many, many things that I learned reading Udell's Byte columns in the early and mid-90s. Then Byte went dead when CMP acquired it; it is back now, sort of, as the site demonstrates, but the glory days are past. Still a good read though.

Categories: technology
Posted by diego on April 29, 2003 at 9:11 PM

cringely on open source

Some Machiavellian ideas from Robert X. Cringely on what he would do if he had to defend against open source (near the end of the article). Nasty.

Categories: technology
Posted by diego on April 29, 2003 at 9:05 PM

gerard's weblog

New in the blogsphere: Gerard Collin. Gerard's been really helpful in sending reports for spaces and now clevercactus.

Here's his first entry. Welcome! :-)

Categories: technology
Posted by diego on April 29, 2003 at 11:51 AM

same old microsoft - take 3

Okay, last one on this subject for a while-- I don't want the blog to turn into a pro-anti-or-whatever-microsoft discussion. BTW, Murph emailed me a pointer to this article from The Register on the default setups of OSes, and how they affect security. Very interesting, particularly the MS-vs-Solaris comparison.

On to the subject, Murph made more comments to my previous reply:

But they don't bundle the things that they're talking about leveraging (SQL server and exchange) and that's why its a bad article. Is novell going to be any different for packaging the AMP from LAMP into Netware?? Only because its not MS.

OTOH talk about the added value of combing (say) Outlook and Exchange and how that in turn stuffed Groupwise I'll be cheering you on...

Or, to come down a step, about the way they roll UI changes round between office and the desktop.

Two points I want to make here.

1) Murph says "Is novell going to be any different for packaging the AMP from LAMP into Netware?? Only because its not MS." Exactly! Microsoft holds a monopoly. Holding a monopoly in and of itself is not illegal; using it in predatory fashion is. Monopolists have to abide by a more stringent set of rules than non-monopolists, as far as I'm concerned. If Novell were holding 90% of their market, then we'd be discussing their moves too.

2) Regarding "OTOH talk about the added value of combing (say) Outlook and Exchange and how that in turn stuffed Groupwise I'll be cheering you on...". I think Murph's point was how Microsoft's bundling in the Outlook+Exchange combination made others consider bloatware a good strategy. That is indeed another of the bad sides of massive bundling (even loose bundling, i.e., not built-in but easy to integrate).

I just want to make another clarification: I have argued before (in a discussion similar to this one with Murph, actually :-)) that I am not anti-Microsoft, or pro-Microsoft either. Maybe I haven't made it explicit enough, but my mention in the previous entry of how they could, if they chose to, compete on the merits of their software was along those lines. They have one of the best software engineering organizations in the world. Their products sometimes are not up to par (well, okay, their first releases rarely are), but nobody else has to deploy millions of copies of a first release either. As far as I'm concerned, if they accepted that they are a monopoly, and behaved accordingly, they would be ok in my book. Bundling nonsensically just for the sake of grabbing market share wouldn't matter much: people would choose other products if they thought that was best. The best approach would win.

Agreed, that's a bit idealistic. But these are just ideas anyway. No one gets hurt by saying them out loud. :-)

Enough of Microsoft for a while though. There are a ton of other cool things happening--and who knows, if they get to have enough impact, this whole discussion might be rendered moot anyway! (That's the idealist again talking! :-))

Categories: technology
Posted by diego on April 28, 2003 at 4:42 PM

trackback trouble

Koz apologizes for the multiple trackback pings to my previous entry on clevercactus beta 2. Apparently Movable Type is responding that the ping failed, even when it didn't. Weird. Koz, don't worry, especially since it wasn't your fault! Good to know that can happen though.

Categories: technology
Posted by diego on April 28, 2003 at 12:21 AM

same old microsoft

From The Economist:

Microsoft has said that it will make it easier for rivals to design software to connect with its Windows operating system. However, the company is also about to launch another server operating system that will be bundled and integrated with lots of other Microsoft products: exactly the sort of behaviour that got it into trouble in the first place.

Categories: technology
Posted by diego on April 28, 2003 at 12:15 AM

WiFi and phones: coming together

Mobitopia

Slightly old news--I'm still catching up on some things I wanted to comment on.

A few days ago cisco announced that it would release a WiFi Phone in June. Quote:

The 7920 phone is essentially a wireless version of Cisco's 7960 IP (Internet Protocol) phone, which uses a wired Ethernet connection to make and receive telephone calls. However, the 7920 will have a wireless handset that uses an office's Wi-Fi network to connect. The device will start shipping in June, executives said Friday. Its price has not yet been disclosed.

Now, this is definitely interesting, but the real test will be when one of the cellphone carriers comes out with a (four-band?) 3G-GPRS-WiFi phone. One of the problems of WiFi is, besides its limited range, its power consumption. A WiFi phone would need to be huge for its batteries to last more than a few hours of use (or standby; with active transfers it would last even less). One would imagine that WiFi in phones won't take us back to the brick cellphones of the 80s, and certainly at some point WiFi will become less power-hungry (probably in one of the derivative transports). In the meantime, it makes a lot of sense to have a phone where WiFi can be activated when we know we're in the presence of a hotspot, transparently routing calls through it (hopefully making them cheaper), and then reverting to GPRS or whatever when out of range. Certainly this affects the business models of the carriers, but hopefully they'll see that if they don't do it, someone else will. Just like IP phones have been growing and long-distance carriers have had to take notice, so will the cell phone providers. The next couple of years will show whether they have learned that they can't fight a trend like this, but that they can reap the benefits if they jump on it.
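The routing behavior I'm describing boils down to a simple preference rule; here's a toy sketch (the signal threshold and bearer names are made up for illustration, not from any real phone API):

```python
def pick_bearer(hotspot_in_range, wifi_signal, min_signal=0.5):
    """Route a call through WiFi when a known hotspot has a usable
    signal; otherwise fall back to the cellular network (GPRS)."""
    if hotspot_in_range and wifi_signal >= min_signal:
        return "wifi"
    return "gprs"
```

The real difficulty, of course, isn't the preference rule but handing over a live call when the hotspot fades; the sketch only captures the selection step.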

Categories: technology
Posted by diego on April 28, 2003 at 12:07 AM

management by blog

Found this link somewhere (can't remember where--left the tab in Mozilla open all day...) from Business 2.0: Management by Blog?. I think it is inevitable that weblogs will become more widely used within companies--they're superior in just about every way to mailing lists. Even if you have discussion or sharing software of some sort, it's still a good idea to have a copy on an intranet, and that's a blog, no matter which tool you used to create it.

What I do hope is that they don't start to overhype blogs as "the next silver bullet" for management or whatever... it's just another tool. If the organization is screwed up, no amount of tools you can install will fix it. Magazines have a way of lifting things up only to bring them down, too. So here's hoping.

Categories: technology
Posted by diego on April 27, 2003 at 1:27 AM

six apart announcements

Six Apart (the company behind Movable Type) today made several announcements, including new weblog software, a new board, and an investment. Cool! Congrats to Ben and Mena. :-)

Categories: technology
Posted by diego on April 23, 2003 at 6:10 PM

Chandler 0.1

OSAF has released Chandler 0.1. Russ is a bit disappointed (he also posted a screenshot). I haven't had time to download it yet, much less run it. Interesting though.

Categories: technology
Posted by diego on April 22, 2003 at 5:50 PM

good software takes ten years

I found this old article from Joel on Software: "Good software takes ten years, get used to it." I don't agree with the ten-year rule for adoption (I'd say that a large percentage of the most commonly used software today did not have ten years of evolution under its wing before it was massively adopted, most notably web browsers and a ton of web applications). As for the product becoming mature, and not requiring more updates, ten years is probably right.

In any case, a great read.

Categories: technology
Posted by diego on April 22, 2003 at 12:44 PM

of pet psychics and google servers

An article by Robert Cringely from a couple of weeks ago where, besides describing an amusing anecdote about a 'pet psychic' he consulted, he talks about many disconnected things that are nevertheless interesting. Example:

First there is Google, which runs four enormous data centers around the world containing in excess of 10,000 servers. It is the largest Linux cluster of all, and is constructed entirely of generic beige box PCs interconnected by 10/100 Ethernet. These are not racks and racks of state-of-the-art blade servers, just el cheapo PCs. So the magic must be in the software.

Now here is the part that sticks in my mind: the fault tolerant nature of the cluster is such that if a machine fails, the other machines simply take over its functions. As a result, whenever a server fails at Google, THEY DO NOTHING. They don't replace the broken machine. They don't remove the broken machine. They don't even turn it off. In an army of drones, it isn't worth the cost of labor to locate and replace the bad machines. Hundreds, maybe thousands of machines lie dead, uncounted among the 10,000 plus.

We have reached the point where we are totally dependent on computers, yet the marginal cost of a computer -- at least for Google -- is nothing. This may be an historical first.

Wow.

Categories: technology
Posted by diego on April 21, 2003 at 8:00 PM

Mosaic turns 10

Tomorrow, April 22nd, is the tenth anniversary of the first release of Mosaic. News.com has been running a series of articles to commemorate the event. Good reading, but it omits one glaring fact: browsers existed before Mosaic was released.

Mosaic had one big advantage over the others: it was able to inline images with the text (yes, that was one of the big Mosaic innovations). Other UI elements were also first built into, or refined in, Mosaic. It wasn't the first browser, but it was probably the first one that was usable.

Ah, the memories. I remember the first website I ever did, an intranet for the company I was working for. God it was ugly. And HTML didn't even have tables. Tables!! Windows 95 didn't come with a TCP/IP stack at first, you had to install it yourself... And downloading the first public beta release of Navigator (version 0.9 was it?)... and everyone just switched to it since it seemed like a nicer version of Mosaic.

This is sort of a cliche by now, but... to think that ten years ago the web (as such) didn't really exist: only the technologies that made it possible did. Windows 95 was the greatest thing on the tech horizon. Netscape was a word no one had heard of. And Java was a kind of coffee, or an island in Indonesia, depending on who you asked. When I woke up in the morning, I didn't think of checking my email, and if I wanted news, I had to get the newspaper, turn on the radio, or wait for the TV to give it to me--that is, unless you had CNN (which back then had just emerged as a news powerhouse, after the first Gulf War).

I mean: wait for the news!. What a concept.

Happy birthday, Mosaic!

Categories: technology
Posted by diego on April 21, 2003 at 1:52 PM

why the Apple-Universal rumors

Last week I was ranting a bit about the rumored deal of Apple buying Universal Music. I just saw this article by Robert X. Cringely, where he explains his theory as to why Apple would enter discussions like these at all, and, moreover, why Jobs would allow the story to leak, letting the stock take a hit (which was a foregone conclusion in this market, given Apple's and Universal's situations).

Now, who can say if it's true, but it's a really interesting theory. Or should I say "entertaining"? :-)

Categories: technology
Posted by diego on April 20, 2003 at 2:48 AM

on mobile phones and pricing schemes

Mobitopia

Talk about barriers to entry...

Mobile phone pricing is downright baffling. For all the flexibility that is built into how you pay (contract, pay as you go, subsidized handsets or not, data prices, and so on), it is incredibly complicated to try to make an informed decision on what to buy and from which carrier. Yesterday I made some comments regarding pricing here in Ireland, and Murph corrected me, saying:

Oh dear, you've been done by the misleading (downright bad) way phones are priced - the provider subsidises them and you pay them back for so long as you stick on the same contract.

This is bad because people think phones are cheaper than they are (real UK prices http://www.expansys.com/d_gsm.asp) and because the monthly service charge includes a clawback of the subsidy.

So I don't quite get how giving you a discount (subsidy) of ~300 euros against a simm free price for a 7650 is "ignoring the customer"?

We'd all be better off if they charged the rrp for the phones (with nice consumer credit arrangements to spread the cost) and we purchased the service separately - which is more or less what I do (-:

Obviously I was wrong about the UK prices, but I want to point out that the prices I was mentioning for the phones are not SIM-free. They require a contract, 12-month minimum. SIM-free prices are twice that at least. The 7650 can be purchased SIM-free for Euro 469 at Vodafone, and about the same price at O2. The UK still wins on that, I think.

Now, my question is, why does this confusion happen at all? Certainly one could see the "logic" in keeping prices obscure and difficult to understand: the carriers see fewer users switch, since it's not clear at all whether you win or lose by switching (while the problems of switching are clear). But in the end, this kind of behavior on the part of the companies simply means angry customers who will switch to something else the second it becomes available.

On top of that, carriers play certain games with their offerings that are difficult to understand as well. I went into a store today to inquire about the SonyEricsson P800 and the Nokia 3650. While the 7650 is available here in Ireland, those two phones aren't. The reason? According to salespeople from Vodafone and O2, the carrier hasn't "certified" the products to be sold.

Excuse me? Certified?

To me, plain and simple, they prefer to keep milking their existing product lines with high prices, since they know that once higher-end offerings are available the basic phones must be sold for less. This is ridiculous in more than one sense: since the added services (for which they have invested billions in infrastructure) should bring in more money, it's in their interest to move users to the new platform.

And, while we're at it, how can this happen at all? The EU was supposed to bring these barriers down, wasn't it? This might (emphasis on might) be understandable in the more fragmented US market, but when you've got basically two major players (Vodafone and O2) in Europe, there are no market reasons for this to happen. The P800 has been available for some time in other European countries. And this is not new. I remember when the first GPRS phones appeared almost two years ago (e.g., the Nokia 6310); it took them about nine months to show up here in Ireland. The carriers have no problem hyping the technology and spending millions on advertisements, but somehow they won't, or can't, deliver it.

The carriers should wake up: giving customers more choice is good. Clearly showing the costs of something is good. They should fight on features and service, instead of artificially maintaining their market position. Once that happens, the market will expand, I'm sure, and the lives of developers and users will be easier as well.

Categories: technology
Posted by diego on April 17, 2003 at 1:04 PM

a chat with russ

Had a cool chat with Russ tonight... he showed me a short spec of the new stuff he's working on, VERY cool. Can't wait to see it running!

I gotta get myself one of these intelliphones... for cool new things like what Russ is working on, and also to test spaces synchronization against it. Finally the Nokia 7650 can be obtained here in Ireland... but the 3650 is better, and while they supposedly have it, most stores don't. On top of that, it's really expensive, even with a contract. The 7650, which was one of the first intelliphones out there, can't be obtained for less than $100, and that's if you sign a contract for Euro 50 a month for a year at least. My own provider (O2) will not upgrade me for less than Euro 200 (!!) because I haven't spent more than Euro 1500 a year (!!!!). Talk about ignoring the customer. And all that with the UK having incredible prices... shipping across the Irish Sea must be really, really expensive, no?

Oh well.

Categories: technology
Posted by diego on April 16, 2003 at 12:33 AM

apple goes into... music

What?

Yes, apparently.

This NY Times article says that they are going to come out with an online music store. (Yes, the article focuses on them investing in Universal music, but that's not really the important part IMO).

What are they thinking? I understand it must be tempting, since their users are loyal and iPods are flying off the shelves. It might even make some money. But isn't it stretching Apple's capabilities too far? Isn't Apple a technology company, not a music service, or whatever? Wouldn't it be better for them to concentrate on what they know how to do?

I dunno.

A net-second later: I found this News.com story that says that Apple is looking at buying Universal Music (?!?!?!). That makes even less sense. But I guess that since Jobs has some knowledge of managing a content-creation business through Pixar, it wouldn't be an outright disaster right away. Nevertheless, I'm not too confident it would work out in the long run.

Categories: technology
Posted by diego on April 13, 2003 at 12:06 AM

steal this barcode

Found through this Salon article: Re-code.com:

The Web site Re-Code.com parodies the design and chipper lingo of Priceline.com's "name your own price" shopping site. It invites shoppers to "recode your own price," by making their own barcodes using the site's barcode generator. The theory: There's just a 10-digit number standing between you and a better deal on anything that you want in a store, and this site will help you crack the code.

One of those strange things that the net makes eminently easy. The article wonders: "Is it social commentary, or shoplifting?" On one hand, it certainly sounds interesting that this is possible. On the other, it doesn't feel entirely right. One thing's for certain: it's yet another way in which the Internet has exposed the problem with "old techniques". Same as in the music industry, where before you could copy things if you wanted, but the quality was probably not good, and it was a chore. Suddenly it's easy, and everyone is doing it (not that everyone is stealing barcodes, I assume...). But it does come from an unexpected source, doesn't it?
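For a sense of how little stands between a number and a scannable label: a standard UPC-A barcode is just 11 data digits plus one check digit, which anyone can compute. A sketch of the standard check-digit algorithm (this is the published UPC-A rule, not Re-code.com's actual generator):

```python
def upc_check_digit(digits):
    """Compute the UPC-A check digit for an 11-digit code.

    Digits in odd positions (1st, 3rd, ...) are weighted 3, even
    positions 1; the check digit brings the weighted sum up to a
    multiple of 10.
    """
    if len(digits) != 11 or not digits.isdigit():
        raise ValueError("expected exactly 11 digits")
    total = sum(int(d) * (3 if i % 2 == 0 else 1)
                for i, d in enumerate(digits))
    return (10 - total % 10) % 10

# The full printable code is the 11 data digits plus the check digit:
full_code = "03600029145" + str(upc_check_digit("03600029145"))  # "036000291452"
```

Rendering the digits as bars is just a fixed encoding table on top of this, which is why a web form can spit out a working barcode.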

Categories: technology
Posted by diego on April 12, 2003 at 4:43 PM

blogs, scopes, and human routers

Jon Udell has posted an interesting entry on the nature of distributed communications:

If I am seeking or sharing information, why do I need to be able to address a group of 3 (my team), or 300 (my company), or 300,000 (my company's customers), or 300 million (the Usenet)? At each level I encounter a group that is larger and more diffuse. Moving up the ladder I trade off tight affinity with the concerns of my department, or my company, for access to larger hive-minds. But there doesn't really have to be a tradeoff, because these realms aren't mutually exclusive. You can, and often should, operate at many levels.

Categories: technology
Posted by diego on April 10, 2003 at 5:36 PM

an interview with the inventor of mobiles

Mobitopia

Martin Cooper, who made the first cell phone call, speaks out in this interview against "featuritis," arguing for solving the problem of giving good voice service first, and then worrying about the rest. It reminds me of the Motorola dreamers who came up with Iridium (remember Iridium?), who were clobbered because the coverage people already had was "good enough" and not everyone needed to make phone calls from the South Pole. It is true that quality is still not as good as it should be, considering all the money being spent on multimedia services and such. But to me, once we get good data transmission, the issue will be moot. We have problems now because it's analog. With digital, and enough bandwidth, quality could easily be set by the user (more bandwidth == more quality == more money, and so on). Regulation is the main factor, I think, as I've argued before. One of those cases where less is more. :-)

Categories: technology
Posted by diego on April 7, 2003 at 11:04 PM

TEXTing in the US

Mobitopia

The Economist has an interesting article this week about why TEXTing hasn't taken off in the US. The article states:

[...] although texting has become commonplace in Europe and Asia, it has failed to take off in [...] America. Globally, the average number of messages sent or received each month by a mobile subscriber is now around 30, or one message per day. Each message costs an average of $0.10 to send. In some parts of Asia, such as Singapore and the Philippines, where large numbers of free messages are thrown in with monthly pricing plans, the number of messages sent per subscriber per month is as high as 200. But the figure for America is just over seven, according to the Cellular Telecommunications Internet Association, an industry body. Why is such a high-tech nation eschewing texting?

The short answer is that, in America, talk is cheap. Because local calls on land lines are usually free, wireless operators have to offer big “bundles” of minutes—up to 5,000 minutes per month—as part of their monthly pricing plans to persuade subscribers to use mobile phones instead. Texting first took off in other parts of the world among cost-conscious teenagers who found that it was cheaper to text than to call, notes Jessica Sandin, an analyst at Baskerville. But in America, you might as well make a voice call.

That's really only part of the story. There are many uses for texting in short communications that have nothing to do with call cost. Making a call in those situations is simply more cumbersome and generally unnecessary.

Example: I am going to a meeting, but running late. A co-worker, who is already at the meeting, is stalling for time while I arrive. In this situation, you don't want to interrupt the other person. So sending a message that says "Stuck in traffic. Be there in 10 minutes" is much better than calling. All the necessary information is in the message. My co-worker can get it without having to interrupt his conversation, and if necessary he can give me a call. Sending a text message is clearly better in that situation, and there are many other cases. Anything that requires specific, short communication is better served by messages (another example: a wife, while bathing the baby, texts her husband at the supermarket: "Remember to buy lettuce!" or whatever).

The bigger factor to me, then, is implementation: since the networks in the US are generally incompatible for cross-network text messages, people can't really use them efficiently. I think that once that's solved, texting will take off, just like everything else. For proof, look at the success of the BlackBerry, which has been mainly in the US, and which is a more expensive (and fancier, true, with more features) version of the same concept.

Categories: technology
Posted by diego on April 5, 2003 at 2:50 PM

3G and WiFi

A collective effort from us mobitopians: 3G vs. WiFi.

Categories: technology
Posted by diego on April 5, 2003 at 2:36 PM

ventureblog

[via Russ]: Vent