alarm! alarm!

I was woken up this morning (before 7 am) by the horrible screeching of the building's fire alarm (yes, the building's, not my apartment's). It's a pretty bad way to wake up. I was about to go down and see what was happening when it stopped. This has happened once before. Needless to say, I couldn't go back to sleep, and it took me a while to get the sound out of my head and to quell a rising headache. Then I got to work, wrote a bit, etc.

A few minutes back I was right "in the zone" and the alarm went off again. Damn. I went down. "They are fixing it". Right. Then it stopped.

About 10 minutes ago it started up again and they are stopping it and starting it intermittently as they try to fix it. It's impossible to concentrate. If this keeps up, I'll go for a walk. And hopefully they'll get it fixed for the rest of the day at least, so I can get some work done!

Ah, the wonders of technology.

Later: The alarm stopped. I went out for a walk anyway. Brrr, it's cold. The weather report says "5 C, feels like -1 C". I believe it. Plus, the weather's crazy: the winds are so strong that whole stormfronts move in and out of sight within minutes (I'm not exaggerating). When I got back, the alarm started up again. Still fixing it, it seems, and we building-dwellers can do nothing but wait (and complain :)).

Categories: personal
Posted by diego on January 11, 2004 at 3:04 PM

postel's law is for implementors, not designers

Another discussion that recently flared up (again) is regarding the applicability of constraints within specifications, more specifically (heh) of constraints that should or should not be placed in the Atom API. The first I heard about this was through this post on Mark's weblog, where among other things he says:

Another entire class of unhelpful suggestions that seems to pop up on a regular basis is unproductive mandates about how producers can produce Atom feeds, or how clients can consume them. Things like “let’s mandate that feeds can’t use CDATA blocks” (runs contrary to the XML specification), or “let’s mandate that feeds can’t contain processing instructions” (technically possible, but to what purpose?), or “let’s mandate that clients can only consume feeds with conforming XML parsers”.

This last one is interesting, in that it tries to wish away Postel’s Law (originally stated in RFC 793 as “be conservative in what you do, be liberal in what you accept from others”). Various people have tried to mandate this principle out of existence, some going so far as to claim that Postel’s Law should not apply to XML, because (apparently) the three letters “X”, “M”, and “L” are a magical combination that signal a glorious revolution that somehow overturns the fundamental principles of interoperability.

There are no exceptions to Postel’s Law. Anyone who tries to tell you differently is probably a client-side developer who wants the entire world to change so that their life might be 0.00001% easier. The world doesn’t work that way.

Mark then goes on to describe the ability of his ultra-liberal feed parser to handle different types of RSS, RDF and Atom feeds. (Note: I agree with Mark that CDATA sections should be permitted, as per the XML spec.) In fact I agree with Mark's statement itself; what I disagree with is the context in which he applies it.

Today, Dave points to a message on the Atom-syntax mailing list where Bob Wyman gives his view on the barriers created by the "ultra-liberal" approach to specifications, using HTML as an example.

I italicized the word "specifications" because I think there's a disconnect in the discussion here, and the context in which Postel's Law is being applied is at the center of it.

As I understand it, Mark is saying that writing constraints into the Atom spec (or any other, for that matter) should be avoided when possible, because people will do whatever they want anyway, and that's not a big deal (he gives his parser as an example). But whether his parser, or any other, can deal with anything you throw at it is beside the point. If anything, it proves that Postel's law is properly applied to implementation; it doesn't prove that it applies to design.

Mark quotes the original expression of Postel's Law in RFC 793, but his quote is incomplete. Here is the full quote:

2.10. Robustness Principle

TCP implementations will follow a general principle of robustness: be conservative in what you do, be liberal in what you accept from others.

(my emphasis on "implementations"). The comment in the RFC clearly states that implementations should be flexible, not the spec itself. I agree with Mark's statement: there are no exceptions to Postel's law. But I disagree with how he applies it, because the law constrains implementation, not design.

Getting a little into the semantics of things, it's interesting to note that placing a comment like that in the RFC actually defines accepted practice (dealing with reality rather than the abstractions of the spec), and so it is itself a constraint: a constraint that asks you to accept anything rather than reject it is nevertheless a constraint. The fact that this "Robustness Principle" appears in that particular RFC shows that placing constraints is a good idea.

Implementations can and often do differ from specs, unintentionally (i.e., because of a bug) or otherwise. But the fewer constraints there are in a spec, the easier it is to get away with extensions that kill interoperability. So I don't think it's bad to say what is "within spec" and what is not. Saying flat-out that "constraints are bad" is not a good idea, IMO.

One example of a reasonable constraint that I think would be useful for Atom: if an entry's content is not text or HTML/XHTML (e.g., it's a Word document, something that as far as I can see the current spec allows in an Atom feed), then the feed must provide the equivalent content in plain text or HTML. Sure, someone might start serving Word documents anyway, but they'd be clearly disregarding the spec, and so taking a big chance. Maybe they could pull it off, just as Netscape introduced whatever new tags they liked when they had 80 or 90% market share. But when that happened, no one had any doubts that using those tags was "non-standard". And that's a plus, I think.
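To make the constraint concrete, here is a minimal sketch of what a validator enforcing it might look like. This is entirely hypothetical: the class, the method, and the set of "textual" content types are my own illustration, not part of the Atom spec.

```java
import java.util.Set;

// Hypothetical sketch: a rule that flags entries whose content is neither
// text nor (X)HTML unless they also carry a plain-text or HTML alternative.
public class AtomContentRule {
    // Content types the constraint considers directly readable.
    private static final Set<String> TEXTUAL = Set.of("text", "html", "xhtml");

    // Returns true if the entry satisfies the proposed constraint.
    public static boolean isValid(String contentType, boolean hasTextAlternative) {
        if (TEXTUAL.contains(contentType)) {
            return true; // textual content needs no alternative
        }
        // e.g., a Word document must ship with a text or HTML equivalent
        return hasTextAlternative;
    }

    public static void main(String[] args) {
        System.out.println(isValid("xhtml", false));               // true
        System.out.println(isValid("application/msword", false));  // false
        System.out.println(isValid("application/msword", true));   // true
    }
}
```

The point is not the code itself but that the rule is checkable: a feed either satisfies it or it doesn't, which is exactly what makes "within spec" meaningful.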

So, my opinion in a nutshell: constraints are good. The more things can be defined with the agreement of those involved, the better, since once something is "out in the wild" accepted practices emerge and the ability to place new constraints (e.g., to fix problems) becomes more limited, as we all know.

What I would say, then, is: Postel's law has no exceptions, but it applies to implementation, not design.

Posted by diego on January 11, 2004 at 2:23 PM

syndication-land happenings

A week has gone by since I came back and I'm still catching up with some of the things that have happened in syndication-land. For example, Dave recently released a new "subscription aggregator" (meta-aggregator?) that allows members to see who's subscribing to whom and do other interesting things. I haven't had time to digest all of its implications so I'll leave the comments for later, but from what I understand (and I haven't registered yet) it looks very cool.

Another example: Feedster has released an RSS feed for their "Feed of the day" feature, and they are doing a number of other interesting things that I haven't yet had time to fully explore. And, btw, my weblog was one of the first "feeds of the day" that was chosen (look in this page near the bottom). This happened months ago! Totally missed it. Thanks!

More later as my brain processes it. :)

Categories: technology
Posted by diego on January 11, 2004 at 1:20 PM

from components to modules

Right now I'm refactoring/rebuilding the user interface of a new release coming out soon (oh right... Note to self: talk about that) and I'm facing the fight against "sticky" APIs. Or, in more technical terms, their coupling.

Ideally, a certain component set that is self-contained (say, an HTML component) will be isolated from other components at the same level. This makes it simpler, easier to maintain and, contrary to what one might think, often faster. While I was at Drexel, at the Software Engineering Research Group, I worked on source code analysis, studying things like automatic clustering (paper) of software systems; that is, creating software that could infer the modules present in a source code base using API cross-references as a basis. Since then I've always been aware (more than I was before, that is) of the subtle pull created by API references.

The holy grail in this sense is, for me, to create applications that are built of fully interchangeable pieces, that connect dynamically at runtime, thus avoiding compile-time dependencies. In theory, we have many ways of achieving this decoupling between components or component sets; in practice there are some barriers that make it hard to get it right the first time. Or the second. Or...

First, the most common ways of achieving component decoupling are:

  1. Through data: usually this means a configuration file, but it could be a database or whatever else is editable post-compilation. This is one of the reasons why XML is so important, btw.
  2. Through dynamic binding: that is, referencing classes or methods "by name". This is useful mostly in OO languages: you instantiate a class by name at runtime and then access the resulting object through an interface (or superclass) reference, without losing generality (and thus without increasing coupling).
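The two techniques usually work together: a config file names the implementation, and reflection binds it at runtime behind an interface. A minimal Java sketch (all names here are hypothetical, chosen just for illustration):

```java
// Sketch of decoupling through data plus dynamic binding: the interface is
// the only compile-time dependency; the concrete class is bound by name.
public class ModuleLoader {
    // The contract the rest of the application compiles against.
    public interface Renderer {
        String render(String input);
    }

    // One possible implementation; the caller never names it directly.
    public static class PlainRenderer implements Renderer {
        public String render(String input) { return "[plain] " + input; }
    }

    // The class name would normally come from a config file (decoupling
    // through data); here it is passed in directly for brevity.
    public static Renderer load(String className) throws Exception {
        Class<?> cls = Class.forName(className);
        return (Renderer) cls.getDeclaredConstructor().newInstance();
    }

    public static void main(String[] args) throws Exception {
        Renderer r = load("ModuleLoader$PlainRenderer");
        System.out.println(r.render("hello"));  // [plain] hello
    }
}
```

Swapping implementations then means editing the config entry, not recompiling the callers, which is precisely the post-compilation flexibility described above.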

Achieving decoupling in non-UI components is not too difficult (though the data model has to be flexible enough; see below). But UIs are almost by definition something that pulls together all the components of a program so they can be used or managed. The UI references (almost) everything else by necessity, directly or indirectly, and visual components affect each other (say, a list on the left that changes what you see on the right).

In my experience, MVC is an absolute necessity to achieve even a minimal level of decoupling. Going further is possible: using data (i.e., config files) to connect dynamically loaded visual components removes the coupling created at the UI level. But that is difficult to achieve, both because it complicates the initial development process (with dynamically loaded components, bugs become harder to track, the build process is more complex, etc.) and because development tools in general deal with code units (e.g., classes, or source files) rather than with modules. They go from a fine-grained view of a system (say, a class or even a method) straight to a project, with little in between. We are left with separating files into directories to make a project manageable, which is kind of crazy when you think how far we've come in other areas, particularly in recent years.
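The "list on the left changes what you see on the right" case above is where MVC earns its keep: neither view references the other, both talk only to a shared model. A minimal sketch, with hypothetical names (no real UI toolkit involved):

```java
import java.util.ArrayList;
import java.util.List;

// Minimal MVC-style sketch: the left-hand list and the right-hand detail
// pane never reference each other; both depend only on this shared model,
// which notifies registered listeners of selection changes.
public class SelectionModel {
    public interface Listener { void selectionChanged(String newValue); }

    private final List<Listener> listeners = new ArrayList<>();
    private String selected;

    public void addListener(Listener l) { listeners.add(l); }

    // Called by the list view when the user picks an item.
    public void select(String value) {
        selected = value;
        for (Listener l : listeners) l.selectionChanged(value);
    }

    public String getSelected() { return selected; }

    public static void main(String[] args) {
        SelectionModel model = new SelectionModel();
        // Stand-in for the detail pane: it learns about the list's
        // choice only through the model, never from the list directly.
        model.addListener(v -> System.out.println("detail pane now shows: " + v));
        model.select("Inbox");  // prints "detail pane now shows: Inbox"
    }
}
```

The coupling that remains is view-to-model only, which is exactly the kind that dynamically loaded components can tolerate.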

The process then becomes iterative: achieving higher degrees of decoupling with each release. One thing I've found is that the underlying data model of the application has to be flexible enough, completely isolated (as a module), and relatively abstract, not just so it can evolve itself but also so the developer can change everything "on top" of it and improve the structure of the application without affecting users.

Yes, this is relatively "common knowledge", but I'm a bit frustrated at the moment because I know how things "should be" structured in the code I'm working on but I also know that time is limited, so I make some improvements and move on, leaving the rest for the next release.

Final thought: Until major development tools fully incorporate the concept of modules into their operation (and I mean going beyond the lame use of, for example, things like Java packages in today's Java tools), until they treat a piece of user interface as more than a source file (so far, all of the UI designers I've seen maintain a pretty strict correspondence between a UI design "form" and a single file/class/whatever that references everything else), it will be difficult to get things right on the first try.

Posted by diego on January 11, 2004 at 10:17 AM

the digital life

Today's New York Times magazine has an article, My So-Called Blog on weblogs and the impact of our digital lives on the "real" world:

When M. gets home from school, he immediately logs on to his computer. Then he stays there, touching base with the people he has seen all day long, floating in a kind of multitasking heaven of communication. First, he clicks on his Web log, or blog -- an online diary he keeps on a Web site called LiveJournal -- and checks for responses from his readers. Next he reads his friends' journals, contributing his distinctive brand of wry, supportive commentary to their observations. Then he returns to his own journal to compose his entries: sometimes confessional, more often dry private jokes or koanlike observations on life.

Finally, he spends a long time -- sometimes hours -- exchanging instant messages, a form of communication far more common among teenagers than phone calls. In multiple dialogue boxes on his computer screen, he'll type real-time conversations with several friends at once; if he leaves the house to hang out in the real world, he'll come back and instant-message some more, and sometimes cut and paste transcripts of these conversations into his online journal. All this upkeep can get in the way of homework, he admitted. "You keep telling yourself, 'Don't look, don't look!' And you keep on checking your e-mail."

Well, we've all been there, haven't we? Okay, many of us have. Okay, would you believe me if I said I have? :-)

The article has a certain focus on "teenagers" or "young adults" for some reason. But that aside, it has some interesting comments and some good insights that apply to all groups I think. Everyone that is involved with new tools (using them... building them... whatever...) is trying to feel their way around.

And this is just text, and maybe pictures. A video here and there at most. And that creates a certain tension IMO, which won't really be gone until we can superimpose cyberspace with meatspace.

Right now if you want to "be online", mostly (and I emphasize mostly, as we all know that you could be IM'ing on your cellphone these days) you need to be sitting at a computer, and that means not being with others, or doing other things. The display, the keyboard, the whole UI experience pulls us in and demands a large part of our attention.

Result: a disconnect.

But, as I said, if the "real" (I keep putting real between quotes because I'm a subjectivist) and "virtual" worlds were superimposed things would be different. When that superimposition happens, there will be very little tension between interacting digitally and otherwise.

What do I mean? Science fiction moment: You look at a restaurant and your glasses (or a retinal implant) superimpose a translucent image of its website. You get a person's business card and it contains a Bluetooth chip that tells your PAN (Personal Area Network) about the person's email, etc., and their company webpage pulls up next to their smiling face, and you see there that the product they're talking about hasn't been released yet. Or you have embedded a few key details into a wireless implant in your arm, and everyone who sees you through the glasses (or the implant!) can see your weblog too, and see that you just posted a picture of them, taken with your cameraphone. Posting, browsing, and chatting, all from your local pub, pint in hand.

Okay, this is pretty lame as science fiction goes. I should brush up on my Snow Crash, Neuromancer, The Diamond Age, and all the rest...

Wait a minute. How did I get here from a New York Times article? Oh Well. :-)

Categories: technology
Posted by diego on January 11, 2004 at 12:50 AM

Copyright © Diego Doval 2002-2011.