Last weekend, I had the unexpected opportunity to participate in the nightly snoring contest held at the intensive care unit (ICU) of the neurological clinic at the University of Tübingen. Such a chance comes once in a lifetime, so I could not miss it. Here's how it went:
My wife certainly considered me the odds-on favorite. But, alas, even wives can overestimate their husbands: Bed 1 (whose contents were yours truly) lost by far. At about 21:00, bed 3 opened with a sonorous snore of about 80 decibels (about enough to be heard in a disco) and immediately took the lead. But even such an awesome competitor had to give in: during the night, bed 2 never ceased to impress with staccati of four to five 70-decibel snores in a row, taking first prize.
Every morning there, a friendly woman, one of the doctors, woke me to take my blood, apologizing in so many words for waking me. When that was done, she continued to do the same at the other beds, effectively waking all of us.
On the last morning there (Sunday), I prepared a little speech for her, which I never got to deliver, because I was moved from the ICU to a normal bed during the night. So, I am trying to do it here and now:
I don't know whether any of my readers has ever spent a night in an ICU bed. It's deeply depressing. The only things to look at are the bubbles in the bottles above you, which are pouring liquids into your veins, or the monitor, which is showing your blood pressure, heartbeat, and stuff like that. With three apoplectic strokes in a row behind you (I promise to stop counting in public now. My inner self is a different matter.), there isn't much to expect or even hope for. Forget about sleep: there is a continuous background noise. The lights are never completely out, and every five minutes some machine beeps an alarm, ideally at another bed, but from time to time at your own. (Usually because you have turned onto your other side.) Think of a Jura coffee machine that requests service to imagine the sound. In my worst moment, the nurse saw fit to blow oxygen into my nose because I seemed to be losing. (Usually an indication of a heart that no longer works properly; fortunately not in this case.)
After such a night, waking up is a gift! I'm still alive! I can kiss my wife today. With a bit of luck, I can hold our daughter. I can enjoy the smell of coffee. (Something I couldn't do in the last months even if I had coffee. But it works again!) So, don't apologize, Doctor; you're more than welcome. I can't tell whether the other gentlemen share my feelings, but I'll be glad to give a few centiliters of blood if I can have this day in exchange!
Saturday, November 5, 2011
Still there, world!
This is not the end. But, let's face it: this (or any future post) might very well be my last. (In fact, last Saturday I'd have been surprised about the additional week that I have had since then.) So, it seems to be in order to prepare. How can a coder like me leave the world in grace? Like this:
#include <stdio.h>

int main(void)
{
    printf("Good Bye, World\n");
    return 0;
}
Sadly, I'm no wizard. For Dennis Ritchie, this would have been
#include <stdio.h>

int main(void)
{
    printf("GOOD BYE, WORLD!\n");
    return 0;
}
(One of death's silly jokes. That'd be style!)
But, for now and me, the only proper thing seems to be:
#include <stdio.h>

int main(void)
{
    printf("Still there, World\n");
    return 0;
}
Monday, September 19, 2011
A clash of generations
Yesterday, a relatively minor election took place in Germany, more precisely in Berlin, in its dual role as the German capital and as one of the German federal states. The most remarkable thing about that election was this: the German Pirate Party (direct link in German) got no less than 9% of the votes and 15 seats in Berlin's state parliament. (Luckily, they didn't get more, because they didn't have more candidates. In other words, additional votes would likely have been lost...)
The reactions from the mainstream media are remarkably similar to those responding to the first successes of the German Green Party (direct link in German) about 35 years ago, along the lines of "This could only happen in a city-state like Berlin, not in a territorial state." (I should mention that just this year Germany got its first Green prime minister of a federal state, in Baden-Württemberg, which is a territorial state.) Another typical reaction: "The accountability of being in parliament will quickly dissolve the voters' illusions", expecting that the result will be quite different after the next election.
I believe what most of these responders don't get is that the Pirate Party is driven by a clash of generations. They won't go away so quickly, if at all.
The Pirates' voters are mostly people below 40. That's exactly the generation that was raised with, or even in, the Internet. To them, the Internet provides value. It's important. Things like "Vorratsdatenspeicherung" (telecommunications data retention), real-name policies, and various degrees of censorship (regardless of the alleged reason: terrorism, child pornography, Nazism, not to mention political grounds (Iran, China, North Korea) or copyright violations) are threatening this value. Threatening something important, that is.
Take, on the other hand, the older generation. The Internet isn't important to them. It's a toy that their children or grandchildren are playing with. A good many of them even consider it a threat. (I remember some politicians assuming that the recent terror attacks in Norway wouldn't have happened without the Internet. Similar voices can be heard after each and every rampage. Guess the age of such politicians.) They are quick to call for exactly those things that the younger generation perceives as a threat. To the elders, it's the cure.
To me, that's the same situation we had when the Green Party was founded. Our generation considered the protection of the environment important; our parents and grandparents considered it a threat (mostly an economic one). The Greens didn't go away. Their time came when our generation and those of our children outnumbered our ancestors. I believe the time of the Pirates (or whoever follows them, should they break apart) will come in the same way. Perhaps we'll have the first Pirate prime minister in another 30 years?
Sunday, August 28, 2011
The mess that is m2e connectors
- Warning: The following is most likely stubborn, unreasonable, one-sided, and ignores a lot of facts of which I am unaware.
M2Eclipse has recently been moved to the Eclipse Foundation. It is now called M2E, lives at eclipse.org, and can be installed from within Eclipse Indigo as a standard plugin, just like CDT or WTP, which is, of course, a good thing.
So, when Indigo was published (together with M2E 1.0 as part of the simultaneous release), I rushed to download it in the hope of a better user experience. But the first thing I noticed was: M2E was showing errors in practically every POM I have ever written, and there are quite a few of them, including those of several Apache projects and those at work. So, as a first warning:
M2E 1.0 is incompatible with its predecessors. If you want to carry on working without problems, don't upgrade to Indigo, or try using an older version of M2Eclipse with it. (I haven't tried whether that works.) The reason for this intentional incompatibility (!) is the so-called M2E connectors, which, I am sure, have driven a lot of people to madness since their invention. In what follows, I'll try to outline my understanding of what the connectors are and why I consider them a real, bloody mess.
I am still not completely sure what problem the connectors ought to solve, but from my past experience I guess something like this:
M2E allows you to run Maven manually. You can invoke a goal like "mvn install" from within Eclipse just as you would from the command line. That works (and has always worked) just fine. Unfortunately, Maven is also invoked automagically by M2E whenever Eclipse builds the project, for example after a clean. In such cases, M2E acts as an "Eclipse builder". It is these latter invocations that people have always had problems with and that the connectors should handle better. First of all, what are these problems?
- Builders can be invoked quite frequently. If automatic builds are enabled and you save after every 10 keystrokes, the builders can be invoked every 20 seconds or so.
- The UI is mostly locked while a builder is running. In conjunction with the frequent invocations, that means the UI can be locked 80% of the time, which a human developer finds extremely painful, in particular if the builder invokes Maven, which can take quite some time.
- Some Maven plugins (I am unaware of any in reality, but the M2E developers mention this quite frequently) assume that they are invoked from the command line. That means, in particular, that System.exit is called once Maven is done. Consequently, they consider the use of resources unproblematic: they acquire a lot, including memory, and don't release it properly. The resources are released automatically by System.exit. But that doesn't work in M2E, which runs as long as Eclipse does (meaning the whole day for Joe Average Developer) and invokes Maven (and the plugin with it) again and again. (See the sketch after this list.)
- M2E doesn't know whether a plugin (or, more precisely, a plugin's goal) should run as part of an automatic build. For example, source and resource generators typically should, artifact generators typically should not. Consequently, a lot of unnecessary plugins are invoked by the automatic build, slowing down the builder even more, while necessary goals are not. This is not what people expect and leads to invalid behaviour on the side of the developer. For example, I keep telling my colleagues again and again that they should invoke Maven manually if the test suite depends on a generated property file.
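To illustrate the resource problem, here's a minimal sketch of such a misbehaving plugin. (The Mojo below is hypothetical, invented for illustration; it is not taken from any real plugin.)

import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;

import org.apache.maven.plugin.AbstractMojo;
import org.apache.maven.plugin.MojoExecutionException;

/**
 * Hypothetical example of a badly behaved Mojo: it acquires a resource
 * and never releases it, relying on the System.exit at the end of a
 * command-line Maven run to clean up after it.
 */
public class LeakyMojo extends AbstractMojo {
    public void execute() throws MojoExecutionException {
        try {
            // Never closed: harmless from the command line, because the JVM
            // exits a moment later. Inside M2E, this runs on every automatic
            // build, and the leaked handles pile up all day.
            OutputStream out = new FileOutputStream(new File("target", "generated.txt"));
            out.write("generated".getBytes("UTF-8"));
        } catch (IOException e) {
            throw new MojoExecutionException("Generation failed", e);
        }
    }
}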
M2E can invoke a plugin as part of the automatic build process if, and only if, there is a connector for the plugin, or you specifically configure the plugin. (More on that configuration later on.)
And that is the main problem we are currently facing: connectors are missing for a lot of important plugins, for example the JAXB plugins, the JavaCC plugins, the antrun plugin, and so on. The philosophy of the M2E developers seems to be that time will cure this problem, which is why they are mainly ignoring it. See, for example, bug 350414, bug 347521, bug 350810, bug 350811, bug 352494, bug 350299, and so on. Since my first attempts with Indigo, I am unaware of any new connectors, although the lack of them is currently the biggest issue that most people have with M2E. Try a Google search for "m2e mailing list connector" if you don't believe me.
But even if the developers were right, they chose to completely ignore another problem: you can no longer use your own plugins in the Eclipse automatic builds unless you create a connector for the plugin or create a project-specific configuration. (Again, more on that configuration in due time.)
At this point, one might argue: if you have written a plugin, it shouldn't be too difficult or too much work to write a connector as well. I'll handle that aspect below.
First of all, regarding the configuration: absent a suitable connector, there is currently only one possibility to use a plugin as part of the automatic build: you need to add a plugin-specific configuration snippet like the following to your POM:
<plugin>
  <groupId>org.eclipse.m2e</groupId>
  <artifactId>lifecycle-mapping</artifactId>
  <version>1.0.0</version>
  <configuration>
    <lifecycleMappingMetadata>
      <pluginExecutions>
        <pluginExecution>
          <pluginExecutionFilter>
            <groupId>org.codehaus.mojo</groupId>
            <artifactId>javacc-maven-plugin</artifactId>
            <versionRange>[2.6,)</versionRange>
            <goals>
              <goal>javacc</goal>
            </goals>
          </pluginExecutionFilter>
          <action>
            <execute></execute>
          </action>
        </pluginExecution>
      </pluginExecutions>
    </lifecycleMappingMetadata>
  </configuration>
</plugin>
Neat, isn't it? And so short! This would advise M2E that I want the javacc-maven-plugin to run as part of the automatic M2E build.
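For completeness: as far as I understand the metadata format, the usual alternative to <execute> is to tell M2E to skip the goal entirely, by using an <ignore> action in the same place:

<action>
  <ignore></ignore>
</action>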
So far, I have tried to be as unbiased as possible, but now to the points that make me sick. (As if that were currently required...)
- The space required for the M2E configuration typically exceeds the actual plugin configuration by far! If there ever was a good example of POM pollution, here's a better one.
- M2E insists on the presence of such a configuration, regardless of whether I want the plugin to run or not. If it is missing, the automatic builder won't work at all. There is no default handling, as there was in previous versions of M2E. (I won't discuss what the default should be; I'd just like to have any.)
- The M2E configuration must be stored in the POM, or in a parent POM. There is no other possibility, like the Eclipse preferences or some file in .settings. In other words, if you are using IDEA or NetBeans, but there is a single project member using Eclipse, you still have to enjoy the M2E configuration in the POM. As bug 350414 shows, there are a real lot of people who consider this, at best, ugly.
- I tried to play nice and start creating connectors. But this simply didn't work: I am a Maven developer, not an Eclipse developer. And a connector is an Eclipse plugin (see the skeleton after this list). I'm not interested in writing Eclipse plugins. (Which Maven developer is?) But there is nothing like a template project or the like, only this well-meant Wiki article, which doesn't help too much. For example, it assumes the use of Tycho, which only serves to make Eclipse programming even more complicated.
- The design of the connectors looks broken to me. Have a look at AbstractJavaProjectConfigurator, which seems to be the typical superclass of a connector: it contains methods for configuring the Maven classpath, for adding source folders, and for creating a list of files that have been created (or must be refreshed). These are all things that directly duplicate the work of the Maven plugin and should be left to the Maven plugin, or Maven, alone. In other words:
- Circumventing the Maven plugin is bad. Deciding whether to run or not should be left to the plugin, or Maven. (See, for example, the comment on "short-cutting" code generation on the Wiki page on writing connectors.)
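To show what writing a connector amounts to, here's a minimal skeleton, pieced together from my reading of the M2E 1.0 API. Take the class and method names as an assumption, not a reference; the point is merely that all of this lives in Eclipse land, not Maven land.

import org.apache.maven.plugin.MojoExecution;
import org.eclipse.core.runtime.CoreException;
import org.eclipse.core.runtime.IProgressMonitor;
import org.eclipse.m2e.core.lifecyclemapping.model.IPluginExecutionMetadata;
import org.eclipse.m2e.core.project.IMavenProjectFacade;
import org.eclipse.m2e.core.project.configurator.AbstractBuildParticipant;
import org.eclipse.m2e.core.project.configurator.AbstractProjectConfigurator;
import org.eclipse.m2e.core.project.configurator.MojoExecutionBuildParticipant;
import org.eclipse.m2e.core.project.configurator.ProjectConfigurationRequest;

/**
 * Skeleton of a connector (assumed API): an Eclipse plugin class that
 * tells M2E how to handle one Maven plugin's goals in the automatic build.
 */
public class JavaccProjectConfigurator extends AbstractProjectConfigurator {
    public void configure(ProjectConfigurationRequest request,
            IProgressMonitor monitor) throws CoreException {
        // A real connector would adjust the Eclipse project here (source
        // folders, classpath entries), duplicating decisions the Maven
        // plugin has already made.
    }

    public AbstractBuildParticipant getBuildParticipant(
            IMavenProjectFacade projectFacade, MojoExecution execution,
            IPluginExecutionMetadata executionMetadata) {
        // Run the wrapped goal on full and incremental builds alike.
        return new MojoExecutionBuildParticipant(execution, true);
    }
}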
To sum it all up:
I fail to see why we can't throw away the whole connector mess and replace it with a configurable Maven goal to be run by the automatic build. There is even a reasonable default: "mvn generate-resources". Let's reiterate the reasons for inventing connectors from above and compare them with this solution (a sketch follows the list):
- Maven wouldn't be invoked any more frequently.
- If a single Maven execution takes too long, fix the plugins that don't do a good job of detecting whether they can short-cut. Ant still does a better job here, years after the invention of Maven 2.
- If some plugins don't behave well with regard to resources, fix 'em. If we can wait months or years for connectors, we might as well wait for bug fixes in plugins.
- The question whether to run a plugin or not can be left to the Maven lifecycle. If we choose a lifecycle goal like "generate-resources", Maven knows perfectly well which plugins and goals to include or exclude.
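A minimal sketch of what I have in mind (the builder class around it is hypothetical; the calls are the standard Maven Invoker API from org.apache.maven.shared.invoker, as far as I know): the whole automatic build boils down to one configurable invocation.

import java.io.File;
import java.util.Collections;

import org.apache.maven.shared.invoker.DefaultInvocationRequest;
import org.apache.maven.shared.invoker.DefaultInvoker;
import org.apache.maven.shared.invoker.InvocationRequest;
import org.apache.maven.shared.invoker.InvocationResult;
import org.apache.maven.shared.invoker.Invoker;
import org.apache.maven.shared.invoker.MavenInvocationException;

/**
 * Sketch of the proposed replacement: instead of per-plugin connectors,
 * the Eclipse builder simply runs one configurable lifecycle goal,
 * "generate-resources" by default.
 */
public class ConfigurableGoalBuilder {
    public void runAutomaticBuild(File pomFile, String goal)
            throws MavenInvocationException {
        InvocationRequest request = new DefaultInvocationRequest();
        request.setPomFile(pomFile);
        // The one piece of configuration: which goal to run.
        request.setGoals(Collections.singletonList(goal));

        Invoker invoker = new DefaultInvoker();
        InvocationResult result = invoker.execute(request);
        if (result.getExitCode() != 0) {
            // In a real builder, this would become an Eclipse problem marker.
            System.err.println("Automatic build failed: " + goal);
        }
    }
}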
Friday, August 19, 2011
Alive - and kicking
Now for the more serious matters: last Tuesday I was struck by a left-sided apoplexy. The good news: I am alive. (Obviously.) I am at home, having left the hospital today. Using the keyboard is still very difficult, though. (Excuse any typos...) I need to get better at this over the next weeks to become ready for the job...
Monday, July 25, 2011
Closing the ticket
Quoting from a support ticket:
As we are still reducing the cost of operations in the IT department, we are currently working on a limited number of service requests. As a consequence, we are unable to work on your ticket. Thanks very much for your understanding.
I won't name the company. (And, just to make sure: No, it wasn't my employer.)
Sunday, May 22, 2011
Why Jenkins is better off as an independent organization
One thing that has definitely moved me this year is the development around Jenkins / Hudson. I never even used either (although I am quite sure that I will during the 20 years of my remaining professional life), so I cannot even tell why it was moving me, but I definitely followed it with real concern. Maybe it was due to the well-known persons involved, including Kohsuke Kawaguchi (the guy who drove JAXB 2) as well as the founders of Sonatype, Tasktop, and Cloudbees. Maybe it was caused by the front built between the opponents, consisting of an open source community and Oracle, a corporation that nowadays enjoys much more weight than it requires. Whatever.
One point that definitely interested me has been whether the respective projects would join a larger organization or not. As it currently looks, Jenkins has decided to stay independent and not join, for example, Apache. OTOH, Hudson will be moved to Eclipse. My expectation is that Jenkins will be better off with its decision.
It's not that I'd vote against big organizations in general. For example, I believe that Subversion's move to Apache was a good choice. In that case, the benefits of having a big daddy outweigh the disadvantages, like the need to follow certain policies that are largely driven by a bigger community and a close-to-corporate culture. I haven't got any personal experience with Eclipse, but I'd expect both the benefits and the weak points to be comparable for Hudson.
From my point of view, the power of Hudson/Jenkins is the unusual multitude of plugins. Name any source control or build system, programming language, repository, or CMS: chances are excellent that you'll find one or even more plugins that support it. This is most likely due to the architecture, probably borrowed from Eclipse, which has had phenomenal success in this regard. Consequently, the more attractive Hudson or Jenkins can be for plugin developers, the more successful they will be.
But fine-grained access rights, tight control over the legal aspects of incoming code, and well-defined policies aren't exactly what a bunch of completely different plugin developers requires. On the contrary: the lower the hurdles for adding a new plugin or publishing a new plugin release, the more attractive the project.
I can very well imagine that Sonatype, in particular, will do an excellent job of driving Hudson at Eclipse. They have demonstrated their exceptional abilities with Maven, Tycho, and Nexus. In the medium term, I'd expect Hudson to be more visually attractive, perhaps easier to use, and possibly to have a cleaner and more agile core. (Those are things they do really well.) But they won't be able to create and maintain plugins for just everything. My guess is that Jenkins will take the lead in terms of extension points (that's the part of the core that's driven by plugin developers), number of plugins, and hence applicability in different situations. It may very well be that Hudson will be the bigger commercial success, but Jenkins is big enough to counter.
Whatever the outcome, it will be interesting to follow. :-)
Thursday, March 31, 2011
Standing on the shoulders of giants
In building our homegrown BASIC, we borrowed bits and pieces of our design from previous versions, a long-standing software tradition. Languages evolve; ideas blend together; in computer technology, we all stand on others' shoulders.
Paul Allen, Microsoft co-founder, in Microsoft's Odd Couple, 2011
Paul Allen is also the owner of Interval Licensing, LLC, a company that is currently suing AOL, Apple, eBay, Facebook, Google, Netflix, Office Depot, OfficeMax, Staples, Yahoo!, and YouTube (but not Microsoft) for violation of four almost ridiculous patents.
Friday, March 18, 2011
How to develop a project
I don't know how others do this, but having just experienced it for the n-th time at yet another company (it feels the same in all big companies), it seems to deserve some notes.
In my experience, project development typically works like this:
- A general target is set. ("Wouldn't it be cool if we could do this and that automatically? Currently it involves a manual process that takes weeks. It could save a lot of work, and we'd possibly have it done in minutes, or at least days.") The target is agreed upon, and a sponsor is found who agrees to spend some money.
- A project team is built. The project team usually involves a project manager and business people from all affected departments, including operations (which will usually have nothing to do with the thing until, and possibly even after, the first months of productive use). Finally, there are the people who will actually be writing the documents on requirements, architecture, and whatever else seems to be required. Let's call them the consultants.
- The consultants are usually very clever people, or at least some of them are. In most cases there is even at least one so-called "software architect". As the term indicates, consultants are rarely internal staff. And if they are, they are rarely bound to a particular department, but hop from project to project. Very clever people are rare and sought after everywhere. The sooner they can start a new project, the better. OTOH, this means that they don't have a deep knowledge of the business topics involved. In other words, a lot of discussion between the consultants and the business people will be required. The consultants have to learn to translate the different languages of the business people into their own terms and, not less important, they must translate their own terms back into a language that the business people understand. (For the last 2 years, I have been working in a project where the word "order" has at least 4 completely different technical and semantic meanings, depending on the context.)
- The specification evolves over time. Initially, it was expected that fixing the specification (including agreement from all affected departments) would take 6 months, and another 6 months were expected for implementation. That means go-live in one year. Of course, given all the required discussions, changes, additions, and whatnot, the specification won't take 6 months, but 9 months, one year, or even more. You're lucky if the estimated time for implementation remains at 6 months: I've seen it happen that the additional time for specification was cut from the implementation time frame. No problem, because the specs are now so good, thanks to the additional time, that the implementors can save the same amount of time.
- Finally, the specs are done. Let's assume that the estimated amount of work for implementation is 2 man-years. Now we can estimate the time it takes quite easily: with 4 team members, it will take 6 months. But we don't have time. We'll have 6 team members, hence only 4 months, so we can keep our targets.
You think this is funny? Just ask yourself: when did you last see one or more additional team members hired because a project wasn't on schedule?
- If we are lucky, the six team members are at least average programmers. It is rare that a "very clever" one is included. Very clever people are too rare to stay in the implementation teams. They are pulled off to work in presales, as "software architects", or as "business consultants". In bad cases, two or three of the team members are clearly below average. Either that, or they come from a less technical world (mainframe), have been working in a simpler environment for 20 years, and are now exposed to a world where you work with 7 servers or worse, including frontend, backend, database, LDAP, and at least three different queues or other external services.
The team members have "clear assignments". After all, there are business requirements, formal specifications, and whatever you might ask for. But, of course, they are also external staff, or at least members of a different (IT) department. In other words, just as the consultants before them, they need to learn and understand the business topics. In theory, you *can* read, understand, and memorize those thousand (or more) pages of specs. In practice, you are expected to do this while already implementing. At least, I can't remember a project GANTT diagram where the first two weeks were reserved for "reading".
In other words, let's face it: in particular in the first weeks, implementors are clearly overstrained. And there is little guidance: the consultants who wrote the specs, or at least most of them, have already been assigned to another project. I can't remember a case where a specification writer has been part of the implementation team, with the exception of yours truly. Of course, they are available for questions. But the first technical decisions, apart from "we will have those 8 different modules running on 3 servers with application server X and OS Y (all chosen by the big company's inquisitio..., pardon, central IT department, except the third server, where we are forced to use application server or OS Z)" (usually written down in a document called "software architecture", together with the promise that the application will scale well by "simply" using more than one instance of X and Y per server...), are usually made
- at a time when the schedule is already pressing heavily,
- by people who don't yet have a deeper understanding of the project,
- by people who aren't considered excellent.
Take that together with the fact that these first decisions will have a heavy impact on the project's future.
Another matter is the team's structure. Ideally, a project would start with one or two, preferably good, programmers who lay out the general architecture. In time, other programmers would come in, taking up what's there, with the possibility to learn quickly by asking the initial programmers. After some weeks, it would be possible to assign a dedicated field of work to a new programmer: most APIs basically fixed, at least dummy implementations of interfaces to other services, and so on. In other words, an environment where even a below-average programmer has a chance to do good work. Perhaps the specs might even be helpful at that time, because one of the initial programmers can tell you exactly where to implement the stuff, which APIs to use, and so on. The specs have become applicable.
I really wonder whether things couldn't be different. I suggest the following:
- Let's add one, or even two, very clever consultants to the specification team. Of course, that means that the sponsor's initial costs become bigger.
- The task of the additional people would be two-fold: 75% implementation, 25% following the specs. The latter means that they should participate in the most important meetings, follow the communications via mail or whatever, and read the documents.
- Implementation means, at this time, developing something that is as good as possible, somewhere between a click-dummy, or PoC, and the real target.
- In case of a real implementation, the initial implementors ought to stay in the project for at least 6 months.
Of course, chances are that a good part of this initial implementation will be thrown away later on. But I'd bet that a good part of the work could be taken over. Also not to be underestimated are:
- the amount of input from the technical side (the implementors) to the spec writers, and
- from both spec writers and the business team to the implementors,
- the much greater momentum that the real implementation will have, and
- the better chance of a good estimate of the implementation's costs and schedule.
I believe it is because they are much closer to my idea of a project that companies like Google or Apple can be innovative, and more innovative than others. Doing things is never the same as planning them. Most likely, I'll never see that happening...
Sunday, February 13, 2011
Neil Gaiman on copyright and piracy
Good to see that one of my favourite authors gets it: http://www.youtube.com/watch?v=0Qkyt1wXNlI