tag:blogger.com,1999:blog-81240284036260391952024-03-06T04:31:24.731+01:00Grumpy ApacheRantings of an aging, notorious coder.Jochen Wiedmannhttp://www.blogger.com/profile/09855969156780632315noreply@blogger.comBlogger62125tag:blogger.com,1999:blog-8124028403626039195.post-64710856803800819002023-10-30T14:49:00.002+01:002023-10-30T14:49:15.578+01:00Fixing the default settings for a webMethods Integration Server<p> </p><p>Having installed a <a href="https://www.softwareag.com/en_corporate/platform/integration-apis/webmethods-integration.html">webMethods Integration Server</a>, you will find a few defaults that you would like to change. (If you know them.) The <i>you</i>, in that case, would be a person like me, a webMethods developer. In most cases, changing these defaults is the first thing I do after the server's first launch. Let's discuss them.</p><p><br /></p><h2 style="text-align: left;">Extended Settings</h2><div><br /></div><div>In general, it's worth knowing <b>all</b> the <a href="https://documentation.softwareag.com/webmethods/microservices_container/msc10-5/10-5_MSC_PIE_webhelp/index.html#page/integration-server-integrated-webhelp/to-configure_the_server_33.html">extended settings</a>. Unfortunately, there are too many of them to know even the majority. However, I found the following to be particularly relevant:</div><div><br /></div><div><ul style="text-align: left;"><li><i>watt.server.compile</i> and <i>watt.server.compile.unicode</i>: These control how the Java compiler is launched when you edit Java services.
In general, the defaults (something like <span style="font-family: courier;">javac -classpath {0} -d {1} {2}</span>) are fine. However, note that the {0} is replaced by the IS classpath. And that can become <b>extremely</b> long.
Too long, that is, for a Windows command line.<br />My definite recommendation is to clear these settings (an empty string as the value). That will cause IS to use the <b>Java Compiler API</b> instead of the command line, in which case the length of the classpath doesn't matter.</li><li><i>watt.server.httplog</i>: By setting this to "common", you will get an additional log file &lt;INSTANCE_DIR&gt;/logs/http.log, which is basically like the Apache HTTP Server's access log. This is important information if you want to know whether a client actually reaches IS, or not.</li><li><i>watt.server.ns.hideWmRoot</i>: If you have ever walked through your Integration Server's packages directory, you might have noticed an unknown package named <i>WmRoot</i>. If so, you might also have noticed that this package is not in the list of packages that the IS Administration UI and Designer display to you. Well, here's why: it is hidden by default. By setting this variable to "false", that package becomes visible.<br />Knowing that package is certainly not necessary. However, sooner or later in your career as a webMethods developer, you'll find yourself wondering "How does one do X?", where X is something that you would usually do in the IS Administration UI.<br />Well, the answer to your question will most likely be something like "The UI does this by invoking the service <i>wm.server.whatever:DoX</i> in the WmRoot package."<br /><b>Warning</b>: While it is common practice to use services in WmRoot, at least for tasks like "Is Y a valid package, or service, name?", you should be aware that they are <b>not</b> part of the public IS API. In other words: as soon as you start using such services, you start piling up a migration risk for your next upgrade. You have been warned!</li><li><i>watt.server.ns.lockingMode: </i>Set that to "none". It's your development server, so no harm done.
If you need to align with coworkers, use a proper version control system, such as Git.</li></ul></div>Jochen Wiedmannhttp://www.blogger.com/profile/09855969156780632315noreply@blogger.com0tag:blogger.com,1999:blog-8124028403626039195.post-62248924164470623202023-10-30T14:46:00.002+01:002023-10-30T14:46:27.364+01:00The mess, that is business.<p> Typo of the day: "Busimess requirements". Should teach that to my f...ing auto correction. :-)</p><p><br /></p>Jochen Wiedmannhttp://www.blogger.com/profile/09855969156780632315noreply@blogger.com0tag:blogger.com,1999:blog-8124028403626039195.post-81918502670737278512021-04-28T12:26:00.005+02:002021-04-28T12:26:49.419+02:00Installing native Docker on AlmaLinux<p> </p><p>Today, I had to reinstall a Docker host, which was previously running on my beloved CentOS 8. As that is quickly approaching its end, I decided to give AlmaLinux a try. To use it as a Docker host, I had to install native Docker. (I prefer native Docker over the one that is part of the distribution.) So, according to <a href="https://get.docker.com/">https://get.docker.com/</a>, this is supposed to work as follows:</p><p><br /></p><pre style="overflow-wrap: break-word; white-space: pre-wrap;"><span style="font-family: courier;"> curl -fsSL https://get.docker.com -o get-docker.sh</span></pre><pre style="overflow-wrap: break-word; white-space: pre-wrap;"><span style="font-family: courier;"> sudo sh get-docker.sh</span></pre><pre style="overflow-wrap: break-word; white-space: pre-wrap;"><br /></pre><p style="overflow-wrap: break-word; text-align: left; white-space: pre-wrap;">In the past, I did that on CentOS quite a lot, so I expected it to work out of the box.
Unfortunately, it didn't quite work; the script failed with the error message below:</p><p style="overflow-wrap: break-word; text-align: left; white-space: pre-wrap;"><br /></p><pre style="overflow-wrap: break-word; white-space: pre-wrap;"><span style="font-family: courier;"> sudo sh get-docker.sh</span></pre><div style="overflow-wrap: break-word; text-align: left;"><span style="white-space: pre-wrap;"><span style="font-family: courier;"> # Executing docker install script, commit: 7cae5f8b0decc17d6571f9f52eb840fbc13b2737
ERROR: Unsupported distribution 'almalinux'
</span></span></div><div style="overflow-wrap: break-word; text-align: left;"><span style="white-space: pre-wrap;"><br /></span></div><div style="overflow-wrap: break-word; text-align: left;"><span style="white-space: pre-wrap;"><br /></span></div><div style="overflow-wrap: break-word; text-align: left;"><span style="white-space: pre-wrap;">In other words, the Docker installer isn't quite up to date. Fortunately, fixing a shell script isn't that big of a problem, and I got it running by applying the following patch:</span></div><div style="overflow-wrap: break-word; text-align: left;"><span style="white-space: pre-wrap;"><br /></span></div><div style="overflow-wrap: break-word; text-align: left;"><span style="white-space: pre-wrap;"><span style="font-family: courier;">[jwi@gitjndhost ~]$ diff -ub get-docker-orig.sh get-docker.sh
--- get-docker-orig.sh 2021-04-28 12:10:12.477498011 +0200
+++ get-docker.sh 2021-04-28 12:03:23.300011495 +0200
@@ -342,7 +342,7 @@
esac
;;
- centos|rhel)
+ centos|rhel|almalinux)
if [ -z "$dist_version" ] && [ -r /etc/os-release ]; then
dist_version="$(. /etc/os-release && echo "$VERSION_ID")"
fi
@@ -427,8 +427,8 @@
echo_docker_as_nonroot
exit 0
;;
- centos|fedora|rhel)
- yum_repo="$DOWNLOAD_URL/linux/$lsb_dist/$REPO_FILE"
+ centos|fedora|rhel|almalinux)
+ yum_repo="$DOWNLOAD_URL/linux/centos/$REPO_FILE"
if ! curl -Ifs "$yum_repo" > /dev/null; then
echo "Error: Unable to curl repository file $yum_repo, is it valid?"
exit 1
</span></span></div><div><br /></div><p style="text-align: left;"><b>Note:</b> In order to overcome minor dependency conflicts with podman, buildah, and the like, I also had to issue the command</p><div><br /></div><div><span style="font-family: courier;"> sudo dnf -y remove runc</span></div><div><br /></div><p> </p>Jochen Wiedmannhttp://www.blogger.com/profile/09855969156780632315noreply@blogger.com0tag:blogger.com,1999:blog-8124028403626039195.post-47970149520715445632020-10-24T22:21:00.001+02:002020-10-24T22:21:11.314+02:00Incompatibility of Process.waitFor() between Linux, and Windows.<div><p> </p><p>Today, after maybe two years, I figured out why one particular piece of code works excellently on Windows, but has been causing a lot of trouble on Linux.
That piece of code is the class <a href="https://github.com/jochenw/afw/blob/master/afw-core/src/main/java/com/github/jochenw/afw/core/util/Executor.java" target="_blank">Executor</a> in my Java application framework library <a href="https://github.com/jochenw/afw" target="_blank">afw</a>.</p><p><br /></p><p>The reason turned out to be that the method <a href="https://docs.oracle.com/javase/8/docs/api/java/lang/Process.html#waitFor--" target="_blank">Process.waitFor()</a> behaves differently on Windows and Linux, with regard to the launched process's output.<br /></p><p><br /></p><ul style="text-align: left;"><li>On Windows, the waitFor() method waits until</li><ol><li>the launched process has terminated, <b>and</b></li><li>the process's standard output and error output have been consumed (in other words: the input streams, as returned by Process.getInputStream() and Process.getErrorStream(), have been read)<b>.</b></li></ol></ul></div><div style="margin-left: 40px; text-align: left;">This behaviour can have the unexpected result that the <a href="https://stackoverflow.com/questions/5483830/process-waitfor-never-returns" target="_blank">waitFor() method never returns</a>, which can easily be dealt with by simply launching two separate threads that read those input streams.</div><div style="margin-left: 40px; text-align: left;"> </div><div style="text-align: left;"><ul style="text-align: left;"><li>On Linux, however,</li><ol><li>the method waits, again, until the launched process has terminated, <b>but</b></li><li>it doesn't wait for the consumption of the process's output.</li></ol></ul> </div><div style="margin-left: 40px; text-align: left;">In other words: if you are interested in the process's output, then it is <b>not</b> sufficient to invoke Process.waitFor(), because the expected output may arrive later on.
Instead, you need to launch the same two threads, and then wait until</div><div style="margin-left: 40px; text-align: left;"><ol style="text-align: left;"><li>both threads have received EOF from their respective input streams, and</li><li>the invocation of Process.waitFor() has returned.</li></ol></div><div style="margin-left: 40px; text-align: left;">That's a bit tricky, indeed.</div><div style="text-align: left;"> </div><div style="text-align: left;">In summary: Java is a highly portable platform. That being said, there might still be issues. Glad that I could clarify this one.<br /></div><div style="text-align: left;"> <br /></div>Jochen Wiedmannhttp://www.blogger.com/profile/09855969156780632315noreply@blogger.com0tag:blogger.com,1999:blog-8124028403626039195.post-7504034879207947972019-02-03T20:54:00.001+01:002019-02-03T21:03:28.772+01:00Announce: JSGen (Java Source Generation Framework)<br />
It is with some pride that I can announce <a href="https://jochenw.github.io/jsgen" target="_blank">JSGen</a> today, a new, and very minor, open source project under the <a href="https://www.apache.org/licenses/LICENSE-2.0" target="_blank">Apache Software License, 2.0</a>.<br />
<br />
<b>JSGen</b> is short for <b>J</b>ava <b>S</b>ource <b>Gen</b>eration framework. As the name (hopefully) suggests, it is a tool for generating Java source code. It is the result of close to 20 years of working on, and with, Java source generation. That includes, in particular, the <a href="https://sourceforge.net/projects/jaxme" target="_blank">XML data binding framework JaxMe</a>, its predecessor <a href="http://svn.apache.org/repos/asf/webservices/archive/jaxme/" target="_blank">Apache JaxMe</a>, and a lot of application programming in my professional work.<br />
<br />
It is my opinion that<br />
<br />
<br />
<ol>
<li>Java source generators have a tendency to grow into CLOBs over time, becoming less maintainable and understandable.</li>
<li>Java source generators typically contain a lot of boilerplate code that organizes things like import lists and syntactical details, rather than expressing their actual purpose.</li>
</ol>
<br />
<br />
In contrast, JSGen provides an object model that aims to make source generation just another developer task, to which modern software engineering principles can be applied. JSGen will support you by<br />
<br />
<br />
<ul>
<li>Creating import lists semi-automatically (with the exception of static imports)</li>
<li>Supporting multiple code formatting styles (an Eclipse-like format, which is the default, and an Apache-Maven-like alternative). Switching between code styles is as easy as replacing one configuration object with another.</li>
<li>Enabling a more structured approach to source code generation. (Example: implement the case of handling a single instance, and reuse that to handle the case of a collection.)</li>
</ul>
<br />
<br />
Let's have a look at an example, which is quoted from the JUnit tests. We intend to create a simple HelloWorld.java. Here's how we would do that with JSGen.<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;"> JSGFactory factory = JSGFactory.create();</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> Source jsb =</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> factory.newSource("com.foo.myapp.Main").makePublic();</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> Method mainMethod =</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> jsb.newMethod("main").makePublic().makeStatic();</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> mainMethod.parameter(JQName.STRING_ARRAY, "pArgs");</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> mainMethod.body()</span><span style="font-family: "courier new" , "courier" , monospace;">.line(System.class,</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> ".out.println(",</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> q("Hello, world!"), ");");</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> File targetJavaDir = new File("target/generated-sources/mysrc");</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> FileJavaSourceWriter fjsw = new</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> FileJavaSourceWriter(targetJavaDir);</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> // You might prefer MAVEN_FORMATTER in the next line.</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> fjsw.setFormatter(DEFAULT_FORMATTER);</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> fjsw.write(factory);</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span>
<span style="font-family: "arial" , "helvetica" , sans-serif;">Some things I'd finally like to note:</span><br />
<span style="font-family: "arial" , "helvetica" , sans-serif;"><br /></span>
<br />
<ol>
<li><span style="font-family: "arial" , "helvetica" , sans-serif;">JSGen is a complete rewrite of two mature, and reliable predecessors, namely <a href="http://jaxme.sourceforge.net/JaxMeJS/docs/index.html" target="_blank">JaxMeJS</a>, and <a href="https://svn.apache.org/repos/asf/webservices/archive/jaxme/site/apidocs/org/apache/ws/jaxme/js/package-summary.html" target="_blank">Apache JaxMe JS</a>. As such, it is based on a mature API, and a real lot of applied experience. It may be new, but it should be quite reliable, nevertheless.</span></li>
<li><span style="font-family: "arial" , "helvetica" , sans-serif;">It provides all the features, of the predecessors, but adds</span></li>
<ol>
<li><span style="font-family: "arial" , "helvetica" , sans-serif;">Support for Generics</span></li>
<li><span style="font-family: "arial" , "helvetica" , sans-serif;">Support for Annotations</span></li>
<li><span style="font-family: "arial" , "helvetica" , sans-serif;">Switchable Code formatters</span></li>
<li><span style="font-family: "arial" , "helvetica" , sans-serif;">Builders heavily used in the API</span></li>
</ol>
</ol>
<br />
<span style="font-family: "arial" , "helvetica" , sans-serif;">This announcement has intentionally been deferred until the point, where JSGen has succesfully been used for a professional project in my professional work. </span><br />
<span style="font-family: "arial" , "helvetica" , sans-serif;"><br /></span>
<span style="font-family: "arial" , "helvetica" , sans-serif;">Finally, a few links:</span><br />
<br />
<br />
<ul>
<li><span style="font-family: "arial" , "helvetica" , sans-serif;"><a href="https://jochenw.github.io/jsgen" target="_blank">Web site</a></span></li>
<li><span style="font-family: "arial" , "helvetica" , sans-serif;"><a href="https://github.com/jochenw/jsgen" target="_blank">Github Project</a></span></li>
<li><span style="font-family: "arial" , "helvetica" , sans-serif;"><a href="https://www.apache.org/licenses/LICENSE-2.0" target="_blank">License (ASL 2.0)</a></span></li>
<li><a href="https://github.com/jochenw/jsgen/issues" target="_blank">Bug Reports, Feedback, and other Issues</a></li>
</ul>
<br />
<span style="font-family: "arial" , "helvetica" , sans-serif;"> </span>Jochen Wiedmannhttp://www.blogger.com/profile/09855969156780632315noreply@blogger.com0tag:blogger.com,1999:blog-8124028403626039195.post-19529735042271033642016-02-24T10:31:00.003+01:002016-02-24T10:31:50.193+01:00Don't use Copy+Paste on Windows, if you've got a lot of data<span style="background-color: white; color: #404040; font-family: Roboto, arial, sans-serif; font-size: 13px; line-height: 18.2px;">Windows never ceases to disappoint me. I've got an external hard drive, on which resides a 60 GB VM, which I need to copy now and then to emulate a snapshot. So far, I used the standard Windows file copy stuff to do that. (Copy, and paste the VM directory.) Ran with about 8-20 MB per second. Or in other words: Took up to two hours. And, worst of all, the procedure isn't reentrant. Interrupting the copying means to restart.</span><br style="background-color: white; color: #404040; font-family: Roboto, arial, sans-serif; font-size: 13px; line-height: 18.2px;" /><br style="background-color: white; color: #404040; font-family: Roboto, arial, sans-serif; font-size: 13px; line-height: 18.2px;" /><span style="background-color: white; color: #404040; font-family: Roboto, arial, sans-serif; font-size: 13px; line-height: 18.2px;">So, tried something different: <a href="http://www.cygwin.com/">CygWin's</a> <a href="https://de.wikipedia.org/wiki/Rsync">rsync</a>, the swiss army knife for backups, and related stuff. 
In other words, I am using the command</span><br style="background-color: white; color: #404040; font-family: Roboto, arial, sans-serif; font-size: 13px; line-height: 18.2px;" /><br style="background-color: white; color: #404040; font-family: Roboto, arial, sans-serif; font-size: 13px; line-height: 18.2px;" /><span style="font-family: Courier New, Courier, monospace;"><span style="background-color: white; color: #404040; font-size: 13px; line-height: 18.2px;"> rsync -a -r --progress &lt;old_vm_dir&gt; &lt;new_vm_dir&gt;</span></span><br style="background-color: white; color: #404040; font-family: Roboto, arial, sans-serif; font-size: 13px; line-height: 18.2px;" /><span style="background-color: white; color: #404040; font-family: Roboto, arial, sans-serif; font-size: 13px; line-height: 18.2px;">It runs at 30 MB per second (50% faster than native Windows copying). And it is resumable...</span><br />
<span style="background-color: white; color: #404040; font-family: Roboto, arial, sans-serif; font-size: 13px; line-height: 18.2px;"><br /></span>Jochen Wiedmannhttp://www.blogger.com/profile/09855969156780632315noreply@blogger.com0tag:blogger.com,1999:blog-8124028403626039195.post-14028739573243829122015-11-05T09:04:00.000+01:002015-11-05T09:04:54.269+01:00VMWare Shared Folders oin Fedora 23<br />
Hi,<br />
<br />
if you haven't noticed: <a href="https://getfedora.org/">Fedora 23</a> is out, and it's time to get your hands dirty with it. Being forced to use a Windows laptop, I typically do this by installing a VMware guest. Once the virtual machine is up, the first thing I want to do is access my Windows home directory. The easiest way to do this is by creating a so-called "Shared Folder" in the VMware settings for my machine. On the guest, you need to install the "VMware Tools" and do something like<br />
<br />
<pre>mkdir /home/username/sharedfolder
sudo mount -t vmhgfs -o uid=1000,gid=1000 .host:sharename /home/username/sharedfolder
</pre>
<br />
(The options uid=1000,gid=1000 ensure that the mounted directory is readable and writable for the user with uid=1000 and gid=1000, which is me.)<br />
<br />
The problem with that procedure is that it depends on a kernel module called "vmhgfs", which must be installed as part of the "VMware Tools". And, needless to say: the installation of the VMware Tools (version 9.2.0-799703, as of this writing) fails, because the kernel isn't compatible with the sources distributed by VMware.<br />
<br />
So far, the only reasonable solution was to wait for an updated tools version from VMware. (I generally dismissed the possibility to <a href="https://github.com/rasa/vmware-tools-patches">patch those distributed sources</a> as overly complicated, and insecure.) However, there's a new, and better, solution available:<br />
<br />
Fedora 23 automatically installs an RPM named <a href="https://github.com/vmware/open-vm-tools">open-vm-tools</a>. And this includes two programs that allow using shared folders without the kernel module:<br />
<br />
<pre># Display a list of all shared folders:
# Note that it includes a share called "sharedfolder".
$ vmware-hgfsclient
sharedfolder

# Create a directory named /home/username/sharedfolder, and mount the shared folder there.
$ mkdir /home/username/sharedfolder
$ vmhgfs-fuse .host:sharedfolder /home/username/sharedfolder
$ cd /home/username/sharedfolder
$ touch testfile
$ rm testfile</pre>
<br />
Note that neither "sudo" nor the specification of any options was required.<br />
<br />
So, in other words: as of Fedora 23, the VMware Tools are no longer required. Mouse integration, cut and paste, and shared folders: everything works out of the box. (I can live without the ThinPrint drivers.)<br />
<br />
<br />
<br />
<br />Jochen Wiedmannhttp://www.blogger.com/profile/09855969156780632315noreply@blogger.com0tag:blogger.com,1999:blog-8124028403626039195.post-67967814816785856982015-03-17T10:26:00.000+01:002015-03-17T10:27:58.064+01:00Honouring Terry Pratchett: Java Web Application<br />
<div>
<br /></div>
<div>
This blog's name is <a href="http://grumpyapache.blogspot.de/">Grumpy Apache</a>. These days, there are excellent reasons for being grumpy: after all, my favourite author <a href="http://www.lspace.org/about-terry/biography.html">Terry Pratchett</a> has died, way too early. But, as Terry wrote in <a href="http://wiki.lspace.org/mediawiki/Going_Postal">Going Postal</a> about John Dearheart:</div>
<div>
<br /></div>
<blockquote class="tr_bq">
<span style="background-color: white; color: #4f4f4f; font-family: verdana, arial, helvetica, sans-serif; font-size: 19.1692295074463px; line-height: 27.3846130371094px;">His name, however, continues to be sent in the so-called Overhead of the clacks. The full message is "GNU John Dearheart", where the G means, that the message should be passed on, the N means "Not Logged" and the U that it should be turned around at the end of the line. So as the name "John Dearheart" keeps going up and down the line, this tradition applies a kind of immortality as "a man is not dead while his name is still spoken".</span></blockquote>
This means we'll be celebrating "Being childish day" today, by adding the HTTP header<br />
<br />
X-Clacks-Overhead: GNU Terry Pratchett<br />
<br />
to our web sites. And here's how to do that with any Java web application.<br />
<br />
It's simple. First of all, you'll be adding <a href="https://drive.google.com/file/d/0B5t2kUl4hdZrOUxSdlROeE01Yk0/view?usp=sharing">this class</a> to your web application. It is a so-called servlet filter. I'll quote the relevant method here:<br />
<br />
<blockquote class="tr_bq">
<span class="Apple-tab-span" style="white-space: pre;"> </span>public void doFilter(ServletRequest pReq, ServletResponse pRes,<br />
FilterChain pChain) throws IOException,<br />
<span class="Apple-tab-span" style="white-space: pre;"> </span>ServletException {<br />
<span class="Apple-tab-span" style="white-space: pre;"> </span>if (pRes instanceof HttpServletResponse) {<br />
<span class="Apple-tab-span" style="white-space: pre;"> </span>final HttpServletResponse res = (HttpServletResponse) pRes;<br />
<span class="Apple-tab-span" style="white-space: pre;"> </span>res.addHeader("X-Clacks-Overhead", "GNU-Terry-Pratchett");<br />
<span class="Apple-tab-span" style="white-space: pre;"> </span>}<br />
<span class="Apple-tab-span" style="white-space: pre;"> </span>pChain.doFilter(pReq, pRes);<br />
<span class="Apple-tab-span" style="white-space: pre;"> </span>}</blockquote>
<div>
Besides, add the following snippets to your web.xml:</div>
<div>
<br /></div>
<div>
<blockquote class="tr_bq">
<filter><br /> <filter-name>ClacksOverheadFilter</filter-name><br /> <filter-class>com.github.jochenw.clacksov.ClacksOverheadFilter</filter-class><br /> </filter><br />
<filter-mapping><br /> <filter-name>ClacksOverheadFilter</filter-name><br /> <url-pattern>*</url-pattern><br /> </filter-mapping></blockquote>
</div>
<div>
And that's it! No modification of servlets or the like, just a simple addition that you can make to any web application.</div>
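By the way, checking whether the header is actually being sent takes only a few lines of plain JDK code. (A quick sketch with no external dependencies; the class name ClacksCheck is made up for this example, and the URL on the command line is whatever your web application is running at.)<br />

```java
import java.net.HttpURLConnection;
import java.net.URL;

// Small helper that fetches the value of the X-Clacks-Overhead header
// from the given URL, or null, if the header is absent.
public class ClacksCheck {
    public static String clacksHeader(String url) throws Exception {
        HttpURLConnection con = (HttpURLConnection) new URL(url).openConnection();
        con.setRequestMethod("HEAD");
        try {
            return con.getHeaderField("X-Clacks-Overhead");
        } finally {
            con.disconnect();
        }
    }

    public static void main(String[] pArgs) throws Exception {
        // Example: java ClacksCheck http://localhost:8080/myapp/
        System.out.println(clacksHeader(pArgs[0]));
    }
}
```

Run it against your web application's URL; if the filter is in place, it should print the header value.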
<div>
<br /></div>
<div>
Keep in mind:</div>
<div>
<br /></div>
<div>
<b>As long as we are shifting his name on the Internet, Terry isn't dead.</b></div>
<div>
<br /></div>
<br />
<br />Jochen Wiedmannhttp://www.blogger.com/profile/09855969156780632315noreply@blogger.com0Schillerstraße 34, 72800 Eningen unter Achalm, Germany48.4864626 9.254371499999933822.964428100000003 -32.054222500000066 74.0084971 50.562965499999933tag:blogger.com,1999:blog-8124028403626039195.post-90312751420420566642014-06-17T14:14:00.002+02:002014-06-17T14:17:06.055+02:00Installing CentOS 7 Prerelease on VMWare<br />
Hi,<br />
<br />
if you haven't heard the news: A prerelease of CentOS 7 is out. This is important, because:
<br />
<ol>
<li><a href="http://seven.centos.org/">CentOS 7</a> is a major release
and will be the base of the Linux Distro that people like me will be
using in the next years on servers. (Yes, I <b>do</b> know about Ubuntu 14.04 LTS, OpenSUSE Whatever, Debian Something, etc. However, that is most likely not what <b>I</b> will be using. Logically, so won't do <b>people like me</b>. End of discussion.)</li>
<li>Quite a few things have changed since CentOS 6. In particular, much has been adopted from recent Fedora versions: </li>
<ol>
<li> The new Anaconda Installer. (I am personally not overly happy with it.
The old one worked quite well for me, but I had my share of trouble with
the new one. In particular, I am less than enthusiastic about how Disk
Partitioning works nowadays. OTOH, this version of Anaconda (the one
distributed with the CentOS 7 Prerelease) is a step forward in that
aspect. Perhaps more <b>people like me</b> had similar trouble.)</li>
<li> GNOME 3: Well, this one will definitely be the cause of a major uproar on the <a href="http://www.dedoimedo.com/computers/linux-world-map-reloaded.html">Red Hat Continent</a>. I readily admit that I was one of the people who initially went with <a href="http://mate-desktop.org/">MATE</a>
as a GNOME 2 replacement, so as to avoid GNOME 3. However, in the
meantime, I've learned to live with it and can even appreciate some
features like the enhanced keyboard control. The one thing I am still
missing is the pictures screenblanker, though. I learned to live with
xscreensaver, although this still smells like a very ugly hack. Like
it or not, <b>people like me</b> (c) will have to face it.</li>
</ol>
<li> This prerelease was published not even one week after the release of <a href="http://en.wikipedia.org/wiki/Red_Hat_Enterprise_Linux">RHEL 7</a>. Compare that to the months of delay we had with some minor versions of CentOS 6.
So, we benefit from <a href="http://www.redhat.com/about/news/press-archive/2014/1/red-hat-and-centos-join-forces?utm_source=rss&utm_medium=rss&utm_campaign=red-hat-and-the-centos-project-join-forces-to-speed-open-source-innovation">Red Hat adopting CentOS</a>. Good news!</li>
</ol>
So, what is this posting about?<br />
<br />
<ul>
<li> I won't cover generic aspects of installing CentOS, or Fedora. I'll
assume that you have installed either of them before and have a rough
idea of what I am talking about. In particular, I assume that you know
what a network installation is, because right now this is the only
installation method available through an ISO image. (Forget "Live DVD",
or whatever else you have hoped for.)</li>
<li>I will, however, concentrate on installing this very special
prerelease version, because it is not quite like installing an official
version. (Neither is it overly complex, though.) Hopefully, I'll also
cover what has changed since version 6.</li>
</ul>
If that is interesting for you: read on. If not: I am sorry! Google
(Planet Apache, or whatever else brought you here) did wrong and is to
blame.<br />
<br />
So, what's to do?<br />
Download the ISO Image from <a href="http://buildlogs.centos.org/centos/7/os/x86_64-20140614/images/boot.iso">http://buildlogs.centos.org/centos/7/os/x86_64-20140614/images/boot.iso</a> and save it, for example as "centos7-netinstall.iso".<br />
<ol>
<li>Create a new VM. (My parameters were "I will install the operating
system later.", "Guest operating system=Linux", Version="CentOS 64-bit",
Maximum disk size=30GB, Memory=3072MB. Everything else was as suggested
by <a href="https://www.vmware.com/support/player60/doc/player-602-release-notes.html">VMWare Player 6.0.2 build-1744117</a>.)</li>
<li>Select Virtual Machine Settings, CD/DVD (IDE). Enable "Connect at
power on" and "Use ISO Image file". Select the file you downloaded in
step 1.</li>
<li>Start up the created VM. From the boot menu, select "Install CentOS
7". (You may as well test the media, but you did check the MD5 sum
anyway, didn't you? :-) At least, you know the difference... (Remember
that "won't cover generic aspects" above?)</li>
<li>Hopefully, the Anaconda graphical installer will come up. (At least,
it does so on a VMWare machine. I'd never hope so on a machine with an
NVIDIA or AMD graphics card. Don't expect me to help you with that
crap. I'm all with <a href="http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=2&cad=rja&uact=8&ved=0CC0QtwIwAQ&url=http%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DiYWzMvlj2RQ&ei=mP2fU6LSAcPb7Abz4YGAAw&usg=AFQjCNFsJTI0qtFEe9erDXJ12Wsd7TPiRA&sig2=zOvTxndrLIa-pA2Yb9FF8Q&bvm=bv.68911936,d.ZGU">Linus</a> on that. :-)</li>
<li>Select your language (a safe choice is, of course, "English-US").</li>
<li>Anaconda will notify that this is prerelease, unstable software. You knew that anyways, so click on "I want to proceed."</li>
<li>The Anaconda "Installation Summary" screen will come up. This will
be an unknown thing (Remember: New Anaconda) for a lot of people, so
here's a screenshot:<br /><br /><br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiNMRtWt6BaKBWPUa5CNXy40YPXq3AWHrdGGqsON4Z4d8gJ8k1bEcrFSftf9pBsRqMNwRorYxal-q4U076lEvQ1YdnORq6EW-bJQKbQocAEnvdcQEj_smctnd4i_bC7HkeO-FKcO_HYSs8/s1600/Anaconda.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiNMRtWt6BaKBWPUa5CNXy40YPXq3AWHrdGGqsON4Z4d8gJ8k1bEcrFSftf9pBsRqMNwRorYxal-q4U076lEvQ1YdnORq6EW-bJQKbQocAEnvdcQEj_smctnd4i_bC7HkeO-FKcO_HYSs8/s1600/Anaconda.png" height="248" width="320" /></a></div>
The important thing to keep in mind is the order of the following steps.
</li>
<li> Start with the Keyboard. (You're likely to use that in the
following steps.) Click on "Keyboard" (Not the small keyboard icon, but
the big icon, or the word.) Click on the "+" sign, and select your
favourite keyboard layout. (In my case "German, Germany, Eliminate dead
keys".) Remove any unwanted layout by clicking on it, and clicking on
the "-" sign. Finish by clicking on "Done" in the upper left corner.
(Who the heck came up with that? Anyways, remember the location.)</li>
<li>The next thing you're gonna need is the network. (Most likely, you
are currently "Not Connected".) Click on "Network & Hostname".
Click on "Off" in the upper right corner to enable networking. Enter a
meaningful host name. (I choose "c7wm96.mcjwi01.eur.ad.sag". Avoid
"localhost.localdomain".) Click on "Done". (Upper left corner,
remember?)</li>
<li>Now we can edit "Date & Time", aka time zone. I choose "Europe/Berlin".</li>
<li>If you need that (You don't, really...), click on "Language Support" and select additional languages. </li>
<li>The most obvious trap is the "Installation Source" (Hopefully, it
won't be in the official releases, which will select an URL
automatically): Click on that, enable "On the network", and enter the
URL <a href="http://buildlogs.centos.org/centos/7/os/x86_64-20140614/">buildlogs.centos.org/centos/7/os/x86_64-20140614/</a>.
If you need to use an HTTP Proxy, click on "Proxy setup". Enter your
proxy host name and port (in my case "httpprox.hq.sag:8080") Click on
"Add". Click on "Done". Wait a few seconds until you see "Downloading
package metadata", or the like. If you do see something like "Error
setting up Base Repository", chances are that the URL is wrong. Fix it,
and retry. Wait a few seconds more until downloading the package data
and checking for dependencies has finished.</li>
<li>Next, go to "Software Selection". The default is "Minimal Install".
This is fine, if you are happy with a server that has no X11 enabled. I
choose "Server with GUI" instead, to make my colleagues happy. On the
right hand side, you can choose to have KDE installed additionally.
(AFAIK, no support for MATE, Cinnamon, LXDE, whatever. No idea, whether
that will come.) You might wish to deselect LibreOffice, if you manage
to do that. Click on "Done". Wait a few seconds until the message
"Checking for software dependencies" disappears.</li>
<li>Another, somewhat difficult step is the "Installation Destination".
Click on that. If you need "Custom Partitioning", enable "I will
configure partitioning." below. (The default is "Automatically configure
partitioning.", The presence of this option is what has changed since
Fedora 20, and I consider this to be a major improvement.) Click on
"Done", even if you're actually not. If the window for "Manual
Partitioning" appears, select your desired partition type ("Standard
Partition", "BTRFS", "LVM") and add a few partitions by clicking on the
"+" button. I create the following partitions (in that order):</li>
<ol>
<li>/boot with a Capacity of 500MB.</li>
<li>Swap with a Capacity of 6GB. (I need that much, because the Oracle
Installer wants 8GB of physical memory, but accepts Swap as a
replacement.)</li>
<li>/ with a Capacity of 8 GB.</li>
<li>/home with a Capacity of 16.21GB</li>
</ol>
Click on "Done". Click on "Accept Changes". If no error messages can be seen on the "Installation Summary" screen, then you have mastered the major hurdles.</ol>
<ul>
<li>Click on "Begin Installation".</li>
<li>Regardless of the ongoing installation, click on "Root Password". Enter a meaningful, and secure, root password. Repeat it. Click on "Done". (You never even considered to enter a weak password, did you? Well, if you did: Click on "Done" twice. :-)</li>
<li>The installation is still ongoing. Click on "User Creation". Enter a real name and a login name, enable "Make this user administrator" (The option will actually add the created user to the "wheel" group, which has permissions to use "sudo"). Enter a password and repeat it. Click on "Done" twice. (Oops, your password is secure: Then once is sufficient.)</li>
<li>Keep in mind that this is a "network installation": Anaconda will download each and every single RPM to install (In my case about 1200.), so the process will take time. OTOH, with a fast network (DSL, or something like that) it won't take much longer than installation from a DVD.</li>
<li>Once the actual installation is finished, you'll be asked for a reboot. Confirm that, and the new system comes up. Almost done. One minor step to perform: Accept the GPL license, and accept another reboot. (No, this isn't Windows, but still....)</li>
</ul>
If you got this far, then you've got a system running CentOS 7 Prerelease. Congratulations. Unfortunately, one thing is still left: your system doesn't have a valid Yum configuration. (Convince yourself by running the following command.)
<br />
<pre> sudo yum repolist all
</pre>
Oops, you need a terminal window to do that. That's no problem if you are running KDE or any other desktop that you are used to. If it's GNOME 3, and you are not, here's what to do:
Press, and release, the "Windows" key. (No, this is still <b>not</b> Windows, but anyway. If it helps, call it the "Linux" key.) Press, and release, the following keys, in that order: "t", "e", "r", "m", and Enter. At that point, a GNOME Terminal window should appear. (Or, in theory, any other desktop application containing the word "term". However, you had no chance to install "xterm" so far. :-)
Using the command
<br />
<pre> sudo vi /etc/yum.repos.d/centos7-prerelease.repo
</pre>
create a new file with the following contents:
<br />
<pre> [centos7-prerelease]
name=CentOS 7 Prerelease
baseurl=http://buildlogs.centos.org/centos/7/os/x86_64-20140614/
enabled=1
priority=1
gpgcheck=0
</pre>
And now (I am not avoiding any flame wars today :-) you can do
<br />
<pre> sudo yum install emacs emacs-nox gcc make binutils kernel-headers
</pre>
A final note on the VMware tools: Anaconda automatically installed "open-vm-tools-desktop". So, mouse integration, copy and paste, etc. worked immediately for me. No need for a separate installation.
<h2>Build System Performance on Windows (2014-05-07)</h2>
Over the last three months I had the pleasure to run Fedora 20 Linux on the laptop I am using for work. Last week, I was forced to downgrade to Windows 7. (Mainly because my employer's system administrators don't support anything else. I am quite ready to have the occasional fight for my freedom against the admins, but I won't accept the constant struggle. To name just the most important problem: accessing an MS Exchange Server without IMAP enabled is, at best, exhausting.)
Why the word "downgrade"? Because my machine is so much slower now. I am a developer. My Eclipse is open for 10 hours a day and I can't count the number of invocations of Ant, Maven, Make, and other build systems. (Ant, and Maven, being my personal favourites.) Of course, the machine isn't actually slower. It is the same hardware, after all. Same amount of RAM, still without an SSD. However, and that's a fact: <b>Running one and the same build system against the same project on Windows 7 takes more time than doing just that on Linux.</b>
If you don't believe me, try the following: Install a Linux VM on your Windows PC. Then run the following command, first on the VM, then on the Windows host:
<pre>git clone https://github.com/torvalds/linux.git</pre>
What are the odds that this command will run faster on the Linux VM than on the Windows host? I'd bet. And I'd win. (It's true: Linux Git on the emulated hardware wins against Windows Git on the raw iron.) Btw, for an even more convincing example, try "git svn clone".
This week, I decided to waste some time to think about the issue: How do I get my build system on Windows as fast as on Linux. First, let's identify the guilty party: It's none other than... (drum roll) NTFS!
I'm not making this up: others are quite aware of the problem. See, for example, <a href="http://superuser.com/questions/15192/bad-ntfs-performance">this page</a>. A Google search for "ntfs performance many small files" returns about 168000 hits. So, let's state this as a fact: <b>NTFS behaves extremely poorly when dealing with lots of small files.</b>
But that's exactly what a build system is all about. Let's take a typical example:
<ol>
<li>The first typical step is to remove a build directory (like "target", or "bin", or whatever you name it.)</li>
<li>The build system searches for source files in the source directory, let's call it "src", "src/main/java", or whatever.</li>
<li>The compiler reads a lot of small source files (named *.java, *.c, or whatever) from the "src" directory.</li>
<li>For any such file, the compiler creates a corresponding, translated file (named *.class, or *.o, or whatever) in the build directory.</li>
<li>A packager, or linker, like "jar", or "ln" combines all these files we have just created into a single target file.</li>
</ol>
Notice something? This is the same for all build systems. It really doesn't matter whether your build script uses XML, a DSL, JSON, or a binary format. (No, this holy war won't have my participation.) What matters is this: all current build systems are based on the mantra of an output directory, where lots of small files are created. But that's not a necessity. So, here's the challenge:
Let's modify our build systems in a manner that replaces the output directory with a "virtual file system". If we do it right, we can be much, much faster.
As a proof of concept, I wrote a small Java program that extracts the Linux Kernel sources (aka the file "linux-3.14.2.tar.gz") and writes them into implementations of the following interface:
<pre>
public interface IVFS {
OutputStream createFile(String pPath) throws IOException;
void close() throws IOException;
}
</pre>
For any source file (45941 files) the method createFile is invoked, the file is copied into the OutputStream, and the stream is closed. Finally, the method IVFS.close() is invoked. Here's my program's output (times in milliseconds):
<pre>
Linux Kernel Extraction, NullVFS: 4159
Linux Kernel Extraction, SimpleVFS: 1740044
Linux Kernel Extraction, MapVFS: 78134
</pre>
The three implementations are:
<ol>
<li>The NullVFS inherits the idea of /dev/null: It is basically a write-only target. Of course, this isn't really useful. On the other hand, it shows how fast we could be, in theory, if our target were arbitrarily fast: In this case 4159 milliseconds. (This is, mainly, the time for reading the Linux Kernel sources.)</li>
<li>The SimpleVFS is basically, what we have now. Files are actually created. As expected, this is really slow, and it takes more than 1740 seconds.</li>
<li>Finally, the MapVFS is basically an In-Memory store. However, it might be really useful, because its close method is creating a big file with the actual contents on disk. With 78 seconds, this implementation is still close to the NullVFS. It demonstrates what might be really possible.</li>
</ol>
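To make the third idea concrete, here is a minimal sketch of what a MapVFS along these lines could look like. This is a hypothetical reconstruction, not the actual benchmark code, and the trivial record layout in close() merely stands in for a real archive format:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.util.LinkedHashMap;
import java.util.Map;

interface IVFS {
    OutputStream createFile(String pPath) throws IOException;
    void close() throws IOException;
}

// Hypothetical MapVFS: collects all created files in memory and writes
// one big file to the target stream on close, so the file system only
// ever sees a single large write instead of thousands of small ones.
class MapVFS implements IVFS {
    private final Map<String, ByteArrayOutputStream> files = new LinkedHashMap<>();
    private final OutputStream target;

    MapVFS(OutputStream pTarget) {
        target = pTarget;
    }

    @Override
    public OutputStream createFile(String pPath) {
        final ByteArrayOutputStream baos = new ByteArrayOutputStream();
        files.put(pPath, baos);
        return baos;
    }

    @Override
    public void close() throws IOException {
        // Dump the collected files as "path NUL length NUL bytes" records;
        // a real implementation would use a proper archive format.
        for (Map.Entry<String, ByteArrayOutputStream> e : files.entrySet()) {
            final byte[] data = e.getValue().toByteArray();
            target.write(e.getKey().getBytes("UTF-8"));
            target.write(0);
            target.write(Integer.toString(data.length).getBytes("UTF-8"));
            target.write(0);
            target.write(data);
        }
        target.close();
    }
}
```

The point of the design: createFile never touches the disk at all, so NTFS only has to handle one large sequential write at close() time.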
Conclusion: when creating one file with our actual contents, we need 78 seconds, as opposed to 1740 seconds. Of course, the IVFS interface is an oversimplification. The implementations certainly aren't thread safe. We have omitted the possibility to modify files that have previously been created. But the numbers are so impressive that I am personally convinced: if we a) modify our build systems to use a virtual file system as the output and b) provide fast implementations, then we have much to gain, fellow developers!
In practice, this won't be so easy. The biggest hurdle I am anticipating is the Java Compiler. Even the Java Compiler API (aka the interface javax.tools.JavaCompiler) is based on real files: we won't be able to use the Java Compiler as it is now. Instead, we have to adapt it to use the VFS. <a href="http://www.eclipse.org/jdt/core/">ECJ, the Eclipse Java Compiler</a>, might be our best option for that.
Who'll take the first step? Well, <a href="http://www.gradle.org/">Gradlers</a>, <a href="http://buildr.apache.org/">Buildrs</a>, and <a href="http://en.wikipedia.org/wiki/SCons">SConsers</a> of the world: here's something where you could make a <b>real</b> difference for your users!
<h2>The sins of our fathers (2014-05-01)</h2>
"Fathers shall not be put to death for their sons, nor shall sons be put to death for their fathers; everyone shall be put to death for his own sin."
(Deuteronomy 24:16)
But, of course, we are paying for our fathers' sins. Not so much our biological fathers or ancestors, but our predecessors.
In my case, this is what happened today: I wrote a very small Java program that extracts the Linux Kernel sources. (More on the reasons and background, hopefully, in my next posting. Suffice it for now that I'm not rewriting "tar xzf". I'm not <b>that</b> stupid! I had good reasons.)
Now, the Kernel sources contain, in particular, a small file named "aux.c". And my own program threw a FileNotFoundException when creating that file. Reproducibly!
The error message was, of course, meaningless, so I started thinking about all kinds of reasons:
<ol>
<li>Permissions, either those of the file itself, or the containing directory. No, the permissions were just fine!</li>
<li>Length of the path name. Actually, the full path name contained quite some characters, but still far away from the 256 that I am aware of.</li>
<li>Too many open files. No, I have had my share of beginner's faults and was properly closing my streams.</li>
</ol>
Any other ideas? I guess you won't get this one: some JDK programmer actually implemented a check for aux.*, nul.*, prn.*, etc. when creating a file, because these file names were in fact a problem with Windows in the past. Of course, the sensible solution would have been:
<ol>
<li>Wait for the error message from Windows.</li>
<li>Check the file name.</li>
<li>Throw a meaningful error message that explains the problem.</li>
</ol>
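A sketch of what that sensible solution could have looked like — all names here are invented for illustration, and the reserved-name list follows the classic DOS device names:

```java
import java.io.File;
import java.io.IOException;
import java.util.regex.Pattern;

// Hypothetical "sensible" variant: let the OS try first, and only when
// it rejects the file do we check for a legacy DOS device name and
// attach a meaningful explanation to the error.
public final class FriendlyFiles {
    // Legacy DOS device names that old Windows versions reserved.
    private static final Pattern RESERVED =
        Pattern.compile("(?i)^(aux|con|nul|prn|com[1-9]|lpt[1-9])(\\..*)?$");

    public static void create(File pFile) throws IOException {
        try {
            if (!pFile.createNewFile()) {
                throw new IOException("File already exists: " + pFile);
            }
        } catch (IOException e) {
            if (RESERVED.matcher(pFile.getName()).matches()) {
                // Step 3 of the list above: a meaningful error message.
                throw new IOException("Cannot create " + pFile
                    + ": the name is a reserved DOS device name on some"
                    + " Windows versions", e);
            }
            throw e;
        }
    }

    private FriendlyFiles() {}
}
```

On a system where "aux.c" is a perfectly legal name, this version simply creates the file; the reserved-name check only ever runs after an actual failure.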
That way, everything would have worked fine if the unthinkable happened: Windows eliminating that stupid restriction. Because that is exactly what happened. There is no problem with creating that file anymore. Convince yourself:
<pre>
$ touch aux.c
jwi@MCJWI01 /c/Users/jwi/workspace/afw-vfs
$ ls -al aux.c
-rw-r--r--+ 1 jwi Domain Users 0 May 1 16:02 aux.c
jwi@MCJWI01 /c/Users/jwi/workspace/afw-vfs
</pre>
So, our JDK programmer has managed to move the problem with the "aux.c" file name from Windows to the JDK. Thanks a lot!
<h2>Installing Obsolete Java JDK versions on Fedora Linux (2013-09-10)</h2>
As a Java developer, one is frequently forced to use obsolete, or even deprecated, Java versions. So I came to the necessity to install Java 6 on Fedora 19. The problem: in the Fedora 19 repositories, there's only Java 7 and 8. Convince yourself:<br />
<br />
<pre>$ sudo yum list | grep openjdk
java-1.6.0-openjdk.x86_64 1:1.6.0.0-59.1.10.3.fc16 installed
java-1.6.0-openjdk-devel.x86_64 1:1.6.0.0-59.1.10.3.fc16 installed
java-1.6.0-openjdk-javadoc.x86_64 1:1.6.0.0-59.1.10.3.fc16 installed
java-1.7.0-openjdk.x86_64 1:1.7.0.60-2.4.2.0.fc19 @updates
java-1.7.0-openjdk-demo.x86_64 1:1.7.0.60-2.4.2.0.fc19 @updates
java-1.7.0-openjdk-devel.x86_64 1:1.7.0.60-2.4.2.0.fc19 @updates
java-1.7.0-openjdk-javadoc.noarch 1:1.7.0.60-2.4.2.0.fc19 @updates
java-1.7.0-openjdk-src.x86_64 1:1.7.0.60-2.4.2.0.fc19 @updates
java-1.7.0-openjdk-accessibility.x86_64
java-1.8.0-openjdk.i686 1:1.8.0.0-0.9.b89.fc19 updates
java-1.8.0-openjdk.x86_64 1:1.8.0.0-0.9.b89.fc19 updates
java-1.8.0-openjdk-demo.x86_64 1:1.8.0.0-0.9.b89.fc19 updates
java-1.8.0-openjdk-devel.i686 1:1.8.0.0-0.9.b89.fc19 updates
java-1.8.0-openjdk-devel.x86_64 1:1.8.0.0-0.9.b89.fc19 updates
java-1.8.0-openjdk-javadoc.noarch 1:1.8.0.0-0.9.b89.fc19 updates
java-1.8.0-openjdk-src.x86_64 1:1.8.0.0-0.9.b89.fc19 updates
</pre>
The same goes for Fedora 18 and 17, btw. (I'll skip the output here. Note that processing these commands will take some time, as yum will download the complete repository metadata for the respective version.)
<br />
<pre>$ sudo yum --releasever=17 list | grep openjdk
$ sudo yum --releasever=18 list | grep openjdk
</pre>
However, Java 6 is available for Fedora 16!
<pre>
$ export http_proxy=http://my.proxy.server:8080   # only if you are behind a proxy
$ wget http://archives.fedoraproject.org/pub/archive/fedora/linux/releases/16/Fedora/x86_64/os/Packages/java-1.6.0-openjdk-1.6.0.0-59.1.10.3.fc16.x86_64.rpm
$ wget http://archives.fedoraproject.org/pub/archive/fedora/linux/releases/16/Fedora/x86_64/os/Packages/java-1.6.0-openjdk-devel-1.6.0.0-59.1.10.3.fc16.x86_64.rpm
$ wget http://archives.fedoraproject.org/pub/archive/fedora/linux/releases/16/Fedora/x86_64/os/Packages/java-1.6.0-openjdk-javadoc-1.6.0.0-59.1.10.3.fc16.x86_64.rpm
</pre>
Now, my first (and preferred) attempt to install these would be
<pre>
$ sudo yum localinstall --obsoletes java-1.6.0-openjdk*
</pre>
which fails due to the following error message:
<pre>
error: Failed dependencies:
java-1.6.0-openjdk is obsoleted by (installed) java-1.7.0-openjdk-1:1.7.0.60-2.4.2.0.fc19.x86_64
java-1.6.0-openjdk-devel is obsoleted by (installed) java-1.7.0-openjdk-1:1.7.0.60-2.4.2.0.fc19.x86_64
java-1.6.0-openjdk-javadoc is obsoleted by (installed) java-1.7.0-openjdk-1:1.7.0.60-2.4.2.0.fc19.x86_64
</pre>
(Please contact me, if you have an idea on how to get rid of these!) Fortunately, there's another possibility, which does the job quite neatly:
<pre>
$ sudo rpm --nodeps -i java-1.6.0-openjdk*
</pre>
If you're an Eclipse user, the JDK can now be found in /usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64/
<h2>Slow Startup of Cygwin Bash (2013-05-03)</h2>
When on Windows, I never use another terminal/shell than <a href="http://code.google.com/p/mintty/">MinTTY</a>/<a href="http://www.cygwin.com">Cygwin</a> Bash. So I was heavily annoyed by a problem that started quite some time ago: suddenly, when I opened MinTTY, it took 10 seconds or so before the bash prompt became visible. Today, I finally discovered the culprit by reading another post.
As you possibly know, there is a directory /etc/profile.d containing scripts that are executed when a login shell is starting. Now, one of these scripts, called <i>bash_completion.sh</i>, is extremely slow. You can try for yourself:
<pre>
$ time . /etc/profile.d/bash_completion.sh
real 0m8.908s
user 0m1.402s
sys 0m7.310s
</pre>
In other words, solving the issue for me was as simple as renaming this script:
<pre>
$ mv /etc/profile.d/bash_completion.sh /etc/profile.d/bash_completion.sh.disabled
</pre>
Voila! My MinTTY opens immediately again.
<b>Update:</b> The above time command is only slow when the script is executed for the first time in a session. In other words, if your bash already started slowly because it executed the script, then you might see a result like this:
<pre>
$ time . /etc/profile.d/bash_completion.sh
real 0m0.000s
user 0m0.000s
sys 0m0.000s
</pre>
<h2>RfC: Improving Maven's Performance (2012-11-23)</h2>
I am typically working in projects that are relatively complex, like one parent project and 20 modules, or so. To handle the complexity, I have learned to use and appreciate <a href="http://maven.apache.org/">Maven</a>. OTOH, after 8 years or so with Maven, I am still missing some aspects of <a href="http://ant.apache.org/">Ant</a> builds, in particular the speed. Maven does a good job when it comes to understandable build scripts (the biggest problem of Ant), but it can be painfully slow. Why is that?
I could name several reasons, but the most obvious seems to be that Maven always builds the whole project, whereas Ant allows you to implement logic like<br />
<br />
<pre> if (module.isUpToDate()) {
     // Ignore it
 } else {
     // Build it
 }
</pre>
Of course, Ant's syntax is completely different, but that's not the point, unless you are a fanatic XML hater and really believe that a Groovy or JSON syntax is faster by definition. (If so, stop reading, you picked up the wrong posting!)<br />
The absence of such an uptodate check isn't necessarily a problem. Most Maven plugins are nowadays implementing an uptodate check for themselves. OTOH, if every plugin does
an uptodate check and the module is possibly made up of other modules itself, then it sums up.<br />
Apart from that, uptodate checks can be unnecessarily slow. Consider the following situation,
which I have quite frequently:<br />
A module contains an XML schema. JAXB is used to create Java classes from the schema.
If the schema is complex, then the module might easily have several thousand Java
source files.<br />
This means that the Compiler plugin needs to check the timestamps of several thousand Java and .class files before it can detect that it is uptodate. Likewise, the Jar
Plugin will check the same thousands of .class files and compare them against the jar file, before building it.<br />
That's sad, because we could have a very easy and quick uptodate check by comparing the time stamps of the XML schema and the pom file (it does affect the build, doesn't it?) with that of the jar file. If we notice that the jar file is uptodate with regard to the other two, then we might ignore the module altogether: ignoring it would mean to completely remove it from the reactor and not invoke the Compiler or Jar plugins at all.
Okay, that would help, but how do we achieve that without breaking the complete logic of Maven? Well, here's my proposal:
<br />
<ol>
<li>Introduce a new lifecycle phase into Maven, which comes before everything else.
(Let's call it "init".) In other words, a typical Maven lifecycle would be
"init, validate, compile, test, package, integration-test, verify, install, deploy" (see <a href="http://maven.apache.org/guides/introduction/introduction-to-the-lifecycle.html">this</a> document, if you need to learn about these phases).</li>
<li>Create a new project property called "uptodate" with a default value of false (upwards compatibility).</li>
<li>Create a new Maven plugin called "maven-init-plugin" with a configuration like
<pre> groupId: org.apache.maven.plugins
 artifactId: maven-init-plugin
 configuration:
   sourceResources:
     sourceResource:
       directory: src/main/schema
       includes:
         include: **/*.xsd
     sourceResource:
       directory: .
       includes:
         include: pom.xml
   targetResources: ${project.build.directory}
     includes:
       include: *.jar
 (Excuse the crude syntax, I have no idea how to display XML on blogspot.com!
 I hope you do get the idea, though.)
 The plugin's purpose would be to perform an uptodate check by comparing source-
 and target resources, and set the "uptodate" flag accordingly.
</pre>
</li>
<li>Modify the Maven core as follows: After the "init" phase, search for modules
with <code>isUptodate() == true</code> and remove those modules from the reactor.
Then run the other lifecycle phases.</li>
</ol>
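To illustrate the kind of check the proposed maven-init-plugin would perform, here is a minimal sketch (all names invented): a module counts as uptodate if every target file is newer than the newest source file.

```java
import java.io.File;

// Hypothetical core of the proposed uptodate check: compare the newest
// source timestamp (schema, pom) against the target files (the jar).
public final class UptodateChecker {
    public static boolean isUptodate(File[] pSources, File[] pTargets) {
        long newestSource = Long.MIN_VALUE;
        for (File f : pSources) {
            newestSource = Math.max(newestSource, f.lastModified());
        }
        if (pTargets.length == 0) {
            return false; // Nothing built yet.
        }
        for (File f : pTargets) {
            // A missing or older target means the module must be rebuilt.
            if (!f.exists() || f.lastModified() < newestSource) {
                return false;
            }
        }
        return true;
    }

    private UptodateChecker() {}
}
```

Note that this check touches only a handful of files (the schema, the pom, the jar), no matter how many thousands of generated .java and .class files the module contains.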
That's it. Perfectly upwards compatible. Moderate changes. Much faster builds. How about that?
<h2>DB2 Weirdness (2012-11-16)</h2>
In the year 2012, what serious database might require code like this:
<pre>
private ResultSet getColumns(DatabaseMetaData pMetaData,
String pCat,
String pSchema,
String pTableName)
throws SQLException {
if (pMetaData.getDatabaseProductName().startsWith("DB2")) {
final String q = "SELECT null, TABSCHEMA, TABNAME, COLNAME,"
+ " CASE TYPENAME"
+ " WHEN 'BIGINT' THEN -5"
+ " WHEN 'BLOB' THEN 2004"
+ " WHEN 'CHARACTER' THEN 1"
+ " WHEN 'DATE' THEN 91"
+ " WHEN 'INTEGER' THEN 4"
+ " WHEN 'SMALLINT' THEN 5"
+ " WHEN 'TIMESTAMP' THEN 93"
+ " WHEN 'VARCHAR' THEN 12"
+ " WHEN 'XML' THEN -1"
+ " ELSE NULL"
+ " END, TYPENAME, LENGTH FROM SYSCAT.COLUMNS"
+ " WHERE TABSCHEMA=? AND TABNAME=?";
final PreparedStatement stmt =
pMetaData.getConnection().prepareStatement(q);
stmt.setString(1, pSchema);
stmt.setString(2, pTableName);
return stmt.executeQuery();
} else {
return pMetaData.getColumns(pCat, pSchema, pTableName, null);
}
}
</pre>
or this:
<pre>
private ResultSet getExportedKeys(DatabaseMetaData pMetaData)
throws SQLException {
if (pMetaData.getDatabaseProductName().startsWith("DB2")) {
final String q = "SELECT null, TABSCHEMA, TABNAME,"
+ " PK_COLNAMES, null, REFTABSCHEMA, REFTABNAME,"
+ " FK_COLNAMES, COLCOUNT FROM SYSCAT.REFERENCES"
+ " WHERE TABSCHEMA=? OR REFTABSCHEMA=?";
final PreparedStatement stmt =
pMetaData.getConnection().prepareStatement(q);
stmt.setString(1, "EKFADM");
stmt.setString(2, "EKFADM");
return stmt.executeQuery();
} else {
return pMetaData.getExportedKeys(null, "EKFADM", null);
}
}
</pre>
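For reference, the magic numbers in the CASE expression of the first example are the generic JDBC type codes from java.sql.Types (note that Types.INTEGER is 4 and Types.SMALLINT is 5). The same mapping, spelled out in Java as a hypothetical helper:

```java
import java.sql.Types;

// Maps a DB2 TYPENAME from SYSCAT.COLUMNS to the generic JDBC type code,
// mirroring the CASE expression in the SQL above. Returns null for an
// unmapped type, just like the SQL's ELSE NULL branch.
public final class Db2TypeMapper {
    public static Integer toJdbcType(String pDb2TypeName) {
        switch (pDb2TypeName) {
            case "BIGINT":    return Types.BIGINT;      // -5
            case "BLOB":      return Types.BLOB;        // 2004
            case "CHARACTER": return Types.CHAR;        // 1
            case "DATE":      return Types.DATE;        // 91
            case "INTEGER":   return Types.INTEGER;     // 4
            case "SMALLINT":  return Types.SMALLINT;    // 5
            case "TIMESTAMP": return Types.TIMESTAMP;   // 93
            case "VARCHAR":   return Types.VARCHAR;     // 12
            case "XML":       return Types.LONGVARCHAR; // -1
            default:          return null;
        }
    }

    private Db2TypeMapper() {}
}
```

Using the named constants instead of the raw numbers makes such workaround code considerably less error-prone.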
<h2>BPM Process Migration (2012-10-18)</h2>
Having worked in several BPM projects for quite some time, I usually enjoy the help of the BPM server. In particular, BPMN etc. are excellent for conversations with the customers. Of course, you still need to translate the customer's desires into your own technical picture (which might differ considerably), but in the end you're likely to get something that gives the customer an "I know this" feeling, which is worth a lot. Of course, there are still gaps, problems, and all that stuff. However, what really sucks are upgrades of the project version.<br />
<br />
<b>Disclaimer</b>: I am no BPM expert, much less skilled in the theory, just an experienced user. This is just the result of my thinking. In particular, don't mistake this post for a statement of my employer, <a href="http://www.softwareag.com/">Software AG</a>, or <a href="http://www.fujitsu.com/">Fujitsu</a>. It reflects my impression of how to work with the <a href="http://www.softwareag.com/corporate/products/wm/application_integration/integration_server/overview/default.asp">webMethods BPM Server</a>, or the <a href="http://www.fujitsu.com/global/services/software/interstage/">Fujitsu Interstage Server</a>. I have no idea how these ideas can be transferred to other BPM tools like, for example, <a href="http://servicemix.apache.org/bpm.html">Apache ServiceMix</a>, or whatever.<br />
<br />
<h3>
Terminology</h3>
<br />
A <u>BPM Process Model</u> in the sense of this posting is a set of <u>Process Nodes</u> and a set of transitions between these nodes. In what follows, let PM be a process model, PN be the set of PM's nodes, and TPN the set of transitions. PN contains two special subsets, the start nodes (SPN) and the end nodes (EPN). A process model typically reflects some kind of workflow and can be graphically visualized (see, for example, this picture:<br />
<br />
<a href="http://fearnoproject.files.wordpress.com/2011/01/bmp-example.png?w=373&h=207" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="177" src="http://fearnoproject.files.wordpress.com/2011/01/bmp-example.png?w=373&h=207" width="320" /></a>
<br />
<br />
<br />
<br />
The possibility of graphical visualization is what's so attractive about BPM for non-technical folk.)<br />
A <u>BPM Process State</u> is an element from a universe U, typically an unstructured set of named objects. In the case of Interstage BPM, these named objects are strings; in the case of webMethods BPM, these objects can be complex (maps, arrays, etc.: the webMethods <a href="http://en.wikipedia.org/wiki/WebMethods_Flow"><u>Pipeline</u></a>):<br />
<br />
A <u>BPM Process Instance</u> is an element of the set PN x U: a combination of a process node and a process state. This definition is too general, of course. For example, the node must be reachable from a start node via a series of transitions out of TPN. However, for now we can ignore this.<br />
A BPM Process Model can have multiple versions. These versions are usually related, for example, the sets of process nodes and transitions are frequently subsets. In general, however, they can be completely unrelated.<br />
A BPM Process Migration involves<br />
<ol>
<li>the creation of one or more new process models, or model versions.</li>
<li>Possibly the removal of existing process models and process instances.</li>
<li>Possibly a migration of process instances from one, or more process models to a new process model, typically a new version of their current model.</li>
</ol>
This last part is the one that sucks, because it is completely unsupported. The developers are completely left alone. (All you can do is to ensure some kind of compatibility, which usually implies leaving old software versions, or at least parts thereof, in place and hoping that old and new versions work fine together.)<br />
<br />
But what could such tools look like? This is what my post is about:<br />
<ul>
<li>It should be possible to replace process models with new versions by migrating the process instances.</li>
<li>This means that a developer ought to be able to specify a mapping from the set PN1 x U to PN2 x U. (The mapping would usually be a Java class implementing a special interface.)</li>
</ul>
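To make this concrete, here is a minimal Java sketch of what such a mapping interface might look like. Everything here is invented for illustration: neither webMethods BPM nor Interstage BPM actually ships such an API, which is exactly the complaint.

```java
import java.util.Map;

// Hypothetical sketch only: a hook that maps an instance (node, state)
// of the old model version, i.e. an element of PN1 x U, to an instance
// of the new version, i.e. an element of PN2 x U.
public interface ProcessInstanceMigrator {

    /** A process instance in the new model: a process node plus a state. */
    final class Instance {
        public final String node;
        public final Map<String, Object> state;

        public Instance(String node, Map<String, Object> state) {
            this.node = node;
            this.state = state;
        }
    }

    /**
     * Maps an instance of the old model version to an instance of the
     * new version. Implementations may move the instance to a different
     * node, enrich the state, or both.
     */
    Instance migrate(String oldNode, Map<String, Object> oldState);
}
```

A migration tool could then simply apply an implementation of this interface to every running instance of the old model version.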
Example: A process state usually contains entries like these:<br />
<ul>
<li>ID (a database or other internal ID, for example of an incoming order; the process-specific details are stored elsewhere and not as part of the pipeline, which would otherwise grow too big. However, the details are easily accessible.)</li>
<li>State (a human-readable process state, like "unconfirmed", "available", or "acknowledged".)</li>
<li>Names (for example, the name of the orderer, etc. These are frequently not really required, but redundant, and just copied from the details for the sake of convenience.)</li>
</ul>
<br />
A new process version might introduce a new ID (for example, from another external system, which is now connected to the process), a new state, or something like that. In order to get the existing process instances working with the new model, we can either<br />
<ul>
<li>modify the process so that it supports null values, even if the values are mandatory from a business perspective, or</li>
<li>enhance the process state by adding these new values as part of the migration.</li>
</ul>
Guess which I'd prefer? And guess which one we are left with now?<br />
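The second option, enriching the state as part of the migration, is easy to sketch in plain Java. This is an illustration only: the state is modeled as a simple map, and the entry names "externalId" and "syncState" are invented for the example.

```java
import java.util.HashMap;
import java.util.Map;

public final class StateEnricher {

    private StateEnricher() {
        // utility class, no instances
    }

    /**
     * Copies the old process state and fills in the entries that the new
     * model version expects, so that existing instances keep running.
     * The entry names are invented for illustration.
     */
    public static Map<String, Object> enrich(Map<String, Object> oldState) {
        Map<String, Object> newState = new HashMap<>(oldState);
        // The new external system's ID is unknown for old instances; mark
        // it for later synchronization rather than leaving a field null
        // that is mandatory from a business perspective.
        newState.putIfAbsent("externalId", "UNSYNCED");
        // A new state value introduced by the new model version.
        newState.putIfAbsent("syncState", "pending");
        return newState;
    }
}
```

The point is that the enrichment runs once, at migration time, instead of polluting the process itself with null checks forever.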
<br />
<br />Jochen Wiedmannhttp://www.blogger.com/profile/09855969156780632315noreply@blogger.com0tag:blogger.com,1999:blog-8124028403626039195.post-72161059811569366362012-08-09T09:16:00.000+02:002012-08-09T09:16:02.327+02:00Maven and property filesAfter so many years (since 2004, indeed when the first version of Maven 2 was still in development), I am still learning new stuff every day. For example, so far I was always specifying properties in my POM file. But you <b>can </b>use external property files! There is a <a href="http://mojo.codehaus.org/properties-maven-plugin/">Maven Properties Plugin</a> over at <a href="http://mojo.codehaus.org/">Mojo</a> with a goal "properties:read-project-properties".<br />Jochen Wiedmannhttp://www.blogger.com/profile/09855969156780632315noreply@blogger.com2tag:blogger.com,1999:blog-8124028403626039195.post-84081541345319579252012-08-08T16:03:00.000+02:002012-08-09T09:22:51.270+02:00Maven is groovy!Recently, I had another one of those cases where Maven <b>almost</b> does the right thing, but not quite. Let me explain the use case:<br />
I've got a software component that can initialize the database from an SQL script. Such an SQL script (in what follows: The DDL, or data definition language script) is ideally generated by the <a href="http://docs.jboss.org/hibernate/orm/3.3/reference/en/html/toolsetguide.html#toolsetguide-s1-7">Hibernate Schema Exporter</a>, aka "hbm2ddl", which in turn is available in Maven by running the <a href="http://mojo.codehaus.org/maven-hibernate3/hibernate3-maven-plugin/hbm2ddl-mojo.html">Hibernate3 Maven Plugin</a>. But what if just creating the database is not sufficient, and you need to run a second SQL script (in what follows: The data script) to populate the DB with some initial entries? Well, I came up with the following solution:<br />
<ol>
<li>At build time, have Maven create the DDL script (below target/classes, so that it is available at run time)</li>
<li>At development time, manually create the data script (in src/main/db)</li>
<li>At build time, have Maven concatenate these scripts into a third SQL script (in what follows: The concatenated script, also below target/classes, as it must also be available at runtime)</li>
</ol>
Question: How do we do that last step? The most obvious solution was the <a href="http://maven.apache.org/plugins/maven-antrun-plugin/">Maven Antrun Plugin</a>; Ant has even got a <a href="http://ant.apache.org/manual/Tasks/concat.html">"concat" task</a>, which should do exactly what I want (including up-to-date checks). However, I wasn't really happy with that solution, because Ant, or the "concat" task, behaved too unpredictably. (For example, no error was produced if either of the source files didn't exist. And error checking is where Ant scripts become really nasty.) In the end, I had to admit: It didn't work.<br />
So I came up with another idea: Why not embed a small Groovy script in the Maven POM? And, as is usually the case, someone else already had that idea, and there is a <a href="http://docs.codehaus.org/display/GMAVEN/Executing+Groovy+Code">Maven Plugin</a> which already provides just that:<br />
I can embed a Groovy snippet into my Maven POM and have it executed at a suitable point of my build script. Here's the snippet I came up with:<br />
<blockquote>
<plugin><br>
<groupId>org.codehaus.gmaven</groupId><br>
<artifactId>gmaven-plugin</artifactId><br>
<version>1.4</version><br>
<executions><br>
<execution><br>
<phase>prepare-package</phase><br>
<goals><br>
<goal>execute</goal><br>
</goals><br>
<configuration><br>
<source><![CDATA[<br>
def concat(s1, s2, t) {<br>
java.io.File f1 = new java.io.File(s1)<br>
java.io.File f2 = new java.io.File(s2)<br>
java.io.File ft = new java.io.File(t)<br>
long l1 = f1.lastModified()<br>
long l2 = f2.lastModified()<br>
long lt = ft.lastModified()<br>
if (l1 == 0) {<br>
throw new IllegalStateException("Source file must exist:" + f1);<br>
} else if (l2 == 0) {<br>
throw new IllegalStateException("Source file must exist:" + f2); <br>
} else if (lt == 0 || l1 > lt || l2 > lt) {<br>
java.io.File pd = ft.getParentFile()<br>
if (pd != null && !pd.isDirectory() && !pd.mkdirs()) {<br>
throw new IOException("Unable to create parent directory: " + pd)<br>
}<br>
println("Creating target file: " + ft)<br>
println("Source1 = " + f1)<br>
println("Source2 = " + f2)<br>
java.io.FileInputStream fi1 = new java.io.FileInputStream(f1)<br>
java.io.FileInputStream fi2 = new java.io.FileInputStream(f2)<br>
ft.append(fi1)<br>
ft.append(fi2)<br>
fi1.close()<br>
fi2.close()<br>
} else {<br>
println("Target file is uptodate: " + ft)<br>
println("Source1 = " + f1)<br>
println("Source2 = " + f2)<br>
}<br>
}<br>
concat("target/classes/com/softwareag/de/s/framework/demo/db/derby/initZero.sql",<br>
"src/main/db/init0.sql",<br>
"target/classes/com/softwareag/de/s/framework/demo/db/hsqldb/init0.sql")<br>
]]></source><br>
</configuration><br>
</execution><br>
</executions><br>
</plugin><br>
</blockquote>
<br />
A few remarks on the snippet. First, the file handling: In Java, I would have to copy the source streams to the target file manually, perhaps in combination with a byte array, for performance reasons. But in Groovy a file has got a method append(InputStream), which does exactly that. And, although I am declaring the variable ft above as an instance of java.io.File, it is nevertheless a Groovy file, with all the <a href="http://groovy.codehaus.org/groovy-jdk/java/io/File.html">added sugar of Groovy</a>! Which is why embedding Groovy into the POM is much nicer than embedding Java!<br />
<br />
In the future, I will most likely never write Maven plugins again, and use Groovy scripts instead.<br />
<br />
Second: We are inside a Maven POM, or, to put it differently: inside an XML file. As a consequence, I've got to be careful with characters like '&' or '<'. Which is why I am using the entities "&gt;" and "&amp;" instead. I might as well use a CDATA section, or, even better: an external script (in src/main/groovy). However, I believed this posting's point would come across better with an internal (albeit somewhat lengthy) snippet. Hope you agree, so let's be groovy!<br />
<br />
<br />Jochen Wiedmannhttp://www.blogger.com/profile/09855969156780632315noreply@blogger.com1tag:blogger.com,1999:blog-8124028403626039195.post-42746733778130767302011-11-05T19:36:00.015+01:002011-11-06T19:44:14.894+01:00ICULast weekend, I had the unexpected opportunity to participate in the nightly snoring contest held at the <a href="http://www.medizin.uni-tuebingen.de/index.php?id=21364&site=ukt&lang=de">intensive care unit (ICU) of the neurological clinic, university of Tübingen</a>. Such a chance comes once in a lifetime, so I could not miss it. Here's how it went:<br /><br />My wife certainly considered me to be the odds-on favorite. But, alas, even wives can overestimate their husband: Bed 1 (the contents of which was yours truly) lost by far. At about 21:00, bed 3 opened with a sonorous snore of about 80 decibels (about enough to be heard in a disco) and immediately took the lead. But even such an awesome competitor had to give in: During the night, bed 2 never ceased to impress with staccati of four to five 70-decibel snores in a row, taking the first prize with him.<br /><br />Every morning, a friendly female woke me: one of the doctors, who asked to take my blood and apologized in so many words for waking me. When that was done, she continued to do the same at the other beds, effectively waking all of us.<br /><br />On the last morning there (Sunday) I prepared a little speech for her, which I could never deliver, because I was moved from the ICU to a normal bed in the night. So, I am trying to do it here and now:<br /><br />I don't know whether any of my readers has ever spent a night in an ICU bed. It's deeply depressing. The only things to look at are the bubbles in the bottles over you, which are pouring liquids into your veins, or the monitor, which is showing your blood pressure, heartbeat, and stuff like that. With three apoplectic strokes in a row behind you (I promise to stop counting in public now. My inner self is a different matter.)
there isn't much to expect or even hope for. Forget about sleep: There is a continuous background noise. The lights are never completely out, and every five minutes some machine is beeping an alarm, ideally on another bed, but from time to time it's your own. (Usually because you turned over to the other side.) Think of a Jura coffee machine that requests service to imagine the sound. In my worst moment, the nurse saw fit to blow oxygen into my nose because I seemed to be losing. (Usually an indication of a heart that no longer works properly, fortunately not in this case.)<br /><br />After such a night, waking up is a gift! I'm still alive! I can kiss my wife today. With a bit of luck, I can hold our daughter. I can enjoy the smell of coffee. (Something I couldn't do in the last months, even if I had coffee. But it works again!) So, don't apologize, Dr., you're more than welcome. I can't tell whether the other gentlemen share my feelings, but I'll be glad to give a few centiliters of blood, if I can have this day in exchange!Jochen Wiedmannhttp://www.blogger.com/profile/09855969156780632315noreply@blogger.com0tag:blogger.com,1999:blog-8124028403626039195.post-31103564220466129442011-11-05T19:19:00.004+01:002011-11-05T19:32:02.335+01:00Still there, world!This is not the end. But, let's face it: This (or any future post) might very well be my last. (In fact, last Saturday I'd have been surprised about the additional week that I have had since then.) So, it seems to be in order to prepare. So, how can a coder like me leave the world with grace? Like this:<br /><br /><blockquote><br />#include <stdio.h><br /><br />main()<br />{<br />printf("Good Bye, World \n");<br />}<br /></blockquote><br /><br />Sadly, I'm no wizard.
For Dennis Ritchie, this would have been<br /><br /><br /><blockquote><br />#include <stdio.h><br /><br />main()<br />{<br />printf("GOOD BYE, WORLD!\n");<br />}<br /></blockquote><br /><br />(One of <a href="http://en.wikipedia.org/wiki/Death_%28Discworld%29">Death's</a> silly jokes. That'd be style!)<br /><br />But, for now and me, the only proper thing seems to be:<br /><br /><blockquote><br />#include <stdio.h><br /><br />main()<br />{<br />printf("Still there, World \n");<br />}<br /></blockquote>Jochen Wiedmannhttp://www.blogger.com/profile/09855969156780632315noreply@blogger.com1tag:blogger.com,1999:blog-8124028403626039195.post-49586493810338741942011-09-19T17:04:00.005+02:002011-09-19T17:42:57.560+02:00A clash of generationsYesterday was a relatively minor election in Germany, more precisely in Berlin, in its roles as German capital and as one of the German federal states. The most remarkable thing about that election was this: The German <a href="http://en.wikipedia.org/wiki/Pirate_party">pirate party</a> (<a href="http://en.wikipedia.org/wiki/Piratenpartei_Deutschland">direct link in German</a>) got no less than 9% of the votes and 15 seats in Berlin's state parliament. (Luckily, they didn't get more, because they didn't have more candidates. In other words, additional votes would likely have been lost....)<br /><br />The reactions from the mainstream media are remarkably similar to those responding to the first successes of the <a href="http://en.wikipedia.org/wiki/Green_party">German green party</a> (<a href="http://de.wikipedia.org/wiki/B%C3%BCndnis_90/Die_Gr%C3%BCnen">direct link in German</a>) about 35 years ago, along the lines of "This could only happen in a city-state, like Berlin, not in a territorial state." (I should mention that just this year Germany got its first green prime minister in a federal state, <a href="http://de.wikipedia.org/wiki/Baden-W%C3%BCrttemberg">Baden Württemberg</a>, which is a territorial state.)
Another typical reaction: "The accountability of being in parliament will quickly dissolve voters' illusions", expecting that the result will be quite different after the next election.<br /><br />I believe what most of these responders don't get is that the pirate party is driven by a <span style="font-weight:bold;">clash of generations</span>. They won't go away so quickly, if at all.<br /><br />The pirates' voters are mostly people below 40 years. That's exactly the generation that was raised with, or even in, the Internet. To them, the Internet provides value. It's important. Things like "Vorratsdatenspeicherung" (<a href="http://en.wikipedia.org/wiki/Telecommunications_data_retention">telecommunications data retention</a>), real-name policies, and various degrees of censorship (regardless of the alleged reason: terrorism, child pornography, nazism, not to mention political grounds (Iran, China, North Korea), or copyright violations) are threatening this value. Threatening something important, that is.<br /><br />Take, on the other hand, the elder generation. The Internet isn't important to them. It's a toy that their children or grandchildren are playing with. A real lot of them are even considering it a threat. (I remember some politicians assuming that the recent terror attacks in Norway wouldn't have happened without the Internet. Similar voices can be heard after each and any amok run. Guess the age of such politicians.) They are quick to call for exactly those things that the younger generation perceives as a threat. To the elders, it's the cure.<br /><br />To me, that's the same situation that we had when the green party was founded. Our generation considered the protection of the environment as important, our parents and grandparents considered it a threat (mostly economically). The greens didn't go away. Their time came when our generation and those of our children outnumbered our ancestors.
I believe the time of the pirates (or whoever follows them, should they break apart) will come, too. Perhaps we'll have the first pirate prime minister in another 30 years?Jochen Wiedmannhttp://www.blogger.com/profile/09855969156780632315noreply@blogger.com0tag:blogger.com,1999:blog-8124028403626039195.post-55253053625568074722011-08-28T00:27:00.009+02:002011-08-29T12:19:13.688+02:00The mess that is m2e connectors<ul><li>Warning: The following is most likely stubborn, unreasonable, one-sided, and ignores a lot of facts of which I am unaware.</li></ul>I have been an <a href="http://www.eclipse.org/">Eclipse</a> user for approximately 10 years. I have also been a user of <a href="http://maven.apache.org/">Maven 2</a> for maybe 4-5 years. As a consequence, I have also been using the former <a href="http://m2eclipse.sonatype.org/">M2Eclipse</a> (an Eclipse plugin for using Maven as the <a href="http://www.eclipse.org/articles/Article-Builders/builders.html">Eclipse Builder</a>) since its early days. While I am quite happy with Eclipse and Maven, the experience with M2Eclipse has never been free from annoyance and fights. To be honest, it has always been hard to convince my colleagues to use it. And I fear that battle is lost now. I wasn't the one losing it: The decisive strike was administered by M2E itself...
<br />
<br />M2Eclipse has recently been moved to the Eclipse Foundation. It is now called M2E, <a href="http://wiki.eclipse.org/M2E_Extension_Development">lives at eclipse.org</a> and can be installed from within Eclipse Indigo as a standard plugin, just like CDT, or WTP, which is, of course, a good thing.
<br />
<br />So, when Indigo was published (together with M2E 1.0 as a simultaneous release), I rushed to download it in the hope of a better user experience. But the first thing I noted was: M2E was showing errors in practically every POM I have ever written, and there are quite a few of them, including those of several Apache projects and those at work. So, as a first warning:
<br />
<br /><span style="font-weight:bold;">M2E 1.0 is incompatible with its predecessors. If you want to carry on using it without problems, don't upgrade to Indigo, or try using an older version of M2Eclipse with it. (I haven't tried whether that works.)</span> The reason for this intentional incompatibility (!) are the so-called M2E connectors, which, I am sure, have driven a lot of people to madness since their invention. In what follows, I'll try to outline my understanding of what the connectors are and why I do consider them a real, bloody mess.
<br />
<br />I am still not completely sure what problem the connectors ought to solve, but from my past experiences I guess something like this:
<br />
<br />M2E allows you to run Maven manually. You can invoke a goal like "mvn install" from within Eclipse just as you would do it from the command line. That works (and always worked) just fine. Unfortunately, Maven is also invoked automagically from M2E whenever Eclipse builds the project, for example after a clean. In such cases M2E acts as an "Eclipse Builder". It is these latter invocations that people have always had problems with and that the connectors should handle better. First of all, what are these problems?
<br />
<br /><ol><li>Builders can be invoked quite frequently. If automatic builds are enabled and you are saving after every 10 keys pressed, the builders can be invoked every 20 seconds or so.</li><li>The UI is mainly locked while a builder is running. In conjunction with the frequent invocation, that means that the UI can be locked 80% of the time, which a human developer considers extremely painful, in particular if the builder invokes Maven, which can take quite some time.</li><li>Some Maven plugins (I am unaware of any in reality, but the M2E developers mention this quite frequently) assume that they are invoked from the command line. That means, in particular, that System.exit is called once Maven is done. Consequently, they consider use of resources as unproblematic: They acquire a lot, including memory, and don't release it properly. The resources are released automatically by System.exit. But that doesn't work in M2E, which runs as long as Eclipse does (meaning the whole day for Joe Average Developer) and invokes Maven (and the plugin with it) again and again.</li><li>M2E doesn't know whether a plugin (or, more precisely, a plugin's goal) should run as part of an automatic build. For example, source and resource generators typically should, artifact generators typically should not. Consequently, a lot of unnecessary plugins are invoked by the automatic build, slowing down the builder even more, while necessary goals are not. This is not what people expect and leads to invalid behaviour on the side of the developer. For example, I keep telling my colleagues again and again that they should invoke Maven manually, if the test suite depends on a generated property file.</li></ol>But how should connectors fix this? I am partially speculating here, but my impression is this: <span style="font-weight: bold;">When Maven is invoked as a builder, then it is modified by M2E to no longer invoke plugins directly.
Instead, Maven invokes the plugin's connector, which in turn invokes the plugin.</span> The connector ought to know the plugin and decide whether a goal must be invoked as part of the automatic build process or not. But that means, in particular:
<br />
<br /><span style="font-weight: bold;">M2E can invoke a plugin as part of the automatic build process if, and only if, there is a connector for the plugin, or you specially configure the plugin.</span> (More on that configuration later on.)
<br />
<br />And that is the main problem we are currently facing: Connectors are missing for a lot of important plugins, for example the JAXB plugins, the JavaCC plugins, the antrun plugin, and so on. The philosophy of the M2E developers seems to be that time will cure this problem, which is why they are mainly ignoring it. See, for example,
<br />bug 350414, bug 347521, bug 350810, bug 350811, bug 352494, bug 350299, and so on. Since my first attempts with Indigo, I am unaware of any new connectors, although the lack of them is currently the biggest issue that most people have with M2E. Try a Google search for "m2e mailing list connector", if you don't believe me.
<br />
<br />But even if the developers were right, they choose to completely ignore another problem: <span style="font-weight:bold;">You can no longer use your own plugins in the Eclipse automatic builds, unless you create a connector for the plugin, or create a project-specific configuration.</span> (Again, more on that configuration in due time.)
<br />
<br />At this point, one might argue: If you have written a plugin, it shouldn't be too difficult or too much work to write a connector as well. I'll handle that aspect below.
<br />
<br />First of all, regarding the configuration: Absent a suitable connector, there is currently only one possibility to use a plugin as part of the automatic build: You need to add a plugin-specific configuration snippet like the following to your POM:
<br />
<br /><blockquote>
<br /> <plugin>
<br /> <groupId>org.eclipse.m2e</groupId>
<br /> <artifactId>lifecycle-mapping</artifactId>
<br /> <version>1.0.0</version>
<br /> <configuration>
<br /> <lifecycleMappingMetadata>
<br /> <pluginExecutions>
<br /> <pluginExecution>
<br /> <pluginExecutionFilter>
<br /> <groupId>org.codehaus.mojo</groupId>
<br /> <artifactId>javacc-maven-plugin</artifactId>
<br /> <versionRange>[2.6,)</versionRange>
<br /> <goals>
<br /> <goal>javacc</goal>
<br /> </goals>
<br /> </pluginExecutionFilter>
<br /> <action>
<br /> <execute></execute>
<br /> </action>
<br /> </pluginExecution>
<br /> </pluginExecutions>
<br /> </lifecycleMappingMetadata>
<br /> </configuration>
<br /> </plugin>
<br /></blockquote>
<br />
<br />Neat, isn't it? And so short! This would advise M2E that I want the javacc-maven-plugin to run as a part of the automatic M2E build.
<br />
<br />So far, I have tried to be as unbiased as possible, but now to the points that make me sick. (As if that were currently required...)
<br />
<br /><ul><li>The space required for the M2E configuration typically exceeds the actual plugin configuration by far! If there ever was a good example of POM pollution, here's a better one.</li><li>M2E insists on the presence of such configuration, regardless of whether I want the plugin to run or not. If it is missing, then the automatic builder won't work at all. There is no default handling, as was present in previous versions of M2E. (I won't discuss what the default should be, I'd just like to have any.)</li><li>The M2E configuration must be stored in the POM, or any parent POM. There is no other possibility, like the Eclipse preferences or some file in .settings. In other words, if you are using IDEA or NetBeans, but there is a single project member using Eclipse, you still have to enjoy the M2E configuration in the POM. As bug 350414 shows, there are a real lot of people who consider this, at best, ugly.</li><li>I tried to play nice and start creating connectors. But this simply didn't work: I am a Maven developer, not an Eclipse developer. And a connector is an Eclipse plugin. I'm not interested in writing Eclipse plugins. (Which Maven developer is?) But there is nothing like a template project or the like, only <a href="http://wiki.eclipse.org/M2E_Extension_Development">this</a> well-meant Wiki article, which doesn't help too much. For example, it assumes the use of <a href="http://tycho.sonatype.org/">Tycho</a>, which only serves to make Eclipse programming even more complicated.</li><li>The design of the connectors looks broken to me.
Have a look at the <a href="http://git.eclipse.org/c/m2e/m2e-core.git/tree/org.eclipse.m2e.jdt/src/org/eclipse/m2e/jdt/internal/AbstractJavaProjectConfigurator.java">AbstractJavaProjectConfigurator</a>, which seems to be the typical superclass of a connector: It contains methods for configuring the Maven classpath, for adding source folders, for creating a list of files that have been created (or must be refreshed): <span style="font-weight: bold;">These are all things that are directly duplicating the work of the Maven plugin and should be left to the Maven plugin, or Maven, alone.</span> In other words:</li><li>Circumventing the Maven plugin is bad. Deciding whether to run or not should be left to the plugin, or Maven. (See, for example, the comment on "short-cutting" code generation on the <a href="http://wiki.eclipse.org/M2E_Extension_Development">Wiki page on writing connectors</a>.)</li></ul>
<br />To sum it all up:
<br />
<br />I fail to see why we can't throw away the whole connector mess and replace it with a configurable Maven goal that should be run by the automatic build? There is even a reasonable default: "mvn generate-resources". Let's reiterate the reasons for inventing connectors from above and compare them with this solution:
<br />
<br /><ol><li>Maven wouldn't be invoked any more frequently.</li><li>If a single Maven execution takes too long, fix the plugins that don't do a good job at detecting whether they can short-cut. Ant still does a better job here, years after the invention of Maven 2.</li><li>If some plugins don't behave well with regard to resources, fix 'em. If we can wait months or years for connectors, we might as well wait for bug fixes in plugins.</li><li>The question whether to run a plugin or not can be left to the Maven lifecycle: if we choose a lifecycle goal like "generate-resources", Maven knows perfectly well which plugins and goals to include or exclude.</li></ol>Jochen Wiedmannhttp://www.blogger.com/profile/09855969156780632315noreply@blogger.com12tag:blogger.com,1999:blog-8124028403626039195.post-50544858070353746362011-08-19T22:54:00.003+02:002011-08-19T23:04:26.510+02:00Alive - and kickingFor more serious matters: Last Tuesday I was struck by a left-sided apoplexy. The good news: I am alive. (Obviously.) I am at home, having left the hospital today. Using the keyboard is still very difficult, though. (Excuse any typos...) I need to get better over the next weeks to become ready for the job...
<br />Jochen Wiedmannhttp://www.blogger.com/profile/09855969156780632315noreply@blogger.com1tag:blogger.com,1999:blog-8124028403626039195.post-21058727537595163582011-07-25T08:36:00.001+02:002011-07-25T08:40:05.610+02:00Closing the ticketQuoting from a support ticket:<br /><br /><blockquote><br />As we are still reducing the cost of operations in the IT department, we are currently working on a limited number of service requests. As a consequence, we are unable to work on your ticket. Thanks very much for your understanding.<br /></blockquote><br /><br />I won't name the company. (And, just to make sure: No, it wasn't my employer.)Jochen Wiedmannhttp://www.blogger.com/profile/09855969156780632315noreply@blogger.com0tag:blogger.com,1999:blog-8124028403626039195.post-46193348285867692082011-05-22T15:42:00.005+02:002011-05-22T16:59:49.271+02:00Why Jenkins is better off as an independent organizationOne thing that has definitely moved me this year is the development around <a href="http://jenkins-ci.org/">Jenkins</a> / <a href="http://java.net/projects/hudson/">Hudson</a>. I never even used either (although I am quite sure that I will during the 20 years of my remaining professional life), so I cannot even tell why it was moving me, but I definitely followed with real concern. Maybe it was due to the well-known persons that are involved, including <a href="http://www.kohsuke.org/">Kohsuke Kawaguchi</a> (the guy who drove <a href="http://jaxb.java.net/">JAXB 2</a>) as well as the founders of <a href="http://www.sonatype.com/">Sonatype</a>, <a href="http://tasktop.com/">Tasktop</a>, and <a href="http://www.cloudbees.com/">Cloudbees</a>. Maybe it was caused by the front built between the opponents, consisting of an open source community and Oracle, a corporation that nowadays enjoys much more weight than it requires.
Whatever.<br /><br />One point that definitely interested me has been whether the respective projects would join a larger organization or not. As it currently looks, Jenkins has decided to stay independent and not join, for example, <a href="http://www.apache.org">Apache</a>. OTOH, Hudson will be moved to <a href="http://www.eclipse.org">Eclipse</a>. My expectation is that Jenkins will be better off with its decision.<br /><br />It's not that I'd vote against big organizations in general. For example, I believe that <a href="http://subversion.apache.org/">Subversion</a>'s move to Apache has been a good choice. In that case, the benefits of having a big daddy will outweigh the disadvantages, like the need to follow certain policies that are largely driven by a bigger community and close-to-corporate culture. I haven't got any personal experience with Eclipse, but I'd expect that both the benefits and the weak points will be comparable for Hudson.<br /><br />From my point of view, the power of Hudson/Jenkins is the unusual multitude of plugins. Name any source control or build system, programming language, repository or CMS: Chances are excellent that you'll find one or even more plugins that support it. This is most likely due to the <a href="https://wiki.jenkins-ci.org/display/JENKINS/Extension+points">architecture</a>, most likely borrowed from Eclipse, which has had a phenomenal success in this regard. Consequently, the more attractive Hudson or Jenkins can be for plugin developers, the more successful they will be.<br /><br />But fine-grained access rights, tight control over legal aspects of code that enters, and well-defined policies aren't exactly what a bunch of completely different plugin developers requires. On the contrary: The lower the hurdles for adding a new plugin or publishing a new plugin release, the more attractive the project.<br /><br />I can very well imagine that Sonatype, in particular, will do an excellent job in driving Hudson at Eclipse.
They have demonstrated their exceptional abilities with <a href="http://maven.apache.org">Maven</a>, <a href="http://tycho.sonatype.org/">Tycho</a>, or <a href="http://nexus.sonatype.org/">Nexus</a>. In the medium term, I'd expect Hudson to be more visually attractive, perhaps easier to use, and possibly to have a cleaner and more agile core. (Those are things they are doing really well.) But they won't be able to create and maintain plugins for just everything. My guess is that Jenkins will take the lead in terms of extension points (that's the part of the core that's driven by plugin developers), number of plugins, and hence applicability in different situations. It may very well be that Hudson can be the bigger commercial success, but Jenkins is big enough to counter.<br /><br />Whatever the outcome, it will be interesting to follow. :-)Jochen Wiedmannhttp://www.blogger.com/profile/09855969156780632315noreply@blogger.com2