Jabber is getting more and more attention these days. Most freemail providers now offer it as a chat protocol, and its integration into different web software keeps advancing, so I tend to think it might become THE internet protocol. Why? Well, it allows communication between desktop applications and server applications. It also allows communication between servers, it supports complex messaging, and it is not full of spam. So I could imagine that we will eventually see some kind of Jabber mail as the new mail standard in the future, one that also allows attachments or voice while keeping a simple form of addressing. It will NOT replace HTTP as a web browsing protocol, for sure, but maybe it will for things like the exchange of small personalized bits of information. Jabber is extensible rather than highly specialized, so it can be used for many different purposes, and it is based upon XML, which again makes it more flexible.
I don't think it is a better protocol for things like serving web pages, if you think about the web the way we are used to. But look at the problems of the web, like authentication, and how people had to build solutions like OpenID to solve at least some of them. Now imagine a browser like Firefox could authenticate via Jabber: that would also give you a unique and open identifier. The only problem is that browsers speak HTTP and FTP, and generally talk to sites anonymously and unencrypted. Other interesting cases are RSS feeds and calendars. Right now we are used to fetching that meta data mostly via HTTP, but this means we fetch anonymous data, unless we integrate HTTP authentication into the process, and HTTP authentication is not really comfortable.
So generally Jabber could act as THE authenticator protocol, but it could also be the protocol for getting new meta data we care about: either a client requests new meta data, or it receives a message with the new or changed data. I guess most RSS feed readers currently work by repeatedly fetching one RSS file and then showing the changes in the aggregator. Also consider that Jabber could fetch simple diffs of the data; since the client software can maintain the full information, it could insert the new data into the existing set. This would also help mobile devices fetch data in less time. And it would mean that users get personalized data without having to log into web sites from a mobile device; they would rather subscribe to a site via Jabber.
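The diff idea can be sketched without any Jabber specifics. In this toy sketch (all names here are invented for illustration; a real client would receive the incoming entries as a Jabber message payload rather than as a dict) the client keeps the full feed locally and only merges the entries it is missing or that changed:

```python
# Sketch of diff-based feed updates: the client stores the full feed
# and only merges entries it has not seen yet, instead of re-fetching
# the whole RSS file every time.

class FeedStore:
    def __init__(self):
        self.entries = {}  # entry id -> entry data

    def merge(self, incoming):
        """Insert only new or changed entries; return the ids that changed."""
        changed = []
        for entry_id, data in incoming.items():
            if self.entries.get(entry_id) != data:
                self.entries[entry_id] = data
                changed.append(entry_id)
        return changed

store = FeedStore()
store.merge({"post-1": "Jabber as THE protocol", "post-2": "OpenBSD first steps"})

# A later "diff" message carries only one changed and one new entry:
changed = store.merge({"post-2": "OpenBSD first steps (updated)",
                       "post-3": "Offline desktops"})
print(changed)  # only the new/changed ids, not the whole feed
```

The point for a mobile device is that the second message is a fraction of the size of the full feed, while the local store still holds everything.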
And there is another, related thing that I think could come true: the always-on metaphor will become less important in the near future. Why? Because it will often be better to fetch large amounts of data via wireless LAN when you are near a hub than to always be available and download data through 3G or any other “fast” new network. I do not think the new phone standards will come anywhere near wired or WLAN speeds; both technologies move forward, but phone standards will never be faster than WLAN standards. On the other hand, devices get faster processors and larger disks. So what I guess will happen is that your device will be able to contain something like a complete Wikipedia, and nobody will really be so foolish as to browse Wikipedia online through a mobile connection only; instead you browse it offline, with no or at least a very limited connection to the outside. The connections should be encrypted and individualized, so that each device only gets what it is missing, and only when it can fetch large amounts of data cheaply.

So instead of the computer we got used to, which is more a terminal to the real data on the internet, the computer will be used more directly, the mobile devices AND the desktops. That is also why I think the offline desktop will become more important, not less. This is also a matter of security. What a user should want is a computer that reduces the need to connect to the internet and to visit random websites of unknown content or status. It is generally a better idea to just have what you need in a controlled environment instead of having to import random data from endless sources. Funnily, on desktops more computing power is invested in searching data that is stored randomly, plus internet content from email, web or chat, than in storing data in a meaningful way.
So I download a PDF to some random location (I have to make a choice), and then I have to use a desktop search engine to find it again. That is like putting the letters I get into random folders in my office and then having to search every folder every time. OK, computing power helps in finding such documents, but wouldn't it be better if the data you fetch were already organized, so you would not depend on the logic of a search engine to find it again? The thing is that the category you associate with a document might appear neither in its name nor inside the document itself.
But again, the free software desktops are much too conservative to think of such a solution. So OK, Apple will do it, and the free software desktops will follow five years later. Free software desktops rather think about more bling-bling instead of helping the user and being ahead for once. I think this is also because free desktops generally have no grand vision; coders worry more about deadlines, fixing stuff, or doing something cool. What I just described would require people who want to do it and who can put different resources together. I don't think it is a huge task; in fact I think it could be done rather easily with some tweaks. For example, GNOME's Epiphany could, on download, neither save a document automatically nor ask for a location, but instead start an import wizard that suggests categories and lets you add categories and text. To retrieve the document later you would search by date, filetype, category or tag. You should also be able to gather different documents under one label, so you could have graphics, PDFs and ODFs saved separately but retrieve them in one view with only a few words or clicks.

The old folder content view cannot help us any more as data keeps growing. But it is wrong that, instead of fixing the data storage metaphor, we create more and more apps that each maintain their own database of the files you have. On GNOME you can have Beagle, Tracker and F-Spot all indexing your hard disk into three different databases; that is stupidity, not intelligence. In open source we should have every possibility to share technology intelligently and, when we develop, to also think about other apps. On GNOME, at least, my impression is that many projects go their own way because the core desktop was devalued intentionally, and potentially core apps did not get any support from the core GNOME developers.
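The import-wizard idea can be illustrated with a tiny sketch. Everything here (the class, the field names, the sample files) is invented to show the concept, not how any real desktop indexer works: documents are stored once with tags and a filetype, and retrieved by tag or type without caring which folder they landed in.

```python
# Toy sketch of tag-based document storage: import once with categories,
# then retrieve by tag or filetype across separately stored documents.

class DocumentStore:
    def __init__(self):
        self.docs = []  # each doc: {"name", "filetype", "tags"}

    def import_doc(self, name, filetype, tags):
        # In a real desktop this would be the wizard step that suggests
        # categories on download instead of asking for a folder.
        self.docs.append({"name": name, "filetype": filetype, "tags": set(tags)})

    def by_tag(self, tag):
        return [d["name"] for d in self.docs if tag in d["tags"]]

    def by_filetype(self, filetype):
        return [d["name"] for d in self.docs if d["filetype"] == filetype]

store = DocumentStore()
store.import_doc("invoice.pdf", "pdf", ["work", "2008"])
store.import_doc("logo.svg", "svg", ["work", "graphics"])
store.import_doc("notes.odt", "odt", ["private"])

# One "label" view gathering documents of different types in a few words:
print(store.by_tag("work"))
print(store.by_filetype("pdf"))
```

The design point is that one shared store answers all these queries, instead of three applications each building their own index of the same disk.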
I think a new desktop vision should primarily focus on what people need to work with computers today, and on how the computer could make that easier. But let's forget for a while how computers work today. Maybe this would mean writing many parts from scratch, like killing all file dialogs, because they are mostly unnecessary unless you want to export data.
I just got an idea of what makes upcoming applications important: it is about verification and trust, just like the nice video about the older term “trusted computing”. Everything that happens on the web is an action or interaction: exchange of information, tasks, saving of data, recalling of information. We need to trust our storage media and the information pathways. We also need to verify that our data is intact, so that we do not experience data loss and the data cannot be manipulated and become inconsistent.
This is not merely a technical issue but also a political one. We need to verify whom we can trust, be it humans, companies or the government. Think of the new basic right to confidentiality and integrity of data that was created in Germany by a high court.
I think that if we develop this basic right further, it may become essential for every interaction in what was formerly called cyberspace, which is an extension of our ego and our natural life.
Our situation is that we cannot verify. But if we can't verify, we cannot trust, which again means that we act and interact without a confident feeling. That is like living in a totalitarian state: our privacy gets stolen and our personal integrity is hurt.
It is not that the danger is new; this situation has been here for a long time, and humanity experienced similar attacks long before the computer was invented. It is about control, about power. Those who control the pathways control the people and what they think.
What we need to accomplish is to regain control for each individual over every action and interaction. That would give the power back to where it belongs. Control of information about an individual from outside that individual should not be accepted at all.
I still haven't tried other BSDs. Why, you may ask, did I start with OpenBSD? It all started with the problem that I had an old Fedora Core 4 which was really unstable (kernel panics). I then installed OpenBSD back in July 2007. The interesting result: although I did not know much about OpenBSD, I got it working AND it just ran. It felt like it looked the same every day. One of the core principles of OpenBSD is to install fewer packages and to have these secure. Fedora OTOH installs quite a lot and rather suggests graphical administration (classical Red Hat style). Fedora also suggests never upgrading a release but always reinstalling from scratch. For my usage this just wasn't useful. The only way would have been to make a partition for the data (it was supposed to be a Samba server) and then always erase all system settings. But system settings are essential. So from my perspective an OS that suggests always reinstalling from scratch cannot be taken seriously.

My alternatives were either other BSDs or Debian. Why not Debian? Because Debian is always outdated, so I would have to live with outdated software for many years. As even software like Samba is evolving quickly, I don't want to miss additional features. Why not other BSDs? 1.) OpenBSD has a good reputation in security. 2.) FreeBSD is the BSD most similar to Linux and also has some hype, but as I want something different I don't want a big thing but something small. 3.) OpenBSD is rather aggressive in demanding open source driver support from hardware vendors. Although Theo de Raadt seems to be somebody who knows how to make himself enemies, I like that he speaks up and has an opinion. I am a Linux guy, I like and prefer the GPL, but I respect the work of OpenBSD and think they did some great work. In September I had the possibility to chat a bit (just too short) with an OpenBSD guy (Bernd Ahlers) at our Linux day in Kiel, which I helped to organize.
In fact the OpenBSD people said they would come to Kiel in the same week I was considering it. So this was one of the reasons I said: OK, if they come, I'll try it and maybe have the chance to ask some questions.
I did not have the chance to really discuss it, because I was too involved in helping with the event. And I did not have many questions anyway, as OpenBSD just did what I expected.
Tomorrow I am going to install OpenBSD for the first time for a customer, where it shall also replace a Fedora system. Same background: the hard disk is full, but should I install Fedora again? If you go to a Fedora channel and ask about FC 4 or 5, people just laugh at you. On systems that should run for a long time without frequent upgrades, it is much nicer to have a system you can fix with the help of the distribution. I don't particularly like compiling from source, but OpenBSD gives me ports, which it copied from FreeBSD.
I have also had some experience with Gentoo, but my impression is that it does not really take care of its packages. The worst thing they did was to mask fastcgi and suggest fcgi. For somebody who is only a part-time Gentoo user this resulted in some time offline. I really expected fcgi to work just the same with my setup (small FastCGI processes of my MoinMoin wiki that Apache connects to), but fcgi does not support that, or at least must have a totally different syntax. I don't accept drastic changes from one day to the next that require me to make decisions or to learn how not to do what Gentoo suggests; I would rather trust a distribution to know better than me that something is better AND compatible. So Gentoo for a server was a no-go for me, too.
So far I could deal OK with OpenBSD. I have asked some questions on #openbsd and so far found the people there always helpful. They are not guys who will always answer your question the way you expect; sometimes they rather tell you that you don't want to do something if you are asking such stupid questions. But this is OK, because then I don't end up with a system state I can't handle. If you are reading this because you are wondering whether you want to use OpenBSD, I would say: if you come from Windows, OpenBSD might be too different. But if you are an administrator who is willing to learn and looking for a system that is easy to maintain (easy not in the sense of comfortable, but in the sense that you can maintain what you want without a lot of compromises), then OpenBSD might be for you. If you come from Linux, I suggest you try to forget most of what you learned. Although the principles are the same, OpenBSD doesn't use systems like the System V init scripts. After you have installed a server package you will have to add start commands to /etc/rc.local. That's not hard: mostly OpenBSD packages tell you after installation what you can use, or you find that in /usr/local/share/doc/*. If you know about scripting this isn't a problem anyway. So the startup process in OpenBSD tends to be simpler, as System V scripts tend to be rather complex.
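As an illustration of how simple that is, the additions to /etc/rc.local are usually just a few lines like the following (a sketch for a hypothetical Samba package; the real binary path and flags come from the package's post-install message or its files in /usr/local/share/doc/*):

```shell
# Illustrative /etc/rc.local additions; path and flags are examples only.
if [ -x /usr/local/sbin/smbd ]; then
        echo -n ' smbd'
        /usr/local/sbin/smbd -D
fi
```

Compare that with a typical System V init script of a hundred lines or more doing essentially the same thing.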
Another priority is to have 100% of the system documented with manuals (the man command). This is very nice if you don't have an internet connection: OpenBSD offers you all the information you need to fix a problem. Many Linux distributions don't have that. Debian has extensive docs, but more often in /usr/share/doc/*, and Debian packages are often very different from each other; the postfix maintainer and the exim maintainer (both MTAs), say, make very distinct packages and ask different questions. On OpenBSD all packages are installed in more or less the same way, AND you can expect every command to have a manual.
For desktop systems that is not always needed, also because many graphical apps cannot be documented fully with text alone. But I have found this “feature” EXTREMELY helpful for administering. And I know that if I am alone at a customer's site without the possibility to do extensive internet research, I will be able to find everything I might want to know inside the system I am working on. Fedora, for instance, is rather bad in that respect.
So to summarize: OpenBSD looks very clean. I have encountered some problems with the ports; some seem to have a dependency problem (like Moin and Python). I am not sure why that is the case: either I did something wrong or OpenBSD needs to work on that part. OpenBSD is not something that “just works”, so if I need to tweak this and that, it is OK as long as the things I depend on (the core OS) work as expected. I was able to upgrade my newest install from 4.2 to -current. I needed to compile the userland and a new kernel, which took about 2 1/2 hours on a 2 GHz system. In fact I think I should have only needed to update three packages, but I found that out too late, and I was also interested in how long all this would take.
I was happy to see that all I learned on Linux was not for nothing; in fact I was able to do some things differently because I knew what the documentation was trying to suggest and how to do it quicker or better. In some situations I still rather like to follow the docs word for word.
I expect this new installation to be much more stable and clean than the Fedora system. We often had problems with Samba there. In my own trials with OpenBSD, my Samba access was four to five times more stable and felt more solid. I don't have any stats, but that is what I experienced.
What next? Next I would like to try out MINIX and NetBSD. MINIX because it should boot a lot faster; I will watch how its package progress is going. For now I stick with Foresight, also because its package manager is really cool: with the Conary package manager you can go to an unstable system and then back. Don't try that at home with any other Unix/Linux besides rPath! NetBSD I would like to try to see where it differs from OpenBSD and how it “feels”.
People who read this blog know I somehow like OpenBSD for some reasons. One passage in the FAQ that I just read struck me, as it is what I also think:
In fact, as our hope is to continually improve OpenBSD, the goal is that -current should be more reliable, more secure, and of course, have greater features than -stable. Put bluntly, the “best” version of OpenBSD is -current.
That is what I always thought was where Debian sucks. They have software in the stable branch that is many years old. They try to fix some security issues with bug fixes, but the fact is that many early versions of software are broken by design and that newer software is very often better. It is not always a real security leak; sometimes early releases simply do not require or provide some level of security. So to think that old software with no reported bugs or leaks is better than a new version is just false and also dangerous. Old software lets attackers work for years on discovering vulnerabilities, which is much harder against moving targets.
And about security in general: statistics can sometimes help, but in the end everything must come down to very practical issues. For example, some people think it is necessary to run a full-scale firewall on every webserver. This might make sense for some installations, but often it is overkill, and some measures, like prohibiting password access via SSH, are much more important than blocking all but a few TCP/UDP ports. Security is a very relative term. On the one hand you can make your system infinitely insecure, even with the most secure OS; on the other hand you can invest endless time to make your system ever more secure. I would vote for “practical security”, which means that your system should be a bit more secure than you actually need. It also depends on how much money you have. Security is not only about how much YOU should do to your system; if you have a cash-cow web server, please pay some good people to take care that it is secure. Think about how bad it would be if you lost data. If it is not bad at all, you don't need to do much; mostly you want to make sure that your mail servers are not abused by spammers and that it is not easy to access your system. So please beware of this situation:
- All users (also mail users) are system users (this alone is not fatal, but…)
- They can change their own passwords and…
- They have SSH access
This would mean that a simple user name like “john” could give access with the password “1234”, and then some very simple SSH hacking gets onto your box. Then you had better have a really secure system, because if this happens, an attacker has every possibility to work on its vulnerabilities. This may sound silly to some people, but I think these things are probably more widespread and more problematic than a slightly outdated ImageMagick. Not to suggest you shouldn't update ImageMagick, but some scenarios are more likely than others and should be looked at more closely.
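One concrete way to defuse that scenario is to turn off password logins in /etc/ssh/sshd_config entirely and allow key-based authentication only (a minimal sketch; make sure your own key actually works before restarting sshd, or you lock yourself out):

```shell
# /etc/ssh/sshd_config (excerpt): keys only, no password guessing possible.
PasswordAuthentication no
ChallengeResponseAuthentication no
PermitRootLogin no
```

With that in place, a weak password like “1234” on a system account is no longer reachable over SSH at all, which matters more in practice than most port-blocking rules.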