Since mixxt was suggested as a social networking platform for the Webmontag, I took a look at it. I found that it does not even meet one minimum requirement: in my view, a social network must support TLS. You cannot expect users to log in over an unencrypted connection, really. Go and do your homework!
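For what it's worth, forcing encrypted logins is not hard. Here is a minimal sketch as a generic Python WSGI middleware – all names are illustrative, and this is of course not mixxt's actual stack:

```python
# Illustrative sketch only: a generic WSGI middleware that redirects any
# plain-HTTP request to its HTTPS equivalent, so login forms are never
# served or submitted unencrypted. Not mixxt's actual stack.
def force_https(app):
    def middleware(environ, start_response):
        if environ.get("wsgi.url_scheme") != "https":
            url = "https://" + environ["HTTP_HOST"] + environ.get("PATH_INFO", "/")
            start_response("301 Moved Permanently", [("Location", url)])
            return [b"Redirecting to HTTPS..."]
        return app(environ, start_response)
    return middleware

# Usage: wrap an existing WSGI application,
# e.g. application = force_https(application)
```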
In a senseless effort, Mozilla Corp. wants people to download Firefox 3 as often as possible. This means that many people might download Firefox many times just to set a record. I think this is VERY STUPID. First of all, it means you can forget about reading anything into the download counts: if some people download FF3 100 times, you cannot assume it is really used that many times. It just says NOTHING. It also produces load on the internet and slows down connections that are not as well equipped as others. And why all that? Just for an entry in the Guinness World Records book? This is harmful for the net and contradicts every kind of open source advocacy so far.
To stop this kind of behaviour I therefore declare Firefox 3 harmful and uncool and call on everybody to spread the word. Use alternatives like Epiphany, Konqueror or Opera, but do not use FF3; uninstall it and tell them WHAT YOU THINK: worldrecord At mozilla.com .
Firefox, by the way, has many security issues. People download random plugins which interact unpredictably with each other, so Firefox can be considered as insecure as Internet Explorer, or maybe worse, because you don't know who really wrote the code you just installed.
I just got an idea of what makes upcoming applications important: it's about “verification” and trust, just like the nice video about the older term “trusted computing”. Everything that happens on the web is an action or interaction: exchanging information, performing tasks, saving data, recalling information. We need to trust our storage media and the information pathways. We also need to verify that our data is intact, so that we do not experience data loss and so that the data cannot be manipulated and become inconsistent.
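To make the "verify that data is intact" part concrete: the standard approach is a cryptographic checksum. A minimal sketch in Python (the function names are mine, just for illustration):

```python
import hashlib

def checksum(path):
    """Return the SHA-256 digest of a file; store it next to the data."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path, stored_digest):
    """Re-hash the file later; any corruption or manipulation changes the digest."""
    return checksum(path) == stored_digest
```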
This is not merely a technical issue but also a political one. We need to verify whom we can trust, be it humans, companies or the government. Take the new basic right to confidentiality and integrity of data that was recently established in Germany by the Federal Constitutional Court.
I think that if we develop this basic right further, it may become essential for every interaction in what used to be called cyberspace, which is an extension of our ego and natural life.
Our situation is that we cannot verify. But if we cannot verify, we cannot trust, which means we act and interact without confidence. That is like living in a totalitarian state: our privacy gets stolen and our personal integrity is hurt.
It's not that we are newly in danger; this situation has been with us for a long time, and humanity experienced similar attacks long before the computer was invented. It is about control, about power. Those who control the pathways control the people and what they think.
What we need to accomplish is to regain control for each individual over every action and interaction. This would give the power back to where it belongs. Control over information about an individual exercised from outside that individual should not be accepted at all.
People like to search, and often they search for specific things: a Wikipedia article, the Amazon product database, a movie on IMDB, or their social bookmarks. I have thought about this for a while. Firefox lets you install search plugins to make selecting a search engine more comfortable, and Epiphany allows you to define “intelligent bookmarks”. But is all that really intelligent?
- You should not need to install a random plugin on your system or browser. An installation is like an operation on a human: there is always a chance that something goes wrong or you get infected. Also, what Firefox gives you is a selection of websites, but the options to search are ENDLESS, which means the search plugins menu could grow without bound and you could end up with a never-ending list of plugins.
- Defining intelligent bookmarks isn't always easy, especially when the search isn't simply URL-based but hidden in a search form.
How would a really intelligent search work?
I don't know how you search, but I often do something like this: I look for a technology or a product – let's say I search for a USB microphone. This means I need to know what makes a good microphone, I need some customer opinions, and also a comparison of prices. Such a search might involve tech sites, Wikipedia, review sites (like dooyoo or ciao.com), online shops, and more. The problem with a general search engine is that it doesn't understand my search. How could it? I think that's only possible with user collaboration, when users give feedback about their searches. The problem there is that people leave a search site when they browse other content. What could we do? I think the only possibility is to integrate intelligence into the browser itself. I should be able to save “search paths” in my browser – maybe not bookmark a page, but mark a sentence that gives me an answer and link it to the question. So you might start by just typing a question into your local browser, which uses a desktop search engine to look up whether there is a similar question. You might even get saved answers by typing in the question – like “How much euro is 1 US dollar”, or you could just say “euro dollar”. Typing “time” could give you the time. And when the desktop does not know what you mean, or you explicitly tell it to search online, it could try to identify your search (a toy sketch of this follows the list below), like:
- euro, dollar: both are currencies, so the user most likely wants to see their relation (the exchange rate).
- usb microphone: the desktop could know, or look up in some database, that this is a technical product. The question would be whether the user wants to understand how it works, wants to get one working on their computer, wants to buy one, or wants recommendations.
- About recommendations: users could interactively say which recommendations they like, or could trust other users (friends, colleagues) on which recommendation sites might be helpful.
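As a toy illustration of this "saved search paths" idea (all data and names below are made up, nothing here is an existing API):

```python
# Toy sketch: the browser keeps a local map from questions to answers the
# user has marked before, and only guesses the intent (or goes online)
# when nothing is saved. All data here is made up.
saved_answers = {
    ("dollar", "euro"): "your previously marked answer about the EUR/USD rate",
}

topics = {"euro": "currency", "dollar": "currency",
          "usb": "technology", "microphone": "product"}

def local_search(query):
    words = tuple(sorted(query.lower().split()))
    if words in saved_answers:                 # a saved search path wins
        return saved_answers[words]
    found = {topics[w] for w in words if w in topics}
    if found == {"currency"}:                  # euro + dollar -> relation
        return "two currencies: the user most likely wants the exchange rate"
    if "product" in found:                     # usb microphone -> product
        return "technical product: explain it, help set it up, or recommend one"
    return "unknown intent: hand the query over to an online search"

print(local_search("euro dollar"))
print(local_search("usb microphone"))
```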
Maybe Wikia is now on the way to implementing this, but personally I strongly believe that the important part has to be the browser or desktop search engine. It can then link to specialised searches like Technorati – but rather by fetching the content than by opening a website. I think opening a website should be the last resort. It doesn't make much sense to load tons of websites onto a local computer without needing all that material – and why load a web page, strip out the ads as well as possible, and then search for the real content? This all comes from too many crappy business models based on advertisements, while it takes much more of our valuable time and makes getting the information we want or need much too hard.
In the last months and weeks we have seen an increasing number of announcements of Linux or Open Source projects for mobile devices. Even GNOME announced such support. Now Nokia has taken over Trolltech, the backers of KDE, and the scene looks different. It is true that mobile devices are interesting. But it is also true that the whole technology industry is trying to get a stake in this market. I haven't counted all the projects that want to be a common basis for mobile Open Source (somehow including Google's initiative), but one thing should be clear: if you think you can win over all the free coders by creating yet another mobile initiative, you will likely not succeed. It's just plain stupid – all those companies that even decided to lay off smart guys like Dave Neary and think they will profit in the future at no cost for developers.
Sure, building on Open Source is a smart move – but I tend to like those companies more that are willing to pay their developers a fair amount to develop good apps, instead of waiting for the community to fix things and selling crappy devices with customers as beta testers. I think a good mix is possible: pay Open Source developers and let other companies benefit from your work. But somehow I also like that the proliferation of projects leaves only one way open: the mobile initiatives must agree on some more common base than just the Linux kernel. Maybe this also means KDE and GNOME have to put their heads together now that Nokia controls both of them (maybe that's too harsh, but still funny).
From you Open Source guys I would expect a smart move similar to what the kernel developers did with virtualisation: don't support one single mobile initiative. You must stay independent and not repeat such a fatal choice as GNOME made by endorsing Nokia's initiative so early. The real grassroots Open Source projects have to organize a more non-commercial base, which leads to more freedom for customers. We do not want to see developers and users locked into a specific platform.
I am curious to see what Nokia will do, and I really hope that neither KDE nor GNOME will be hurt by its power. If we are lucky, this could lead to more cooperation between the free desktop projects, and maybe Nokia is smart enough not to misuse its power but to open up more. I won't bet on that – they surely need some pressure from the base, and the base needs to organize itself. It now seems clearer that GNOME and KDE not cooperating more has made them more vulnerable, and could even make them irrelevant if they just become part of one commercial initiative among many. And maybe, maybe it would be time to start a new desktop initiative from scratch, one that could again collect attention and support from many projects without being attached too closely to one specific company?
Now what could be foreseen by those who have some knowledge about wikis has become reality: users and content on Wikipedia are not growing as they have in the past (see link). Why is this happening? First we must ask: why did Wikipedia work? Because everybody could contribute, and every contribution was more or less welcome.
What happened? The reductionists took over – journalists, or just people who like to control others – and they implemented a DEMOCRATIC system to control the content and the users. So it switched from anarchy to democracy. But wikis live by their anarchistic nature, not by democracy. Democracy works by majority control, and also by the control of a few over others. Anarchy, on the other hand, relies on people's self-responsibility and plays with rules that can change daily. But once rules are written in stone, some people start to tell others what they may do and what not.
Actually this turns a lot of people off – as it does now in Wikipedia. One could already see these problems in Wikinews, which has become an irrelevant source for news because it was created with many rules inherited from Wikipedia. Google News shows a total of 45 results for the word “wikinews”, most of them merely mentioning Wikinews as a news source, while Indymedia has a count of 276. This is also reflected by searches in Google Trends. One cause is that Wikinews tries to keep a neutral point of view and doesn't like original reporting. Neutrality is a nice thing, but it does not fit the wiki principle. They don't give articles the space to grow, so nobody cares. Indeed they seem to care more about the principle than about good news.
Jeff Waugh recently announced that GNOME.org now uses WordPress MU for blogs. What looks like a smart move is actually not very smart. Why? It was decided to use a new CMS for gnome.org – if Drupal had been chosen, it would have been a matter of minutes to create a subdomain with blog functionality. And there still is the GNOME planet. GNOME has not solved the main problems the site has had for years now, which are:
- Content is outdated.
- Many news items are not present on the main page.
People have been working on this front page for years now. From my perspective, the way it should have been done is:
- Use Drupal as a good base
- Start with the main page www.gnome.org and link to old content.
- Slowly move content into Drupal, one piece at a time.
- Focus on new material like news; integrate Planet and blogs, making a selection of the most interesting blog posts; separate content for users, developers and the general public.
- Enable projects to make their own pages (replace /projects)
- Focus on user logins, so that if you log in you can comment on blogs, participate in projects, access hidden areas and get more rights to add content.
With a given layout and some work, one would have been able to replace the main page in less than a week. One could have started by moving that page 1:1 into the CMS (and so have a theme which represents the current state).
It is unfortunate that the switch to a new CMS has taken so long. My help with guadec.org was not accepted (though the result looks good and also shows Drupal's great flexibility). Drupal may not be the greatest CMS, but it gets things done, and that I think is most important. I am tired of outdated sites, and of sites that never get finished.
In the WGO (www.gnome.org) decision cycle, one of the errors made was that everybody could add requirements, and only after that was a specification set up. Drupal was dismissed by Quim Gil for the following reasons:
- Localization: although the i18n module has made a lot of progress, it's still miles away from our requirements. If wgo were just in one language I would probably bet on Drupal (as I have done consistently in the past). But thinking of a scenario of wgo translated into 20+ languages in one year, I see a lot of risk around Drupal. Either we would need to develop a lot of hacks and additional features, or we would have to design a fragile workflow – not convincing the i18n team, and/or with a high probability of ending up in a low-quality mess and a lot of unproductive work. Drupal is conscious of its weakness in the multilingual field and is slowly putting a remedy in place (see the recent thread at http://drupal.org/node/88417 ), but this process will take long, and we need a multilingual wgo before then.
- Look&Feel. We would get what we want.
- Learning curve. It's easy to get from zero to editor level, and not difficult to get from intermediate admin/hacker level to advanced. It would be appropriate to our needs, I think.
- Security and upgrades. Vulnerabilities are quickly found and generally quickly fixed with maintenance releases. Upgrades are made with scripts, and currently it's quite a straightforward process. There are no big problems with the core CMS, but the maintenance of the contributed modules is another story (e.g. the diff module was abandoned by its maintainers, and now we would need to port it ourselves in order to use it in the latest versions; these kinds of stories are not that rare).
- User management. Permission levels are more fine-grained than before, and there is an LDAP feature. I believe other candidates are much stronger in this field, but we don't expect to have complex permission policies in wgo either. It would probably do the job.
- Contributors around. Definitely the best asset. Over the past months many Drupal fans have become interested in GNOME, and/or the other way round. Many GNOME-related sites are using Drupal, and there is some expertise around. However, we have also had difficulties getting real commitment for our Drupal sites when we needed it, and the level of complexity of these websites is not extremely high either. These two things make me think that although we might have many potential volunteers around, in the end I'm not sure we would find the resources needed to hack a Drupal installation to the level we want to achieve.
Those were the stated reasons; Plone was chosen in the end.
Looking back, I think the most underestimated factor is the learning curve, and Quim did not see that. From my perspective, Plone surely has the steepest learning curve of all the CMSes I have looked at, and I know many projects that say the same. And since we did not have a new CMS at 2.18.0 or 2.18.1, and still don't at 2.18.2, one can and should say that people were not able to move quickly enough to a new CMS with Plone. So choosing Plone was clearly the wrong decision, as the goal was to have a new CMS ready no later than the 2.18.0 release. My experience is that Drupal does not do everything perfectly, but it provides all the important tools, and one can quickly start putting content in. About the lack of really good localisation: if I look at the situation right now, we still do not have localized pages, and are far from having new pages at all. Drupal also moves forward and gets better, so I guess by the time localisation would have become a real issue, Drupal would have supported it better – and one could also have won over some people working on it to help GNOME.org. It is easier to fix one module than to fix a whole CMS concept (Plone will never be easy to use!). It is sad to say that GNOME.org is stuck now, and to see that exactly what I feared has happened – and even worse.
Personally, I would welcome more self-criticism in that sense. It's not a bad thing – everybody makes errors – but people who ignore their own errors will repeat them. I fear most of the people who made this decision will not admit that it was the wrong one. It has actually turned some people away from participating.
I am very much a friend of software like the Rosetta translation portal. Many translations still happen over email, which takes long – although maybe people learn more in the communication process. But that's not my point. I wish more software would try to combine common results. For example, common translations: if somebody translates a menu entry in KDE or GNOME, you would have the option to just reuse it in Firefox, or the other way round – immediate worldwide collaboration. Chat should be integrated (take Jabber!). The same could be true for bugs: there are bugs with the same root cause in many distributions and also in GNOME's or KDE's Bugzillas, but still there is no direct data exchange. Why should a user have different Bugzilla accounts? Couldn't we have one developer ID (with OpenID?)?
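As a sketch of what such a shared pool of translations could look like (the in-memory store and all names below are illustrative, not Rosetta's actual API):

```python
# Illustrative sketch of a cross-project translation memory: a translation
# contributed in one project can be suggested to any other. The dict stands
# in for whatever shared service the projects would agree on.
memory = {}

def submit(source, lang, translation, project):
    memory.setdefault((source, lang), []).append((translation, project))

def suggest(source, lang):
    """All known translations of this string, from any project."""
    return memory.get((source, lang), [])

submit("Open File", "de", "Datei öffnen", "GNOME")
# Firefox (or KDE) could now reuse the GNOME translation directly:
print(suggest("Open File", "de"))
```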
This is something I would like to describe as Web 3.0. It's not that I liked the Web 2.0 hype, but since we have this term, I use “Web 3.0” to say that there is much more than just nice graphics and AJAX. If you see the web as a real WEB, it is sad that so few interwoven websites exist. Web 2.0 has some nice features, but it does not solve collaboration problems. Most of Web 2.0 is about interactivity, but what we really need is automated collaboration. And again I think the old Unix principles could be an example of how things could work.
I am talking about this in the light of helping free software projects, because I see how much time is wasted doing things that have surely been done and said many times before. And that is because web solutions are separated. Web 2.0 tried to solve this by making websites stronger and easier to use. This is good – and we should not walk it back – but I feel there is a huge overweight on improving usability, while less work has been spent on making repeated work unnecessary.
So we now have dozens of independent social bookmarking systems and wiki solutions, but they do not talk to each other. We have to maintain hundreds of accounts, and if we switch providers, we have to start from scratch. If we are lucky, the new provider has written an importer for the old provider's data, but this is rare and mostly only done if the data lives in a market-dominant tool. This is what Salesforce.com did for Excel spreadsheets. Even on the desktop, OpenOffice.org did not care about the Gnumeric spreadsheet format; both prefer to use Excel for exchange. One could wonder what FLOSS applications would do without Microsoft.
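What is missing is a provider-neutral export format. A sketch of how simple that could be for bookmarks (the format below is invented here for illustration; no real service defines it):

```python
import json

# Sketch of a provider-neutral bookmark export: if every service could emit
# and read something this simple, switching providers would be painless.
bookmarks = [
    {"url": "http://www.gnome.org", "tags": ["desktop", "floss"]},
    {"url": "http://drupal.org", "tags": ["cms"]},
]

def export_bookmarks(path):
    with open(path, "w") as f:
        json.dump(bookmarks, f, indent=2)

def import_bookmarks(path):
    with open(path) as f:
        return json.load(f)

export_bookmarks("bookmarks.json")
print(import_bookmarks("bookmarks.json"))
```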
If one watches how things develop in the web sphere, one can see that the web in general is lacking the one feature that is currently most exciting in the software world: actually working distribution. The web today has some major problems:
- Most of the content is not free, so distribution is not free
- The rise of DRM just makes things more complicated
- There are no good ways to distribute content
- There is no versioning of content across websites.
On the other hand, the web faces major tasks:
- A growing number of users makes it more and more important that every website is able to cope with a slashdot effect.
- Content gets mixed and mashed up more and more, as this is a promising way to reduce costs and promises more flexibility in the future.
There are software projects like Mercurial and rPath that build heavily on distributed systems. This enables maximum flexibility and also a very easy way to integrate content and patches.
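The core trick such tools share is content addressing: a version is identified by the hash of its own content, so any mirror can store and verify it independently. A toy sketch (the dict stands in for real storage; nothing here is Mercurial's actual code):

```python
import hashlib

# Toy content-addressed store, the principle behind distributed tools like
# Mercurial: content is keyed by its own hash, so the same version has the
# same address on every mirror, and integrity can be checked anywhere.
store = {}

def put(content: bytes) -> str:
    key = hashlib.sha1(content).hexdigest()
    store[key] = content
    return key  # the version's universal address

def get(key: str) -> bytes:
    content = store[key]
    # Verification comes for free: re-hash and compare against the address.
    assert hashlib.sha1(content).hexdigest() == key
    return content

v1 = put(b"first revision of an article")
v2 = put(b"revised article")  # a new version gets a new address
print(get(v1))
```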
But as far as I can see, the most you can expect from a web server today is that it does versioning itself, or that you may be able to build a network of web servers you own to distribute requests. These are all solutions of control; at best that works with Web 2.0. Right now we have big companies like Google, Amazon and eBay that own content and technology, and we are getting more and more dependent on them. For new companies it will become more and more important to give users the same ease of integration and level of trust without these dependencies on very few companies. Users would also be happy to have more choices. I hope that we will soon see a major revolution in how the web is built and used. If not, I fear that if one or two of those big names do something wrong, the effects will be disastrous (millions of shop owners will lose their economic base, and major functions that everybody relies on will stop working).
The truth is that the greatest danger is always that only a few control the many. The web was not meant to depend on only a few. What we have now is the result of only a few companies being smart enough to deliver (Web 2.0) to the users. Our task now is to build the next-generation web that gives us all more freedom and independence.