Tag Archives: Wikipedia

Branched Wikis

The more I work with wikis, the more I find that one important thing is missing. Much work goes into making wikis look like they are not wikis at all, while wiki development itself is hardly progressing. The most important development of the last years, from my view, is the invention of the common wiki markup language Creole.

What I am missing is the possibility to maintain branches of content, just like version control software works: you check out some content and then pull in changes. For example, you check out the content of Wikipedia.de, work on a local copy and then merge in changes from the source. New articles would simply be created, and articles you have not touched locally could simply be removed when they are deleted upstream (you may want an option to prevent this, given that a lot of good articles get deleted). Maybe you also just select to import the content of some articles. Say you have a website about composing music and you want to show some articles in that context rather than link to Wikipedia, but you would like to add some content to the articles or remove some sections. Now the source article gets updated. Today you have to look at the changes on Wikipedia and edit your version by hand. That is plain stupid, given that version control software like Mercurial already allows you to maintain branches of content. So we have all the software we need to merge the content intelligently and automatically, or to get notified where your software or wiki needs your interaction and attention.

This feature could perhaps even be extended to merge different articles on the same topic. Maybe we need better algorithms to recognize similarities. The software could display two versions side by side and/or show you a mixed version of the two articles, and mark the sections it thinks say the same thing. In an article about a person, for example, two articles might both mention who that person's parents are; this similarity can be used to ease the mixing of the content. Maybe one can develop new approaches by adding the following principles: object orientation and enriched content. Currently wikis contain a lot of free-flowing text. It is segregated into sections, sometimes without the software being able to identify the content.
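To make the branching idea a bit more concrete, here is a minimal sketch in Python of the per-article merge decision such a branched wiki could make. The function names and the whole setup are made up for illustration; a real implementation would reuse a proper merge algorithm like the one Mercurial ships with.

    def merge_article(base, local, upstream):
        """Decide what happens to one article when a local wiki branch
        pulls in changes from its upstream source (e.g. Wikipedia.de).

        base     -- the article text as it was when the branch was created
        local    -- the current text in the local branch
        upstream -- the current text at the source
        Returns the merged text, or None if a human has to look at it."""
        if local == base:        # not touched locally: follow upstream
            return upstream      # (an upstream deletion removes it here)
        if upstream == base:     # upstream unchanged: keep the local edits
            return local
        if local == upstream:    # both sides made the same change
            return local
        return None              # both sides diverged: needs attention

    def merge_branch(base_wiki, local_wiki, upstream_wiki):
        """Merge whole wikis given as {title: text} dicts. New upstream
        articles are created, conflicting ones are collected for review."""
        merged, conflicts = {}, []
        for title in set(base_wiki) | set(local_wiki) | set(upstream_wiki):
            result = merge_article(base_wiki.get(title, ""),
                                   local_wiki.get(title, ""),
                                   upstream_wiki.get(title, ""))
            if result is None:
                conflicts.append(title)
                if title in local_wiki:
                    merged[title] = local_wiki[title]  # keep local until resolved
            elif result:                               # empty string means deleted
                merged[title] = result
        return merged, conflicts

Everything that ends up in the conflicts list is exactly the kind of place where the wiki would ask for your attention instead of merging silently.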

Some people think it is not possible to mark up all content, and I am also not sure if it really makes sense to display meta information on a page itself. Rather, the meta information should be guessed and added automatically. So back to the article about a person: these articles all contain similar sections. One could also identify some links as objects. If one sees text and content as object oriented, it would be pointless to mark up the content by hand just to indicate what it is. Look at the example of Semantic MediaWiki in Wikipedia:

… the population is [[Has population:=3,993,933]] …

one could also think: why is the wording “the population is…” not indication enough that this number is the population? Sure, I know that computers do not recognize all content today. But if one had a recognition engine that concentrates on similarities and is trained to identify certain kinds of content, I do not think this is a big problem. All city articles in the English Wikipedia are categorized, for instance, so we can identify that an article is about a city with no trouble, and then you often find a table where the population is indicated. So I think it would be almost no effort to find out the population.
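As a rough sketch of how little effort this could be, assuming the article's wikitext carries an infobox-style "| population = …" field (the real field names vary between templates, so treat this purely as an illustration):

    import re

    def population_from_infobox(wikitext):
        """Look for an infobox-style field such as '| population = 3,993,933'
        and return it as an integer, or None if no such field is found."""
        match = re.search(r"\|\s*population\s*=\s*([\d.,]+)", wikitext,
                          re.IGNORECASE)
        if not match:
            return None
        digits = re.sub(r"[^\d]", "", match.group(1))
        return int(digits) if digits else None

    print(population_from_infobox("{{Infobox city\n| population = 3,993,933\n}}"))
    # prints 3993933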

If this does not work, one could try to find the information in the flowing text. This could be done by proximity detection between the word “population” and a number. If there are any doubts, a human can still open an article and mark up the text: you get a menu for articles in the city category and the task to mark the text section containing the population, then save that information. The knowledge from that example can then be used to find the population data in a new article. Maybe it would also be nice if those city article classes could be extended easily. In fact “population” by itself does not say very much; it does not tell you, for instance, when this population was counted. I could also imagine Wikipedia articles being written by robots: tell one to fill in the class information for a city article, and it could identify the information on the WWW just as it can in the wiki itself, and then write an article from some given templates. Or one could implement a search inside a wiki where you formulate a question from elements combined with boolean operators. You would get search results showing which pages on the WWW contain the information you are looking for; you could then tell the engine which of the results actually contained that information, and perhaps also import content by marking text or clicking on an image, video or music file.
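The proximity detection mentioned at the start of this paragraph could begin as crudely as the following sketch: take the first large number that appears shortly after the word "population". This is a made-up heuristic, not a real recognition engine, but it shows how small the first step is.

    import re

    # matches 3,993,933 / 3.993.933 / plain 3993933
    NUMBER = r"\d{1,3}(?:[.,]\d{3})+|\d{4,}"

    def guess_population(text):
        """Naive proximity heuristic: return the first large number found
        within about 60 characters after the word 'population', else None."""
        for hit in re.finditer(r"population", text, re.IGNORECASE):
            window = text[hit.end():hit.end() + 60]
            number = re.search(NUMBER, window)
            if number:
                return int(re.sub(r"[^\d]", "", number.group()))
        return None

    print(guess_population("… the population is 3,993,933 …"))  # prints 3993933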

I haven’t seen these ideas mentioned anywhere, nor implemented in any wiki I have seen. But if organizing information is the goal of wikis, we surely need these next steps. If anybody can point me to implementations of any of these ideas, I would be glad to hear about it!


Why is Wikipedia dying?

What those with some knowledge of wikis could foresee has now become reality: users and content on Wikipedia are not growing as they have in the past (see link). Why is this happening? First we must ask: why did Wikipedia work? Because everybody could contribute, and every contribution was more or less welcome.

What happened? The reductionists took over: journalists, or simply people who like to control others, and they implemented a DEMOCRATIC system to control the content and the users. So it switched from anarchy to democracy. But wikis live by their anarchistic nature and not by democracy. Democracy works by majority control and also by the control of a few over others. Anarchy, on the other hand, relies on people's self-responsibility and plays with rules that can change daily. But once rules are written in stone, some people start telling others what they may do and what not.

Actually this turns a lot of people off, as it does now in Wikipedia. One could already see these problems in Wikinews, which has become an irrelevant source for news because it was created from the start with many rules inherited from Wikipedia. In Google News there is a total count of 45 results for the word Wikinews, most of them just mentioning Wikinews as a news source, while Indymedia has a count of 276. This is also reflected by searches in Google Trends. One cause is that Wikinews tries to keep a neutral point of view and does not like original reporting. Neutrality is a nice thing, but it does not fit the wiki principle. They don't give articles the space to grow, so nobody cares. Indeed they seem to care more about the principle than about good news.


Inclusionism vs. .* ?

Inclusionism is a philosophy held by Wikipedians who favor keeping and amending problematic articles over deleting them (read more at Wikimedia's Meta-Wiki). That is just a start for discussing some issues that wikis and open source projects have in common. I think one could call me an inclusionist, though not in all aspects. I love the inclusion of everything, so that nothing is lost.
I have just experienced the force of Wikipedia deletionists in the German Wikipedia. I wrote an article about unconferences. It was deleted, but only weeks later a similar article with the title BarCamp was written. The deletion was led by a highly respected Wikipedian who is a journalist. She searched for the German word “Unkonferenz” and only got 64 results, so the article got deleted because she argued it was not relevant. Had she searched in English, she would have gotten 1.2 million results. This is just one example, but a good one, of how “quality assurance” actually leads to less quality and redundant work. I would not say that one should never delete a Wikipedia article, but it should be the last resort for really stupid articles that do not make any sense at all. At least a REDIRECT should be possible.

Similar problems come up in open source. See the article “About leaving” by Russell Coker: Fedora decided not to support Xen on older CPUs like his (which he got from Red Hat when he left the company). There we see the problems with the classic WONTFIX approach. I find that distributions like FreeBSD have this WONTFIX attitude more often, and sure enough Fedora also shows it more often than Debian. I think it is understandable if you have limited resources but want to get a working release out in time. The problem starts when people have fewer options and are forced to switch (like from Fedora to Debian). There are other examples where people switch from Debian to Fedora for similar reasons. For my part, I switched to Fedora because Debian never had up-to-date software. Ubuntu really filled a gap here; maybe I would have switched to Ubuntu rather than Fedora if it had existed at that point. I think the problem is that users often like to have a fork, something slightly different, but the effort of switching or extending is often big. We still have different package formats. So as a user you often stumble across a site where a developer builds only for his distribution (that might be Gentoo, Debian, Fedora/Red Hat, etc.), and that just is not what you are using right now. I think there is something wrong in these development processes. People start to write for one distribution because they have limited resources. At the same time that means that many users will not be able to use their packages and that extra work will have to go into migrating the package.

I think classic distribution development is outdated. Understandably so, but outdated. This approach only makes sense for tools that should only ever run on one distribution and never on another platform. I am not a developer, but I think it should be possible to automatically build for different distributions while programming. OK, sometimes you want to build against a library version that does not (yet) exist on other distributions, but this should be solvable. Maybe a tool could help you decide what to build against, knowing which library versions all distributions ship (a tiny sketch of that idea follows at the end of this post).

Also, I think development tools should enable users to write code that is immediately published online, as with wikis or with Gobby. So certainly a development tool needs a Jabber chat built in! I am talking about live programming. And I am also astonished that translations of GNOME still happen via mailing lists and not while the code is written. This is ridiculous; it takes weeks instead of hours or minutes to fix some characters. The problem is that open source development is “traditionally progressive”, but in fact it often does not use the newest technologies to do the job better. Hail to Launchpad, which enables very easy translating without people having to subscribe to mailing lists and so on. This social software stuff really is about enabling people to help each other more easily without much administrative hassle.

But organisations like Fedora think that open source development needs strict organisation, while in fact it does not. Or better: development does not, creating a distribution DOES. Fedora really chose the opposite principles to Debian: while Debian chose to release when “it is ready”, Fedora chose to release on a regular basis. But a release is often nothing more than a working snapshot. I found the efforts of Fedora Unity interesting because they were able to build releases of their own without all the administrative overhead. I really think development and release building should be completely separate processes. Distributions should not be proprietary, so it just does not make much sense to waste time trying to build a product as a whole. I think distributions need to share many more resources. Distributions like Gentoo, Fedora and Debian should have a collective developer base, so that many packages get audited and worked on together, and only after that does the packaging happen, with all developers trying to automatically build packages for all possible distributions. After that, distributors fetch these packages and make installable ISOs out of them. In open source, combining forces always leads to better results.

Do your forks, like Inkscape did from Sodipodi, but do not develop for a distribution. Development needs freedom. Freedom from policies like the DFSG or Fedora's meritocracy. It is just plain stupid to bind users and developers to a specific philosophy. Science and software development need as much freedom as possible. If the common basis were bigger, I could see much more choice for each distribution. The authors could enforce the use of specific licenses if they want, but it should not be the distributions that bind the developers. And you can see that on many distributions nowadays: take OpenBSD, take Debian, take openSUSE or FreeBSD. All have their philosophy, but mostly development and distribution come as one. Let the developers discuss their philosophies and let the users decide what they want. The problem today is that you really would have to build Linux From Scratch and do all the work yourself if you want to have your freedom. But doing all the work is also redundant and unnecessary!
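And here is the little sketch promised above: a toy version of a helper that tells you which distributions already ship the library versions your program needs. The distribution names and version numbers are invented for illustration; a real tool would query the distributions' actual package databases.

    # All data here is made up; a real tool would query package databases.
    REQUIRED = {"gtk": (2, 8), "glib": (2, 8)}      # minimum versions needed

    DISTRIBUTIONS = {                                # hypothetical snapshots
        "fedora": {"gtk": (2, 8), "glib": (2, 8)},
        "debian": {"gtk": (2, 6), "glib": (2, 6)},
        "gentoo": {"gtk": (2, 8), "glib": (2, 8)},
    }

    def buildable_on(required, distributions):
        """Return the distributions that ship at least the required versions."""
        return [name for name, libraries in distributions.items()
                if all(libraries.get(lib, (0,)) >= version
                       for lib, version in required.items())]

    print(buildable_on(REQUIRED, DISTRIBUTIONS))     # ['fedora', 'gentoo']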


Future of GNOME = future of free desktops!?

John Williams asks about the future of GNOME. I would agree with his analysis, though not with his conclusions. I think the problem is the definition of GNOME itself. Historically it was important to build GNOME as an organisation as well as a desktop, and companies have since built upon GNOME. The problem today is that the software world is changing quickly. Since the desktop environments KDE and GNOME hit the market, they have gained at least some popularity in the Linux community and beyond. But the demands users place on free software are growing. We have solved many of the basic issues. Just to remind you: we now have (more than one) free office suite with OpenOffice.org, and we have competing free software browsers whose technology has even made it into Safari, the standard browser of Mac OS. Both major desktop environments are much more stable and powerful today. The range of users running Linux and free desktops has grown massively. The future is bright, especially if you think about Asia, Africa and South America: Linux will spread quickly wherever money is a decisive factor. And if you think of special business software (where only one application will run on one computer), it will not matter what operating system it is installed on, so companies could use Qt or GTK as a basis and some kind of kiosk mode.
So what are we missing? I think our greatest lack is still collaboration. Big companies like Microsoft, Sun, Google, Oracle or Salesforce.com have a high degree of organisation. They can easily change direction, and it is easier for them to organize resources towards a common goal. The freedom in free software allows us to change quickly within many different small projects. There is no ONE Linux or ONE KDE direction. Many projects follow their own goals. What is essential, and what should be thriving, is what we have in common.

But people tend to think in groups. This makes things easier if you want to make small decisions, but it also makes things complicated if the groups are not open to each other. And we still have that, with still-rivaling communities like those of KDE and GNOME. In 2005 the core goals are the same: to build a desktop based on free software. Jeff Waugh declared an ambitious goal in the past: to have 10 percent of the desktop market for GNOME in 10 years. I am completely convinced that we will not even come close to this goal without giving up strict community thinking. We have a lot of great communities. Just think of WINE, Apache, Drupal, WordPress, Wikipedia, Fedora, Ubuntu, GNOME and KDE, linuxprinting.org, GIMP, Inkscape, Mozilla and many, many more. Many people in different projects know each other, talk and cooperate, but we still do not work together on common goals.

Just one example: in 2005 it is still the situation that if you have GNOME and WINE installed, you can install many Windows applications, but there is still no easy way to open the virtual hard disk in GNOME's Nautilus file manager (the same goes for Konqueror). That means people might even have a working combination of WINE, a Windows application and GNOME, and they download the Windows installer and install their application correctly, but the general user who does not know how to open hidden folders will not be able to open the virtual hard drive to work on the files. It seems that nobody has thought about that yet. The problem can be generalized by the word “integration”. You will only find those “bugs” if you do not think as a developer only for your own project, but begin to think from the user's perspective. That is why commercial projects like Linspire are more successful at targeting users. They are not really better technically, often worse, but they try to solve the users' problems. Microsoft has always done this too. We need to combine two things:

  1. make progressive free software
  2. think from the user's perspective

I think free software projects have essentially understood No. 1. They invented it; nobody has to tell them. But as for No. 2, there is a LOOOONNNG way to go. Free software is made by developers. There are marketing experts in the community, but most of them sit in professional companies like Red Hat, Sun or Novell, and they use the community for the goals of THEIR companies. We should stop being merely thankful for what they give us! They have their business model, they feed many of our developers, and this is a fair deal. But the free software and free desktop community itself should stand together. I see this as THE essence of free software: people who, as developers and users, wanted their freedom. The freedom will not come by itself.

We as the free software community must communicate with the users. We ARE the users, we ARE the developers, and we even have other professionals like journalists and marketing experts (like John Williams); we even have one millionaire (Mark Shuttleworth 😉 ). There does not have to be a fight between companies and the community, but it will be better for the companies to follow the community. The current situation, where free desktops are still not getting real market share, comes from too many companies like Sun that put themselves first, the customers second and the community third. I think for every free software company the community must come first, and it should include their own customers.
And yes, I see projects like Wikipedia and even Creative Commons as part of our culture. Software is gaining functions to embed free licences in graphics. Free content and free software combined are much more powerful. They are even more powerful if we add free standards like those from Xiph.org (Ogg/Theora), JPEG, PNG and XML to this soup.


Common Documentation

There are some exciting projects running. One of them is Rosetta, the web-based system for translating open source software into any language. It is not the same as, although similar in name to, The Rosetta Project (which is a global collaboration of language specialists and native speakers working to build a publicly accessible online archive of all documented human languages). Both refer to the Rosetta Stone as an inspiration.
