Category Archives: Programming

Idea for dynamic #QRcode generator

I don’t know if this exists yet. The idea is to have an application that displays an ever-changing QR code (like the program “watch”). For example, this could be:

  • A clock, so the QR code contains the current time with seconds
  • Location information
  • Temperature and other sensor data

[QR code image encoding the time 21:23]

A simpler solution would of course be a URI that points to all the information. But wouldn’t it be nicer to have the information directly, without an internet connection?

Another solution would be a bell or a key: every time somebody scans the code, it reveals a code that only a person standing right in front of the code/door can see. That way the system knows that somebody really is standing at the door, and you can decide to open it for them – either automatically, or by going there yourself after you get a message via IM.

The simplest way to experiment would be a program that takes any input and displays it as a QR code immediately. It would not even need to understand what is inside the QR code. But an image viewer might be too slow. If such a program already exists, please let me know. I am not a programmer, but I would like to see it. 😉
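Just to make the idea concrete, here is a minimal sketch of such a program, assuming the third-party Python “qrcode” package: it re-renders the current time as an ASCII QR code in the terminal once per second, much like “watch” would.

```python
# Minimal sketch: an ever-changing QR code in the terminal, refreshed once
# per second. Assumes the third-party "qrcode" package (pip install qrcode).
import time
import qrcode

def show(data: str) -> None:
    qr = qrcode.QRCode(border=1)
    qr.add_data(data)
    qr.make(fit=True)
    print("\033[2J\033[H", end="")  # clear the terminal and move the cursor home
    qr.print_ascii(invert=True)     # draw the QR code with ASCII block characters
    print(data)

if __name__ == "__main__":
    while True:
        show(time.strftime("%H:%M:%S"))  # encode the current time with seconds
        time.sleep(1)
```

Piping any other text into such a program (location, temperature, a one-time door code) would work the same way.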


Filed under Programming

GNOME’s Git

I never understood why GNOME switched to yet another client/server-based revision control system (SVN) back then. But who was I to tell them that much more is possible? I ain’t the coder.

Anyway, when I read this from the new F-Spot release:

This is the first release where we all used Git and it has massively paid off. Contributions are flowing in at a massive rate, from lots of people. See for yourself on Gitorious. Now that the release is out, it’s time to go over merge requests. It’s hard to keep up with them.

… I am happy that they did indeed switch to a distributed system, especially because it has always been hard to get write access to some repositories at GNOME. It is going to be interesting to see what other effects this has in the near future as people get comfortable with the new development style. Thumbs up!


Filed under Free Software, GNOME, Linux, Programming

GNOME: Oh dear, here they come again

I was hopeful to see some major changes in GNOME Topaz (3.0). But now I have skimmed the discussions on the marketing list, especially these threads:

  1. GNOME Marketing Strategy (Was Audiences), Paul Cutler
  2. GNOME 3.0 slogans, Michael Hasselmann

PLEASE DON’T (do it again)

Don’t waste your time thinking again about target audiences and slogans. Some say the real target audience is the distributions. But distributions care about whatever their users care about. You have users – a desktop is something universal.

It doesn’t work that a handful of coders and other geeks gather virtually every six months and think up some new marketing stuff. The GNOME 3.0 goals were a good direction. The problem now is to get these ideas done. I don’t think GNOME will be able to do it all in one step. The problems and solutions are already on the table. There are GNOME users – you need to communicate with them. Don’t let things happen like Ubuntu did with the FUSA applet.

Those discussions are fruitless and come up every once in a while; they have never led anywhere. This is partly because GNOME is controlled by the coders and by major corporations like Nokia, Canonical, Red Hat or Novell, who finance some projects and pay some high-profile developers. Those two groups set the agenda. GNOME does not and will not have any marketing of its own. It does have some accidental marketing, but this is never thought through. Who should deliver such messages? If a slogan is chosen, do you really think any distributor cares about slogans?

In the past, marketing was called propaganda. It is a way to manipulate the human mind – it is neither a development model nor an organizational model. GNOME’s organization is not very well staffed. There is not a lot of money and only a handful of people doing the work. The GNOME Board does not have a lot of power, and the developers do not necessarily listen to what they say. Even less is true for the GNOME marketing team: no developer really cares about what they think or write. Fact.

I think GNOME has tried from its birth to be user-focused, but it still struggles with this goal, and when people draw a picture of GNOME the users still do not appear in it. Developers are important, but without users nobody would use the software. And GNOME, unlike OpenBSD, is not a project used mainly by experts, but very broadly. This is not reflected in the development process. I think the best thing would be to integrate users more into development decisions. The answer is not to let some geeks define goals for the users; the users have to do that themselves. The only way to give feedback is GNOME Bugzilla, and that is mostly not very user-friendly. There is no option for users to give simple feedback. A user submitting a bug report often gets a reply asking them to try out alpha or beta versions of the software package. This can’t be right! Those geeks are just way too far away from general users to feel what they might want or need. Even worse: why should any developer be motivated to solve an issue a user has?


Filed under Free Software, GNOME, Linux, Programming

tag based working – 1

The more I do with TAGS in blogs, in microblogging and with bookmarks, the more I think this could be a way to work with a lot more.

One problem we face when working with keywords is the selection of keywords. We have many different systems of keywords – for example for photo management or music categorization. Also, as I say “Categories”: categories are keywords/TAGS, aren’t they?

I know they are not, REALLY. But I am reminded of the massive discussion about backlinks vs. categories in wikis.

A CATEGORY is something big… TAGS are small one-word notes attached to a thingy – an object. Maybe a photo, maybe an article.

Objects we know from programming. TAGS are, or should be, attributes of those objects.

NOW… I guess most TAGS should make sense for every object, or for MOST of them.

TAGS are also common knowledge. The act of tagging is important, too. On blogs, like on WordPress, we might get hints on how to tag an article.

But how about software that suggests tags to us after we have written an article? Like this one. I can assume that my article contains some words more often than others. If we remove common words like “the”, “does” or “wrote”, we are left primarily with nouns, and also attributes or adjectives.
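A very rough sketch of how such a suggestion could work, counting word frequencies and dropping common stop words (the stop word list here is just illustrative):

```python
# Rough sketch of the tag suggestion idea: count the words of an article,
# drop common stop words, and propose the most frequent remaining words.
import re
from collections import Counter

STOP_WORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "it",
              "that", "this", "we", "i", "you", "does", "wrote", "with"}

def suggest_tags(text: str, count: int = 5) -> list[str]:
    words = re.findall(r"[a-z]+", text.lower())
    frequencies = Counter(w for w in words if w not in STOP_WORDS and len(w) > 2)
    return [word for word, _ in frequencies.most_common(count)]

# Example: suggest_tags(open("article.txt").read())
# might return something like ["tags", "tagging", "objects", "category", "photo"]
```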

Tags could be interrelated. This is why some web software offers popular tags. Many desktop applications still do not import the knowledge of the web.

Tagging also means that there is usually more than one way to tag virtual objects. There always will be. Every human will tell a story differently. The divergence is not a problem; it is part of the ESSENCE of communication. And what are TAGS other than communication?

If you write a blog, you tell others what you think you are talking about. Others might think differently. Some Web 2.0 software offers you the ability to add tags to other people’s objects, which are then publicly visible. Tags sometimes reflect your own perception – but maybe more often you try to be smart and catch people’s eyes by throwing TAGS at THEM to make your picture or article more visible. But you won’t choose every tag you can imagine. That could be one method to gather attention, but it would also be stupid. If you want to sell a product, it is not recommended to promise EVERYTHING.

Google does not really honor keywords that much, because they are a weak concept for public content. The concept becomes stronger when people vote on popularity. But greater popularity of an article does not always indicate that it fits one of its tags best!

So what do we need? I think we might need some kind of TAG service that spans different subjects and allows people to talk about tagging and follow different tagging strategies. Wikia Search allowed users to vote on which search results are nicer – but you can’t just allow anybody to vote on anything if the end result is supposed to be smart.

Or better: it’s OK to allow anybody to do anything, but then you should also allow the user to select which choices she thinks are smart. Tagging should be like talking to our friends and neighbours about where to buy things or get them repaired. We may come to the conclusion that some of our friends have a better idea of where to buy good products than others, and we will turn to them more often when it comes to buying.

The same may be true for personal matters. The one who knows much about buying might be clueless when it comes to personal relations – and somebody else may have good advice for you… the same goes for tagging strategies.

We now often see voting systems on Amazon and elsewhere – but they are often not sufficient and are also corruptible.

I also think that when it comes to working on the computer, tagging could be cool if the computer had a fuzzy way to recall our choices.

I imagine that saving an image would not mean selecting a file location, but rather tagging it – in the same manner we do after uploading to a photo sharing site like Flickr. In many ways the idea of the online desktop is not that bad – but in a different sense. The interesting part should not be to integrate Flickr or MySpace better into our desktop, but to give the user more world knowledge in his day-to-day applications. We get some of that already – we can let the Last.fm player play our favourite music (which works more or less well), and we get some categorisation for the music albums we import into our desktop via CDDB databases. But this is mostly just an added feature or plugin and not something deeply integrated.
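As a thought experiment, “saving by tagging” could be as simple as a small store that records a file together with its tags and retrieves it by tag instead of by folder. A hypothetical sketch with SQLite (the table layout is made up for illustration):

```python
# Hypothetical sketch of a tag-based store: files are recorded in SQLite
# with their tags, and retrieved by tag rather than by folder location.
import sqlite3

def open_store(path: str = "tagstore.db") -> sqlite3.Connection:
    db = sqlite3.connect(path)
    db.execute("CREATE TABLE IF NOT EXISTS objects (id INTEGER PRIMARY KEY, path TEXT)")
    db.execute("CREATE TABLE IF NOT EXISTS tags (object_id INTEGER, tag TEXT)")
    return db

def save(db: sqlite3.Connection, file_path: str, tags: list[str]) -> None:
    object_id = db.execute("INSERT INTO objects (path) VALUES (?)", (file_path,)).lastrowid
    db.executemany("INSERT INTO tags VALUES (?, ?)", [(object_id, tag) for tag in tags])
    db.commit()

def find(db: sqlite3.Connection, tag: str) -> list[str]:
    rows = db.execute(
        "SELECT path FROM objects JOIN tags ON objects.id = tags.object_id WHERE tag = ?",
        (tag,))
    return [path for (path,) in rows]

# Example: save(db, "/home/me/holiday.jpg", ["photo", "beach", "2009"])
#          find(db, "beach")  -> ["/home/me/holiday.jpg"]
```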

Or think about work on the desktop – say I want to do some vector graphics work with the Inkscape editor (sort of an Adobe Illustrator clone). I then want to learn how to proceed – but maybe I also want to talk to people about how to use it.

Today people either use the help provided by installed manuals, or they search the web – forums, wikis, … – and maybe enter an IRC chat. And then they pose the same questions all over again, even if they have read the application’s FAQ. What might be interesting is if I could enter my question inside the application – maybe also by pointing at the section I am working on – and then get help documents as well as forum posts, or the possibility to chat directly via instant messaging with other users who are currently working in Inkscape and might be willing to help.

There used to be a service called Qunu (which seems to have been unreachable for some months now?) that organized instant messaging interaction. You could define tags for topics you were knowledgeable about, and people searching for such a tag could find you and contact you directly. What if, when using an application, I could register and state my knowledge level – and then, if other users are working on a project with “my” application, I can read their questions like in the groups on Laconica, and even decide to interact directly, maybe not publicly. It would also be possible to interact not only by word but also by action. Applications like Gobby, Inkscape and AbiWord have been working on the ability to work on shared documents online.

And when you save, maybe you tag something as public. Epilicious is a del.icio.us bookmark exchanger: a bookmark you wanted to share you also tagged with “epilicious”. Maybe just the public tagging would save the object online. Or like on upcoming.org – you can send an event to a group, and people who follow that group get a notice about the new event.

I really think this is the future of computer interaction. Window managers like wmii already allow you to tag an application so it appears on a specific virtual desktop.

Essentially all computer work is about organizing. In some way, printing is also a kind of organizing; it’s an export. F-Spot uses “export” for photos that are uploaded to Flickr. In this case I think it is not good to call that an export – it is one in the old sense, but in a new sense it is just another saving location.

We will see a lot more virtualisation of web space and the like. Some people might not even use a local hard disk any more. But they still need a place to save data like addresses.

This does not mean that we will not need locations. I think the Plan 9 way was very good – integrating all necessary location information into one file system. A completely different question is whether we need to show the location to users at all, or whether that would rather confuse them.

So maybe let’s create a tag-based desktop?

Comments welcome.


Filed under Free Software, Programming, Technology

Fedora Community

Max Spevack gave a talk about the Fedora community at FOSDEM 2009, which I suggest you listen to before reading on:

Essentially I think Max grabbed the “community” topic by the wrong handle. He elaborated a lot on how Red Hat and Fedora work together and how they enable people to build upon the tools that Fedora has invented. That’s all very nice, especially for Red Hat. In recent years Fedora has often stated that it does not interfere with Ubuntu at all. This always comes up when people compare the popular success of Fedora to Ubuntu.

Fedora is very developer-centric. What Fedora is missing is some warmth – some more “family” feeling. Do people feel comfortable? Fedora is also a big testbed for Red Hat – it can see which technologies work or are popular and which are not. That makes Fedora often bleeding edge, more so than a general user might want. Also, the support cycle is much shorter than Ubuntu’s, so Fedora is not really a distro you would want to put on your organisation’s desktops or servers; you will be forced to upgrade quite often. Fedora moves fast. But that’s getting off-topic from the community.

The fact is that the developer-centricity raises the barrier for non-developers. One thing I have already pointed out in another post is that even for editing the wiki you have to sign some papers.

My view is that it is very important that the connection between general users and developers is open and flowing. Fedora’s style is more “either you are a part of us or you are not”.

On April 23rd I will organize my first Ubuntu release party in my hometown. Why not for Fedora? Because essentially, also in marketing, Fedora INVENTED barriers and created the Ambassador program, which I interpret as a means to professionalise the marketing efforts and to make sure that people talk about the right things.

The problem is that this turns off a lot of general users who are perfectly capable of talking about Fedora and showing people how cool it is and what to do with it. Fedora’s problem is that technically it is slightly ahead – not by years, but rather by months – and that this alone does not attract people.

From all the talk I cannot really see which audience Fedora is addressing. I would say Fedora is for people who want a fairly new Linux as a build platform and who live in and like the Red Hat/Fedora world. So you can use Fedora to develop an application that will work on future versions of Red Hat. Fedora also contributes a lot upstream and so allows work to be transferred outside Red Hat and Fedora.

So in the end that makes Fedora not very attractive either for general users or for company desktops – beyond being the testbed for Red Hat. Fedora does not seem to have an autonomous agenda and depends highly on Red Hat’s decisions. It does not make much sense for self-employed Linux folks to base their installations on Fedora, nor does it make sense for the typical grandpa.

Some people at Fedora might agree and would define community as exactly that: a developer community. The problem is that this also means general users will not participate as wholeheartedly as they do elsewhere (for example at Ubuntu). And to make it clear: that’s a conscious decision by Fedora – everything from development and contribution to marketing is organized in a hierarchical way that DOES allow everybody to start contributing but in fact turns a lot of people off.

In my hometown I have not met one person who uses Fedora. Many early Linux users used SuSE – and if they were dissatisfied they switched to Ubuntu – and then there is the Debian, Gentoo and FreeBSD crowd. This means nobody ever sees Fedora, which means nobody ever sees Red Hat. If this is a conscious business model, it is not working here.

What is Fedora missing? I think for a start, people should be encouraged to talk about Fedora even if they are not official Fedora Ambassadors. Give people something to work with, encourage them to make Fedora their own. I also had the experience that nobody was willing to give a talk about Fedora at our local Linux conference – actually nobody even answered my plea. But it should be the other way around: Fedora Ambassadors should go out actively and seek opportunities to show Fedora. And here is also the problem – if only Ambassadors do it, Fedora will be shown in fewer places.

So I think the whole Fedora ecosystem has a problem, and that’s why Ubuntu is so far ahead in popularity. And I don’t believe you guys wouldn’t love it if people adopted Fedora just as much. Technically Fedora is much better than Ubuntu, it’s the better product – but you have very much given up on the popularity contest, which is sad. Even openSUSE is doing more in this regard, and it is slowly showing.

I don’t know who makes the strategies at Fedora, and maybe you guys are satisfied with the status quo. But what I think is that in the long term Fedora will be marginalized, especially as openSUSE, another RPM-based distribution, gains more ground.

That’s it for now.


Filed under Free Software, Linux, Programming, Technology, Uncategorized

Another Online Desktop Idea

GNOME has its vision of what an online desktop should be. I have another. The idea is to find a replacement for:

  • VPN network access
  • XDMCP graphical logins
  • SSH logins
  • etc.

My idea – and I am sure I am not the only one having it – is to have a local login to a desktop, but then be able to fetch common settings from a central server, and maybe also some data.

I will explain a possible session:

First you have a plain desktop, like a GNOME desktop. If you want to use the settings of your central account, you can do so by clicking on a link. You type in your password and you get all the settings you use. By that I mean things like your IMAP account, bookmarks, Jabber account, … maybe also desktop settings, volume settings, messaging preferences – and maybe not some location-specific settings like your proxy. Maybe you can register your location, or rather choose a dynamic location (because you use a public WLAN with stricter security settings and different IP addresses).

On a second session login the settings will be downloaded and the desktop environment will change. Potentially these settings could be served from a central GCONFD that runs as a root daemon instead of per session. Maybe this would also allow tunnelling some traffic through the server that runs this GCONFD.
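Just to sketch what that fetch-and-apply step could look like: assume a hypothetical HTTPS endpoint that hands out a user’s settings as JSON, applied locally with gconftool-2. The URL, the JSON layout and the keys are invented for illustration, and a real client would need proper authentication and encryption.

```python
# Rough sketch of fetching a settings profile from a central server and
# applying it with gconftool-2. Server URL, JSON layout and keys are
# hypothetical; authentication and encryption are omitted.
import json
import subprocess
import urllib.request

PROFILE_URL = "https://settings.example.org/profiles"  # invented endpoint

def fetch_profile(user: str) -> dict:
    with urllib.request.urlopen(f"{PROFILE_URL}/{user}.json") as response:
        return json.load(response)

def apply_profile(profile: dict) -> None:
    # Each entry maps a GConf key to a type and a value, e.g.
    # "/desktop/gnome/interface/gtk_theme": {"type": "string", "value": "Clearlooks"}
    for key, entry in profile.items():
        subprocess.run(
            ["gconftool-2", "--type", entry["type"], "--set", key, str(entry["value"])],
            check=False)

if __name__ == "__main__":
    apply_profile(fetch_profile("alice"))
```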

So what this does NOT do is:

  • It does not provide any secure connection like SSH or a real login to a server.
  • It does not provide a login to GDM through XDMCP
  • It does not provide any access to a VPN

It rather provides:

  • Information that a user has saved
  • Themes, Looks and other environment definitions
  • Maybe also access to data, if this is wanted. So if the user saves data on a central server, this desktop could offer ways to access it (via VPN, SSH, XDMCP, …). The ways offered could depend on the configuration of the GCONFD and on how the user defines access to his desktop.
  • It could also offer different VIEWS – coming back to earlier ideas I offered here in my blog – so as a user I could define a simplified, lightweight profile for my notebook when I am on the move, or for mobile devices. These views could maybe also be shared, anonymized or personalized, via email, Jabber, etc., so they could be downloaded, installed, executed and used.
  • A way to print something from anywhere in the world to a printer of your choice.

For privacy reasons the user should be given options to anonymize his shared views – or be warned if the connection is not encrypted or secure enough. These views could maybe also include many different desktops in one – so you import a HOME view and an OFFICE view and can switch between them like you do today with the workspaces in the GNOME panel. That would be useful not only while travelling but also for handling different uses. Users need different environments. One main problem people have is that their computers tend to mix all kinds of uses – somebody is working, has some private use and is also active in an organization. Today people sort data and information by creating folders. But the number of folders grows steadily – and often you only need one or two folders when you want to work on one subject. The other 500 folders are useless at that moment.

None of those problems are targeted by today’s desktop, nor by GNOME’s online desktop vision, which really just tries to integrate big websites into your desktop. I wish some of these visions could come true. Right now all desktops are much too conservative. I think Plan 9 may have done the groundwork for such an idea (representing all data as folders and files).


Filed under GNOME, Programming, Technology

In need of a major new GNOME panel

I suggest that people start working on an alternative GNOME panel now. I have seen some suggestions on a GNOME wiki page, but I think most of the directions are very wrong. Like what you see here:

Essentially these are imitations of the fancy Mac panel. But I think the Mac panel does not give us anything cool, and neither do AWN or the Kiba dock. Look at this video: at one point it shows how to play volleyball with the icons. How stupid is that? I mean, cool. Or better: I don’t care!

First of all, I still like the text menus, because you can access a lot of applications and settings without going through a lot of folders and subfolders. But I have some major problems with the panel:

  1. You can fix the position of a panel, but when I plug in my digital projector the panel moves to the other display (on the right). How can this be called a fixed position?
  2. When the size of the panel changes, the positions of the fixed icons change too. I have to re-sort many icons after I have detached my projector display. How fixed are those positions, then?
  3. So it is impossible to configure one monitor to show exactly the same things on every occasion. This comes from all the dynamic configuration – at least that’s true for Ubuntu. It is as if you always plug in a new display you have never attached before, and as if it made any sense for the panel never to be on the main display but always on the external one.
  4. You also cannot configure a second panel that is bound to one display.

These are only some of my points. Here is what I desperately need:

  1. A panel which is much less customizable and dynamic, because everything that can change produces random results or forces me to configure or reconfigure the panel. From my point of view the panel never caught up with the rest of GNOME. You can do nearly everything with the panel, which does not make any sense.
  2. I suggest that new work goes into a new panel which can be a replacement for the old one. Maybe some of the old code can be reused, but the essentials should be very different.
  3. I think one very important point is that screen/display configuration and the panel should be one thing.
  4. The ability to link the screens (1-4 or so) to specific displays. So let’s say I have two screens: one is the main screen of my notebook (screen 1 on the left), and the other has a different size (screen 2 on the right) and is configured for my projector display (which is 16:9).
  5. If I attach a display and configure the contents, the panel, etc., these settings should be saved for this screen and display, so that I get them back once I plug in that display again. The content (desktop icons) of a display could also remain available when the display is detached; then screen 2 should be reconfigured to a single-screen mode.
  6. Essentially, if you want to give a presentation you want perfect control over what the presentation screen looks like and what appears there. If you never know what will happen, a GNOME desktop cannot be used for such a purpose. The frustrating thing is that things rather seem to get worse. I am really thinking about switching Linux distribution because the dynamic screen configuration is so awful. I remember Fedora had a “system-config-display” which worked more reliably; I still don’t know why this is not used upstream. Maybe some people think this dynamic thing is actually good. Maybe it would be if it worked – but until then, please keep it as an experimental feature in SVN and do not put it on an Ubuntu LTS! Grrr. Sorry, I had to go through a lot of trouble, and still do, because of this thing.
  7. I would dump all current panel applets because most of them are useless. Instead I would suggest giving the panel some built-in functions like displaying the time and the weather, or, for advanced users, allowing them to put content on the panel taken from script output (see the sketch after this list). For example, if I add the hardware sensor monitor applet I get 10 or more icons on my panel and then have to find out which one is the important temperature. Instead of an applet, a user should have a setting where he can enable the display of a temperature, and hopefully GNOME shows the right one or gives the user the opportunity to enable the right sensor.
  8. Then there should be an area where the panel displays the icons of the user’s most-used applications. Maybe allow the user to say which applications should never appear. This would give the user perfect access to the most-used apps without forcing him to put them there. Why should he have to?
  9. As stated before, I think it would be most intelligent if the panel itself were the interface for configuring displays. So when you add a new screen/display you can choose which panel you want (no panel, a copy of the main display’s panel, a standard clean panel, …). And maybe have the ability to close/remove a screen by closing its panel, like you do with tabs in browsers.
  10. I also think organizing screens and applications via the panel should be more intelligent. The tiling window managers (wmii, dwm, …) invented the ability to group applications – so let’s say you can configure a graphics screen and GIMP, Inkscape, Blender, … all open on that one, or you have a mail screen where you work with email. Those screen layouts or definitions could be saved, so that you have a more general notebook screen, but if you go to work and attach your notebook to a large LCD display, graphical applications will appear there. Today it is rather primitive: you have screens 1-4 and have to move an application there manually on every occasion. Handling screens should also be as easy as handling tabs in a browser. Maybe in the future you will even be able to drag and drop a screen to a remote computer, so the other computer can work on or see what you are working on, or you can share a screen.
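As a small illustration of the “content from script output” idea in point 7: a panel slot could simply run a script like the following periodically and display its single line of output. The sysfs path used here exists on many Linux systems but may differ on yours.

```python
# Tiny sketch of a script whose one-line output a panel could display.
# Reads the first thermal zone from sysfs (the path may vary between systems).
SENSOR = "/sys/class/thermal/thermal_zone0/temp"  # value in millidegrees Celsius

def read_temperature(path: str = SENSOR) -> str:
    try:
        with open(path) as sensor:
            millidegrees = int(sensor.read().strip())
        return f"{millidegrees / 1000:.1f} °C"
    except (OSError, ValueError):
        return "n/a"

if __name__ == "__main__":
    print(read_temperature())
```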

So I think most of what is discussed so far for GNOME is nothing more than re-engineering what Apple did, maybe spiced up a little. The only interesting page on the wiki that I saw was the one about GroupBasedWindowManagement. I am pessimistic about GNOME or KDE being more creative in the future. Unfortunately the tiling window managers still have problems with many applications and often still require manual configuration. I really think maybe some new project should try to do things better without repeating past mistakes – like having fewer dependencies, so that operating systems like OpenBSD will also follow the development.


Filed under Browser, Free Software, GNOME, Linux, OpenBSD, Programming, Technology

Will Jabber be the new HTTP?

Jabber is getting more and more attention these days. As most freemail providers now also offer it as a chat protocol, and integration into different web software advances, I tend to think that it might become THE internet protocol. Why is that? Well, it allows communication between desktop applications and server applications. It also allows communication between servers, it allows complex messaging, and it is not full of spam. So I could imagine that we will eventually see some kind of Jabber mail as the new mail standard in the future – which also allows attachments or voice, while keeping a simple kind of addressing. It will NOT replace HTTP for sure – not as a web browsing protocol – but maybe for things like the exchange of small, personalized bits of information. Jabber is extensible and not a highly specialized protocol, so it can be used for many different purposes, and it is based upon XML, which again makes it more flexible.

I think it is not really a better protocol for things like serving web pages, if you think the way you are used to thinking. But look at the problems of the web – like authentication, and how people needed to set up solutions like OpenID to solve at least some of them. Now imagine that a browser like Firefox could authenticate via Jabber; this would also be a unique and open identifier. The only problem is that browsers speak HTTP and FTP and generally talk to sites anonymously and unencrypted. Other interesting things are RSS feeds and calendars: right now we are used to fetching that metadata mostly via HTTP, but this means we fetch anonymous data, unless we integrate HTTP authentication – and HTTP authentication is not really comfortable.

So generally, Jabber could act as THE authentication protocol – but it could also be the protocol for getting new metadata that we care about: either a client requests new metadata, or it gets a message with the new or changed data. I guess currently most RSS feed readers work by repeatedly fetching one RSS file and then showing the changes in the feed aggregator. Also consider that Jabber could fetch simple diffs of the data – and since the client software might maintain the full information, it could insert the new data into the existing set. This would also help mobile devices fetch data in less time. And it would mean that the user gets personalised data without having to log into websites from a mobile device; he would rather subscribe to a site via Jabber.
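A minimal sketch of that push idea, assuming the Python slixmpp library and a hypothetical feed bot that sends each update as an ordinary chat message (the account and the bot’s JID are placeholders):

```python
# Minimal sketch: receive feed updates pushed over XMPP instead of polling
# an RSS file. Assumes the third-party "slixmpp" package; the JIDs are
# placeholders, and the "feed bot" on the other end is hypothetical.
import slixmpp

class FeedListener(slixmpp.ClientXMPP):
    def __init__(self, jid: str, password: str):
        super().__init__(jid, password)
        self.add_event_handler("session_start", self.on_start)
        self.add_event_handler("message", self.on_message)

    async def on_start(self, event):
        self.send_presence()
        await self.get_roster()

    def on_message(self, msg):
        # The hypothetical feed bot pushes one update per message body.
        if msg["type"] in ("chat", "normal"):
            print(f"update from {msg['from']}: {msg['body']}")

if __name__ == "__main__":
    client = FeedListener("me@example.org", "secret")
    client.connect()
    client.process(forever=True)
```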

And there is another related thing that I think could become true: the always-on metaphor will become less important in the near future. Why? Because it might often be better to fetch large amounts of data via a wireless LAN when you are near a wireless hub than to always be available and download data through 3G or another “fast” new network. I do not think the new phone standards will get anywhere near wired or WLAN standards when it comes to speed – both technologies move forward, but phone standards will never be faster than WLAN standards. On the other hand, devices get faster processors and larger disks. So what I guess will happen is that your device will be able to contain something like a complete Wikipedia – and no one would really be so stupid as to browse Wikipedia online via a mobile device; rather, he browses it offline, with no connection to the outside, or at least a very limited one. The connections should be encrypted and individualized, so that each device only gets what it is missing, and only when it can fetch large amounts of data cheaply.

So instead of the computer we got used to using as a terminal to the real data on the internet, the computer will more likely be used more directly – the mobile devices AND the desktops. And that is why I also think the offline desktop will become more important rather than less important. This is also a matter of security. What a user should want is that the computer reduces the need to connect to the internet and to visit random websites with unknown content or status. It is generally a better idea to just have what you need in a controlled environment, instead of having to import random data from endless sources.

Funnily enough, on desktops more computing power is invested in searching data that is stored randomly – plus internet content from email, web or chat – than in storing data in a meaningful way. So I download a PDF to a random location (I have to make a choice), but then I have to use a desktop search engine to find it again. That is as if I put the letters I receive into random folders in my office and then had to search every folder every time. OK, computing power helps in finding such documents – but wouldn’t it be better if the data you fetch were already organized, and you did not depend on the logic of a search engine to find it again? The thing is that the category you associate with the document might not even appear in the name of the document or inside the document itself.

But again, the free software desktops are much too conservative to think of such a solution. So OK, Apple will do this – and free software desktops will follow five years later. 😦 Free software desktops would rather think about more bling-bling instead of helping the user and being ahead this time. I think this is also due to the fact that free desktops generally have no grand vision. Coders are more worried about deadlines or fixing stuff – or doing something cool. What I just described would require people who want to do it and who put different resources together. I think it is not a huge task – in fact I think it could be done rather easily with some tweaks. For example, GNOME’s Epiphany could, on download, neither save the document automatically nor ask for a location, but instead start an import wizard that suggests categories and allows you to add categories and text. To retrieve the document later, you would do so by date, file type, category or tag – and you should also be able to gather different documents under one label, so you could have graphics, PDFs and ODFs saved separately but retrieve them in one view with only a few words or clicks.

The old folder content view cannot help us any more as data keeps growing. But it is wrong that, instead of fixing the data storage metaphor, we create more and more apps that maintain their own databases of the files you have – on GNOME you can have Beagle, Tracker and F-Spot all indexing your hard disk into three different databases. That’s stupidity, not intelligence. In open source we should have every possibility to share technology intelligently and, when we develop, to also think about other apps. At least on GNOME, my impression is that many projects go their own way because the core desktop was devalued intentionally and potential core apps did not get any support from the core GNOME developers.

I think a new desktop vision should primarily focus on what people need to work with computers today – and how the computer could help them do this more easily. But let’s forget for a while how computers work today. So maybe this would mean writing many parts from scratch – like killing all file dialogues, because they are mostly unnecessary unless you want to export data.


Filed under Free Software, GNOME, Programming, Technology

Software philosophy

I recently stumbled upon a statement that OpenBSD lives a developer culture, meaning they fix things for themselves rather than for an abstract user. I had to think about this for a while. I think this is not a rare approach in software projects but in fact quite common. One CAN handle things that way, given that the software is really used mostly within a group of developers. This philosophy doesn’t work, though, if the group of users and the group of developers are not homogeneous – like in the GNOME project, where I would guess only a few users are also developers.

What I think is good about the idea of developing for developers is that it is a kind of pure, direct action: the people concerned with a matter act in the way they want things to be solved. There seems to be only one problem: not everybody can or will be a developer – that is due to the division of labor. Likewise, not every developer will cook his own food, build his own furniture, and so on. So a “healthy” mix would include those who cannot develop themselves but agree with the general philosophy of a software project and help where they can (bug reports, design, whatever).

Distributions contain the seed to do just that. Apache is one great example of a software project made by webmasters for webmasters. There is great power in this idea. Why? Because the coders understand better what they themselves need, and therefore those who have the same problem benefit as well. That’s why I think specialised distros, for example for musicians, make perfect sense. The good thing about it is that you can then forget about artificial marketing, because this in itself is a perfect economic and ethical marketing tool.

Where it starts to get complicated is when people who are rather unrelated to the general ideas or to the specific distribution are using the provided tools. Some developers expect that those people in fact have the same knowledge, or are willing to code to the same extent that they do, and they often only accept the position that people are at least on the way to doing so. This can certainly activate some users who are able to do such things. On the other hand, I think the general view of the average user – that software is there to do what SHE wants and that it is just the “job” of the developers to fulfill the requests – is plain wrong. My analysis is that these ideas come from a “poisoned” software market environment. The Microsofts, the Nvidias, the Apples, the AOLs and many others seem to have successfully implanted these beliefs:

  • A user does not need to care about software. The ideal is “it just works”.
  • Software is THE SOLUTION to your problems, given or sold to you.
  • Software is expected to have the user in its focus. Software which does not (yet) do what could be expected is not worth it.

In the open source movement you rather find these beliefs:

  • If you really want things to happen, do them yourself
  • The developers decide what goes in and which general direction is chosen
  • Often it is believed that a benevolent dictatorship is best for a software project (as with Python, OpenBSD or the Linux kernel)

One can see that these are rather opposing views of how things should get done. The mediation is often done by companies who employ hackers to implement the things their customers want. I know many developers live off this – but it really deforms the software environment. I would name Nokia as an example of a company that is able to pay developers to do what it wants and also to gain influence over the direction of a software project like GNOME. What happens is that power is transferred from the heads of the developers to the heads of a company – and then in fact neither the users nor the developers can decide the direction any more.

One could despair and ask: who should decide? What’s the solution? I think the best situation for most people is when those who are involved in either coding the software or using it are also those who decide what is going to happen next. Developers could say they only code for their own liking – but that could also just mean that nobody besides themselves would like to use their software. Then again, it is often in the very core interest of a developer to see his software used.


Filed under Free Software, OpenBSD, Programming, Technology

Comparing TV sets and PCs!?

What’s the difference between a TV set and a PC? You know you can buy a TV and then use it for maybe 6-7 years without being forced to do any updates or replace it, unless you want something better, bigger, whatever. I came to wonder why TV sets are so different when it comes to rhythms. I have no insight into industry development cycles, but here is what I assume:

  • A TV set may be developed in about one year from first ideas to production?
  • A company might release new models every two months, as only new models get media attention?
  • Nonetheless, an average customer may only buy a new set every 3 or 4 years.
  • A TV has limited functionality, and maybe the technology doesn’t progress that fast any more.

Now about PCs:

  • Maybe development also takes a year or so?
  • The release cycle is the same?
  • There is one difference, and that is the operating system. It is not part of the production cycle: operating systems are constantly developed and released; there are security fixes, small improvements, new functionality and also major releases.
  • The customer expects new hardware to ‘just work’ with his new PC.
  • The customer is forced to update the software for the system to remain secure enough.
  • Doing this fundamentally changes the computer’s basics.

One reason why PCs are so much more open to change is that they are less mature and are also used in much broader ways. Via the internet, new technologies arrive that did not exist two years ago. So if a PC did not change, it would soon be out of date.

But I also like to question whether this really makes that much sense. Why are PCs more open to attack? Frankly, because their innards are often open to the world via the internet. But doesn’t that mean that much of the operating system and software in use simply does things wrong?

I really think a PC should be secure by default, without anybody needing to fix things. How could this be accomplished? Well, first and foremost by simply reducing the size of the operating system, because every line of code you spare statistically reduces the possibility of brokenness!

PCs need to be task-oriented, which could also mean that some things won’t just work without patching or extending the system. But such restraint can benefit the functionality. The years of PC development have shown that it can already do a lot of things. I think maybe it’s time to shift the development and deployment strategy. Currently all OSes include automatic or semi-automatic updates via the internet. This doesn’t always make a computer work better or more securely. So why not rather work on a secure, working basis and then make a thought-through, strategic deployment?

Today there exist two kinds of operating systems – the free ones and the proprietary ones. The latter are marketing-oriented – so customer-oriented, but not necessarily pro-security – and innovation is something for visibility or for customer lock-in.

Free operating systems are developer-oriented, or oriented towards those who do deployments – if they mix with the developer group. So people who do deployments but cannot connect to the developers have a problem. There are distributors that try to mediate but at the same time make money: they hire developers and take money from customers. Other distributions like Debian don’t actually have a distributor company. All in all, it is not the developers who directly connect with the users, although the potential is there; mostly it is the geeky user fraction that connects – so free operating systems become geeky.

All operating systems have a problem giving customers or users what they want or need, because they tend to overlook the real problems. They are more busy fixing stuff and getting the next release out than actually solving the customers’ problems. And the distributor companies are more busy earning money, so they care mostly about bringing in some cash – and fixing problems is more a matter of promising than of fixing (which are two totally distinct actions).

I believe that good marketing means solving the problems of customers/users – so that is the real task, which I think proprietary operating systems currently often do better, while never fully satisfying. They keep customers as happy and as hungry as they need them. Quenching the hunger is not really on anybody’s target list.

Sometimes you find exceptional developers who indeed primarily think of the users’ problems, which is easier the more technical the subject is, or the more the user base and developer base mix. The hardest thing is to deliver a product where the developers are very distinct from the user base. Part of the solution could indeed be to teach users to use the tools (the operating system) the way they were meant to be used. So this again is an argument against “just works”.

Some products don’t need teaching. Like cars: people learn to drive – and the manufacturer’s job is just to sell the car. If somebody cannot drive, the manufacturer or the car salesman won’t teach him or her.

I believe that teaching has to be part of using an operating system, as you provide a product that is not as transparent as a car or a TV set. If you won’t teach them, people won’t use it – especially if they are quite familiar with one operating system and you would like them to switch. Unless they are already lost and frustrated, you won’t get them to change.

But teaching is not enough. Another important thing is that you need to listen to the users! The developers need to understand the feedback and act if necessary. This doesn’t mean that you do whatever the users tell you; somebody needs to understand the real needs. I have seen developers invoking the word pragmatism, thinking that they would just do what people want. But that doesn’t provide the vision. The developers (as a general term for the people who work on the distribution) need to understand the needs and plan how they can satisfy the needs of the users. They also have to know that a few loud users do not make up the majority of the user base. There is no easy way to get an operating system that is good. You will always lose some users, because you can’t always make the right decision for everybody.

Another interesting topic is how to actually make decisions and move on. There is no easy answer here either. The best thing one can do is to start with a vision and then work on getting there. Every decision that is made should be checked for what effects it will have. You need to agree on some common standards that make up the rules – which means that people can rely on these points, at least as the vision that developers are working towards, even if you can’t provide the solution now or soon.

One danger I see many operating systems struggle with is that they focus too much on the technical things. They indeed think that because they talk about it, it matters. But the technical stuff is rather a stream of changes one needs to deal with; the decisions and selections are more important than the features. Features and news are eagerly marketed, and somehow developers expect the user base to be eager for what’s coming. Actually, most won’t be. People are often more worried about whether their data is secure and, overall, whether they can work the way they are used to. I don’t want to suggest that change is bad – I just want to say it doesn’t really matter that much; it’s not the number one issue for most users. People are happy if the computer and the software work better, as long as this doesn’t mean that some core functionality goes away. As it turns out, developers often don’t care that much about data loss, as they are just too excited about the latest and greatest stuff.

So communication is important. Maintain a common understanding of the problems and of the decisions that are to be made. It should be transparent how things are decided. And a user needs to be able to foresee the future: he likes to get a release on the promised date, and he likes the operating system to behave as expected. People accept changes if they are well thought through and explained in detail. This is best accomplished with open discussions. Of course, I am mainly talking about free operating systems here.

How should an ideal operating system behave? It should keep its promises. So if you aren’t there yet, don’t make stable releases that just aren’t it! It’s OK to make alphas and betas – but don’t recommend using them if it will let your users down. At the end of the day, you as a developer are responsible for the quality of the software you release. Always remember that losing data can have fatal effects. For example:

  • Somebody loses data and then loses a big customer, or…
  • loses his job…
  • an important action cannot be executed – say the computer is used to organize food deliveries, and the data loss means people could die because things take more time…
  • medical data gets lost and somebody loses his life.
  • Maybe the software is used in a space mission and the mission fails due to software failure.

Developers tend to forget that computers are used EVERYWHERE. Indeed, a computer can mean the difference between life and death today. So an attitude of “please use our OS, but if it fails don’t blame us” is not acceptable. The consequences are not always so fatal. But what do you want to suggest? Do you want people to really use your OS in every circumstance, or do you just want it to be used by geeks who just want to play?

So if you claim that your software “just works”, you should make sure that it does. Strangely though, my impression is that “just works” almost always means “always fails”, while more conservative software more often “just works” without being marketed that way (rather with the “security” feature).

I hope I have inspired at least some developers and makers to rethink the way they do software and operating systems, and maybe given users some impression of what they should demand and expect. Of course, this is also true for those who select operating systems and deploy them: they should choose the operating systems that meet the criteria important to their users. You may think that what I say is trivial, but I think OSes are often chosen by what the deployers themselves know best, by what is newest, or by where they get the biggest share of the end price. So sometimes a Linux from a big company may sell better because it is more expensive, and the merchant can easily add a margin that fits the price of the end product. He does not necessarily make a quality choice.

To summarize: the choices are not easy, everybody can make a mistake, and what’s important is that you LEARN from your mistakes. So if some operating systems have let you down, leave them! Don’t go back to them or to OSes and distributions that are similar to them. If you don’t do this… you have been warned. 😉 This is true for every part of what I just wrote.

And as a final note: still, don’t let others or yourself take away the fun of working with computers. If you are careful on a few points you can have fun, and experimenting is fun, too – which is nice if you are not on a production system, or if you are a carefree individual.

Regards,

Vinci


Filed under Free Software, Linux, Programming, Technology