Welcome to Planet openSUSE

This is a feed aggregator that collects what openSUSE contributors are writing in their respective blogs.

To have your blog added to this aggregator, please read the instructions.

25 March, 2015

Michal Čihař: Enca 1.16

23:30 UTC


As a first tiny project in this HackWeek, Enca 1.16 has just been released. It mostly brings small code cleanups and missing aliases for languages, but also fixes some minor bugs found by Coverity Scan.

If you don't know Enca, it is an Extremely Naive Charset Analyser. It detects character set and encoding of text files and can also convert them to other encodings using either a built-in converter or external libraries and tools like libiconv, librecode, or cstocs.
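Enca's detection is more sophisticated than this, but the core idea — try candidate encodings and keep the first one that decodes cleanly — can be sketched in a few lines of Python (an illustration only, not Enca's actual algorithm; the candidate list here is made up):

```python
# Crude charset guesser: try each candidate encoding in turn and
# return the first one that decodes the bytes without error.
# Enca additionally uses language-specific character statistics,
# which this sketch ignores.
def guess_encoding(data, candidates=("utf-8", "iso-8859-2", "cp1250")):
    for enc in candidates:
        try:
            data.decode(enc)
            return enc
        except UnicodeDecodeError:
            continue
    return None

assert guess_encoding("Čihař".encode("utf-8")) == "utf-8"
assert guess_encoding("Čihař".encode("iso-8859-2")) == "iso-8859-2"
```

Note that the order of candidates matters: many single-byte encodings accept any byte sequence, which is exactly why real detectors need statistics rather than trial decoding alone.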

Full list of changes for 1.16 release:

  • Fixed typo in Belarusian language name
  • Added aliases for Chinese and Yugoslavian languages

Still, Enca is in maintenance mode only and I have no intention of writing new features. However, there is no such limitation on other contributors :-).

You can download it from http://cihar.com/software/enca/.

Filed under: Enca English SUSE | 0 comments | Flattr this!


Today we announced a collaboration between IceWarp and Collabora to start the creation of LibreOffice On-Line, a scalable, cloud-hostable, full-featured version of LibreOffice. My hope is that this has a huge and positive impact for the Free Software community, the business ecosystem, personal privacy, and more. Indeed, this is really one of the last big missing pieces that needs solving (alongside the Android version, which is well underway). But wait - this post is supposed to be technical; let's get back to the code.

A prototype - with promise

At the beginning of the LibreOffice project, I created (for our first Paris Conference) a prototype of LibreOffice On-Line using Alexander Larsson's (awesome) GTK+ Broadway - you can still see videos of that around the place. Great as the Broadway approach is (it provides essentially a simple Virtual Desktop model in your browser), the prototype taught us several important things which we plan to get right in LibreOffice On-Line:

  • Performance - the Broadway model has the advantage of presenting the full application UI; however, every time we want to do anything in the document - such as selecting, panning, or even blinking the cursor - we had to send new image fragments from the server: not ideal.
  • Memory consumption / Scalability - another side effect of this is that, no matter how unresponsive the user is (how many tabs are you long-term-not-looking-at in your browser right now?), it was necessary to have a full LibreOffice process running to stay responsive & store the document. That memory consumption naturally limits the ability to handle many concurrent clients significantly.
  • Scripting / web-like UI - it would have been possible to extend the GTK+ JavaScript to allow tunnelling bespoke commands through to LibreOffice to allow the wrapping of custom UI, but the work to provide the user interface expected on the web would still be significant.

Having said all this, Broadway was a great basis to prove the feasibility of the concept - and we re-use the underlying concepts, in particular the use of web sockets to provide the low-latency interactions we need. Broadway also worked surprisingly well from e.g. a nearby Amazon cloud datacentre. Similarly, having full-fidelity rendering is a very attractive proposition, independent of the fonts or setup of the client.

An improved approach

Caching document views

One of the key realisations behind LibreOffice On-Line is that much of document editing is not the modification itself; a rather large proportion of time is spent reading, reviewing, and browsing documents. Thus, by exposing the workings of document rendering to pixel squares (tiles) via LibreOfficeKit, we can cache large chunks of the document content both on the server and in the client's browser. As users read through a document, or re-visit it, there is no need to communicate with the server at all, or even (after an initial rendering run) to have a LibreOfficeKit instance around there either.

Thus in this mode, the ability of the browser's Javascript to understand things about the document itself allows us to move much

Michael Meeks: 2015-03-25 Wednesday

21:00 UTC

  • Happy Document Freedom Day - great to see a Collabora partner write some helpful thoughts about it. Of course we have a nice banner / wrap - and a custom LibreOffice theme that looks like this for the event:
    LibreOffice with Document Freedom Day theme


So what’s the strongest program you can make with minimum effort and code size while keeping maximum clarity? Chess programmers have been exploring this for a long time, e.g. with Sunfish, and that inspired me to try out something similar for Go over a few evenings recently:


Unfortunately, although Chess rules are perhaps more complicated for humans, Chess is much easier for computers to play! So the code is longer and more complicated than Sunfish's, but hopefully a Computer Go newbie can still understand it within a few hours. I welcome any feedback and/or pull requests.

Contrary to other minimalistic UCT Go players, I wanted to create a program that actually plays reasonably. It can beat many beginners, and on 15×15 it fares about even with GNUGo; even on 19×19, it can win about 20% of its games against GNUGo on a beefier machine. Based on my observations, the limiting factor is time – Python is sloooow, and a faster language with the exact same algorithm should be able to speed this up at least 5x, which should mean at least a two-rank level-up. I also mean to leave the code as my legacy, as I'm not sure I'll ever get back to Pachi – these are the parts of a Computer Go program I consider most essential. The biggest code omission wrt. strength is probably the lack of 2-liberty semeai reading and more sophisticated self-atari detection.
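For readers new to UCT, the heart of such a player is the UCB1 rule for choosing which move to explore on each simulation — a generic sketch, not code from the program itself, and the example numbers are invented:

```python
# UCB1: balance exploitation (winrate) against exploration
# (moves tried less often get a bonus).
import math

def ucb1(wins, visits, parent_visits, c=1.4):
    if visits == 0:
        return float("inf")  # always try unexplored moves first
    return wins / visits + c * math.sqrt(math.log(parent_visits) / visits)

# The tree search descends into the child maximizing ucb1().
children = {"D4": (7, 10), "Q16": (3, 10)}  # move -> (wins, visits)
best = max(children, key=lambda m: ucb1(*children[m], parent_visits=20))
assert best == "D4"
```

With equal visit counts the winrate decides; with unequal counts the exploration term can pull the search toward less-visited moves.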

P.S.: The 6k KGS estimate is based on playtesting against GNUGo over 40-60 games – the winrate is about 50% with 4000 playouts/move. Best I can do… But you can connect the program itself to KGS too:


Peter Cannon: Tee Shirt

12:47 UTC


@angryearthling “If you can read this you’re not dick_turpin, he’s illiterate!”


Frank Karlitschek: Scaling

08:36 UTC


I’ve visited both FOSDEM and SCALE over the last weeks, where I spoke with dozens of people and gave talks about ownCloud 8. We’ve been getting a lot of positive feedback on the work we’re doing in ownCloud (thanks!) and that has been very motivating.

Does it scale?

A question which comes up frequently is: “What hardware should I run ownCloud on?” This sounds like a simple question, but if you give it a second thought, it is actually not so easy to answer. I had a small cubox at the booth as a demonstration that this is one way to run ownCloud. But development boards like the Raspberry Pi and the cubox might give the impression ownCloud is only suitable for very small installations – while in reality, the world’s largest on-premise sync and share deployment has brought ownCloud to 500,000 students! So, ownCloud scales, and that introduces the subject of this blog.

If you look up the term scalability on Wikipedia, you get the explanation that software scales well if you get double the performance out of it when you throw twice the hardware at the problem. This is called linear scalability, and it is rarely if ever achieved.
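As a quick illustration (with invented numbers), scaling efficiency can be expressed as the throughput gained relative to the hardware added:

```python
# Scaling efficiency: 1.0 means perfectly linear scaling,
# lower values mean diminishing returns per extra node.
def scaling_efficiency(throughput_1, throughput_n, n_nodes):
    return throughput_n / (throughput_1 * n_nodes)

# Perfectly linear: 2x the nodes gives 2x the requests/s.
assert scaling_efficiency(100, 200, 2) == 1.0
# More typical: 2x the nodes gives only 1.7x the throughput.
assert scaling_efficiency(100, 170, 2) == 0.85
```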

The secret to scalability

ownCloud runs on small Raspberry Pis for your friends and family at home, but also on huge clusters of web servers where it can serve hundreds of thousands of users and petabytes of data. The current Raspberry Pi doesn’t deliver blazing fast performance, but it works, and the new Raspberry Pi 2 announced last month should be great hardware for small ownCloud deployments. Big deployments like the one in Germany or at CERN are usually ‘spread out’ over multiple servers, which brings us to the secret sauce that makes scalable software possible.

The secret to building great scalable software is to avoid central components that can become bottlenecks, and to use components that can easily be clustered by simply adding more server nodes.

How ownCloud scales

The core ownCloud Server is written in PHP, which usually runs together with a web server like Apache or IIS on an operating system like Linux or Windows. There is zero communication needed between the application nodes, and the load can be distributed between different application servers by standard HTTP load balancers. This scales completely linearly, so if you want to handle double the load because you have double the users, you can just double the number of application servers, making ownCloud perfectly scalable software.
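Because the application nodes share no state, the balancer needs no cleverness; a toy round-robin dispatcher (hypothetical server names) shows why adding a node adds capacity:

```python
# Toy round-robin load balancer: each request goes to the next node
# in turn, and no shared state is consulted, so adding a backend
# adds capacity directly.
from itertools import cycle

class RoundRobinBalancer:
    def __init__(self, backends):
        self._backends = cycle(backends)

    def route(self, request):
        return next(self._backends)

lb = RoundRobinBalancer(["app1", "app2", "app3"])
assert [lb.route(r) for r in range(6)] == ["app1", "app2", "app3"] * 2
```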

Unfortunately, an ownCloud deployment still depends on a few centralized components that have the potential to become scalability bottlenecks. These components are typically the file system, database, load balancer, and sometimes session management. Let’s talk about each of those and what can be done to address potential performance issues in scaling them.

File system scalability

The file system is where ownCloud has its data stored, and it is thus very important for performance. The good news is that file

24 March, 2015

Michael Meeks: 2015-03-24 Tuesday

21:00 UTC

  • Prep for Document Freedom Day tomorrow; chewed a lot of mail; misc. calls. Late customer call.

The Invisible Internet Project, better known by its acronym I2P, is a network overlay that aims to provide maximum anonymity on the Internet, in the style of FreeNet or Tor. Smaller than the latter, I2P


For the “13. Kieler Open Source und Linux Tage” on 18 and 19 September 2015, I am looking for volunteers to staff the openSUSE booth in Kiel with me. This would be the first time openSUSE is represented at KIELUX in northern Germany, so I need your help. :-D

What skills and knowledge should you bring along? It would be great if you know openSUSE well, so that you can answer the questions (often user questions) of interested visitors, make openSUSE appealing to potential users, and allay possible fears about switching. Questions about personal experiences with openSUSE and daily work with the system also come up. It is no problem if you are not familiar with some area; the other booth staff are happy to help each other out. Most importantly, the fun must not fall by the wayside!

What should you bring? For a live presentation, a notebook, tablet PC, or desktop PC running openSUSE makes sense. The more devices there are on site, the more they can be used for different purposes, e.g. video presentations.

How many people are needed for the openSUSE booth? From experience at other events, at least 3 people are needed for the booth: firstly, to better absorb the peak times, and secondly, so that everyone gets the chance to attend the talks they want while they are there. ;-)

Talks and workshops will also be held at the event. It would be super if someone could give a talk about openSUSE, to get more people excited about it. If need be, I will prepare a workshop.

How can I take part? Simply leave a message in the comments below with a valid e-mail address in the e-mail field, or send an e-mail directly to
mail (at) sebastian-siebert (punkt) de
I will then get back to you with further information.

Why “Kieler Open Source und Linux Tage” all of a sudden?

The backstory goes like this: in a conversation with my booth colleague Marcel Richter (openSUSE member and advocate) at CLT2015 (Chemnitzer Linux-Tage), I learned that the openSUSE community had never been represented with a booth at KIELUX. 8-O openSUSE’s on-site presence in northern Germany had thus been lying pretty much fallow. :( When I asked our partner booth Invis-Server, whose project is based on openSUSE, about this event in Kiel, the answer I got was that the team had always had an eye on KIELUX, but for these reasons never made it to northern Germany. :-?

Why has nobody staffed a booth for openSUSE in Kiel in the past? Some booth volunteers, for lack of time and/or money, could only attend exhibitions close to home. :-|

The openSUSE community should be represented at open source events not only in western, eastern, and southern Germany, but also in the north. That is exactly what should change this year, so I am now looking for more volunteers for our openSUSE booth in Kiel. :wink:

As an openSUSE member as well as advocate, I was almost


For those of you attending the openSUSE Conference in The Hague, we recommend these affordable lodging accommodations for your visit.


Van der Valk Hotel Den Haag – Nootdorp

The hotel is a short distance from Westvliet.

Double rooms with breakfast are 105 € per night
Single rooms with breakfast are 95 € per night


There is also a sleepover in one of the sport halls at the venue. For 12.50€, you get a bed, a blanket, and access to a shower. You can reserve your place during our registration process. This offer is limited to 50 people, so it is on a first-come, first-served basis.

Jakub Steiner: High Contrast Refresh

12:36 UTC


One of the major visual updates of the 3.16 release is the high contrast accessible theme. Both the shell and the toolkit have received attention in the HC department. One noteworthy aspect of the theme is the icons. To guarantee some decent amount of contrast of an icon against any background, back in GNOME 2 days, we solved it by “double stroking” every shape. The term double stroke comes from a special case, when a shape that was open, having only an outline, would get an additional inverted color outline. Most of the time it was a white outline of a black silhouette though.

Fuzzy doublestroke PNGs of the old HC theme

In the new world, we actually treat icons the same way we treat text. We can adjust for the best contrast by controlling the color at runtime. We do this the same way we’ve done it for symbolic icons, using an embedded CSS stylesheet inside SVG icons. And in fact we are using the very same symbolic icons for the HC variant. You would be right to argue that there are specific needs for high contrast, but in reality the majority of the double stroked icons in HC have already been direct conversions of their symbolic counterparts.
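The mechanism is roughly this: the SVG carries a small stylesheet, and swapping one colour value restyles the whole icon. A simplified sketch (made-up markup and colour values, not GTK’s actual implementation):

```python
# Recoloring an icon at runtime by rewriting the embedded CSS.
# The shapes stay identical; only the stylesheet's fill changes.
SVG_TEMPLATE = """<svg xmlns="http://www.w3.org/2000/svg">
  <style>.foreground {{ fill: {color}; }}</style>
  <path class="foreground" d="M0 0h16v16H0z"/>
</svg>"""

def render_icon(high_contrast=False):
    # High contrast just swaps the fill colour; the geometry is shared
    # with the regular symbolic variant.
    return SVG_TEMPLATE.format(color="#000000" if high_contrast else "#2e3436")

assert "fill: #000000" in render_icon(high_contrast=True)
```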

Crisp recolorable SVGs of the post 3.16 world

While a centralized theme that overrides all applications never seemed like a good idea, as the application icon is part of its identity and should be distributed and maintained alongside the actual app, the process of creating a high contrast variant of an icon was extremely cumbersome and required quite a bit of effort. With the changes in place for both the toolkit and the shell, it’s now far more reasonable to ask applications to include a symbolic/high contrast variant of their app icon. I’ll be spending my time transforming the existing double stroke assets into symbolic ones, but if you are an application author, please look into providing a scalable stencil variant of your app icon as well. Thank you!


Get ready for a good time in April and a flashback to old times. openSUSE will have Hackweek April 13 – 17 and everyone is welcome to participate.

All participants will receive the openSUSE Hackweek T-shirt. Participants can sign up at http://hackweek.suse.com to take part in openSUSE’s Hackweek.

Hackerspace will be available for anyone who wants to hack at our locations in Nuremberg, Germany; Prague, Czech Republic; Provo, Utah (USA); Taipei, Taiwan; and Beijing, China, but hackers can always participate in this event remotely.

To use hackerspace during Hackweek, contact the following people based on location by April 10: Provo – Craig Gardner cgardner@suse.com, Prague – Thomas Chvatal tchvatal@suse.cz, Beijing – Yan Sun ysun@suse.com,  Taipei – Max Lin mlin@suse.com and Nuremberg – Douglas DeMaio ddemaio@suse.de.

Join a team working on a project during Hackweek or create your own new project at https://hackweek.suse.com/projects/new.

To help promote your project and the event, use hashtag #hackit when tweeting about your project. Media are welcome to attend the event; openSUSE encourages people participating in Hackweek to blog about it and to contact local media to provide coverage on the event.

23 March, 2015

Michael Meeks: 2015-03-23 Monday

21:00 UTC

  • Mail chew, lots of 1:1's. Lunch, team meeting, calls, another team meeting. More calls.


We have already seen how one can easily install ZNC on a server. The best home-server solution for this kind of job is the Raspberry Pi. It consumes almost nothing and is silent. Only its little lights might bother you.

The distribution that has the latest version is, classically, Arch Linux. But what if you run openSUSE?
First of all, see how to install openSUSE on your SD card (or see the English wiki).

Now we have to compile ZNC, because it is not in the repositories for the Raspberry Pi.

First, install the following:

zypper in gcc-c++ gcc git libopenssl-devel make

Now let's download the latest version (you will find it on the official page):

wget http://znc.in/releases/znc-latest.tar.gz

Then extract the archive:

tar -xzvf znc*.*gz

Enter the directory

cd znc*

and run the command:

./configure

The next command will take quite a while:

make

Once it finishes, run the final command:

make install

And you're done. Now log in as your user and run:

znc --makeconf

to configure ZNC the way you want.
If you have a previous configuration, it is all kept in a hidden directory, ~/.znc. So you can simply copy it into your user's home directory and run the znc command to start the program.


Richard Stallman gave the opening talk at LibrePlanet 2015, and here you can watch it.


As you may have read on this blog, over the weekend of 21 and 22 March 2015 the annual LibrePlanet conference was held: a conference that brings together developers, activists, and free software enthusiasts from all over the world.

If you couldn't attend, it was possible to follow the talks via streaming. Many topics were covered, by many speakers.

LibrePlanet is supported and organized by the Free Software Foundation (FSF), whose president, as you well know, is Richard Stallman. In the opening talk, which took place on Saturday 21 March, Richard Stallman spoke about free software and hardware, among many other topics.

Here is the video of that talk (in English), which I have uploaded to Archive.org, so you can take a look if you find it interesting. From that same page you can download it in the free .ogv video format.

The initial part of the video is missing, where John Sullivan, the FSF's executive director, gives a short introduction…

The talk itself lasts no more than 20 minutes, after which Richard Stallman answers questions from the audience. Here is the video:

Read Richard Stallman's two articles on free hardware in Spanish:

If you want to see more videos from the event, visit the following link:



For quite some time, I've been working on a new UI for Weblate. As time is always limited, progress is not as fast as I would like, but I think it's time to show the current status to a wider audience.

Almost all pages have been rewritten; the major missing parts are the zen mode and source string review. So it's time to play with it on our demo server. The UI is responsive, so it works more or less on different screen sizes, though I really don't expect people to translate on a mobile phone, so not much tweaking was done for small resolutions.

Anyway, I'd like to hear as much feedback as possible :-).

Filed under: English phpMyAdmin SUSE Weblate | 2 comments


For quite some time I was pretty confident that Weblate would need a UI rewrite at some point. This is always a problematic thing for me, as I'm in no way a UI designer, and thus I always hope that somebody else will do it. Anyway, I spent a few hours on the train home from LinuxTag to see what I could do about it.

The first choice for me was to try Twitter Bootstrap, as I've had quite good experience using it for a UI at work, so I hoped it would work quite well for Weblate too. The first steps went quite nicely, so I could share the first screenshots on Twitter and continue working on it.

After a few days, I'm quite happy with the basic parts of the interface, though the most important things (e.g. the page for translating) are still missing. But I think it's a good time to ask for initial feedback.

The main motivation was to unify the two-tab layout used on the main pages, which turned out to be quite confusing, as most users did not really get to the bottom part of the page and thus did not find the important functions available there. Now all functions are accessible from the top page navigation, either directly or from a menu.

I've also decide to use colors a bit more to indicate the important things. So the progress bars are more visible now (and the same progress bar now indicates status of translation per words). The quality checks also got their severity, which in turn is used to highlight the most critical ones. The theme will probably change a bit (so far it's using default theme as I did not care much to change that).

So let's take a look at following screenshot and let me know your thoughts:


You can also try it yourself; everything is developed in the bootstrap branch in our Git repository.

Filed under: English phpMyAdmin SUSE Weblate | 4 comments

This is a follow-up to a post about the limits of human rationality. In that post I described some facts - just a few - that perhaps left you a little more in doubt about your cognitive abilities. Or at least more aware of the limitations our human condition comes with!


Unfortunately, these and the many other flaws in our thinking have consequences for decision making in our society, especially when there's money to be made. The lobby of weapon manufacturing is rather stronger than that of companies creating anti-slip mats for showers; and for car manufacturers, well, safety is merely a factor increasing the cost of cars, so there's little incentive for them to hammer on that issue either. The combination of our innate inability to judge the likelihood of these and other things to harm us, and the financial pressure on politicians, results in massive over-spending on what is in essence irrelevant or even dangerous and harmful to our society. The NSA is one example, stupidity around net neutrality is another, and the war on drugs is a third rather prominent one. And now Ebola, of course - a disease so unlikely to kill you, you're probably more likely to be killed by a falling piano.

I think it is pretty clear, as I mentioned above, that politics and business happily abuse our lack of rationality. But probably more often, 'the system' causes issues by itself, as the insanely huge political divide in the US shows. It pays off for the media to put extreme people in front of their audience - and today, we have a country where you can't discuss politics at the office, because people see the world so vastly differently that only conflict can come out of a conversation. Think of the biases I discussed earlier: these world views aren't likely to get fixed easily, either.
Never attribute to malice that which is adequately explained by stupidity.
I don't think anybody set out to create this divide - but it is with us now.

Now indeed, the media are part of a system working against us. They get rewarded for creating an impression of problems; and they are often uninformed and biased themselves. As John Oliver pointed out, we don't put a believer in regular human abductions by aliens in front of a camera to debate a scientist, attempting to give both sides of the debate an equal chance! We pity the few who still don't get that this, and many other issues, are settled.

Yet this is what often happens in areas where science has long come to a conclusion. Not just the moon landing but also vaccinations, global warming, evolution and a great many more things. Take the recent "Snowden wants to come home" media frenzy!

I don't think any of that is intentional. It's the system rewarding the wrong things. We are part of that 'system': we prefer news that supports our view point; and we prefer new and

22 March, 2015

Michael Meeks: 2015-03-22 Sunday

21:00 UTC

  • Off to NCC, spoke, back for a quick lunch. J. out to collect an exhausted H. from a fine YFC weekend, watched a program on time with the older babes. Bed early.


It's true: we are now more watched, more controlled. They know more about all of us than ever. But we care less and less about that privacy, and about that surveillance violating basic rights.


Yet despite that we don't protest; worse, we facilitate the tracking, because we have nothing to hide. And why is that?

Because we love being watched. Here are 7 good reasons why you too will end up loving the surveillance and tracking of your life.

Already convinced of the benefits of having your life tracked? Then drop those odd ideas and join the big club, leave the dissent behind; that's not "cool", don't be an outcast, just go with the flow…


21 March, 2015

Michael Meeks: 2015-03-21 Saturday

21:00 UTC

  • Up, set about getting some ethernet cabling into the new office through an existing hole. Rather pleasant to have a window to look out of, but cold.
  • Plugged away at Colossians 4, oddly biblegateway seems to not respond for me - hopefully due to over-use. Worked on talk for tomorrow until late.


One of the more interesting questions that came up at Pipeline Conference was:

“How can we mitigate the risk of releasing a change that damages our data?”

When we have a database holding data that may be updated and deleted, as well as inserted/queried, then there’s a risk of releasing a change that causes the data to be mutated destructively. We could lose valuable data, or worse – have incorrect data upon which we make invalid business decisions.

Point-in-time backups are insufficient. For many organisations, simply being unable to access the missing or damaged data for an extended period of time while a backup is restored would have an enormous cost. Invalid data used to make business decisions could also result in a large loss.

Worse, with most kinds of database it’s much harder to roll back the database to a point in time than it is to roll back our code. It’s also hard to isolate and roll back the bad data while retaining the good data inserted since a change.

How can we avoid release-paralysis when there’s risk of catastrophic data damage if we release bad changes?

Practices like having good automated tests and pair programming may reduce the risk of releasing a broken change – but in the worst-case scenario where they don’t catch a destructive bug, how can we mitigate its impact?

Here are some techniques I think can help.

Release more Frequently

This may sound counter-intuitive. If every release we make has a risk of damaging our data, surely releasing more frequently increases that risk?

A lot has been written about this. The reality seems to be that the more frequent our releases, the smaller they are, which means the chances of any one of them causing a problem are reduced.

We are able to reason about the impact of a tiny change more easily than a huge change. This helps us to think through potential problems when reviewing before deployment.

We’re also more easily able to confirm that a small change is behaving as expected in production, which means we should notice any undesirable behaviour more quickly - especially if we are practising monitoring-driven development.

Attempting to release more frequently will likely force you to think about the risks involved in releasing your system, and consider other ways to mitigate them. Such as…

Group Data by Importance

Not all data is equally important. You probably care a lot that financial transactions are not lost or damaged, but you may not care quite so much whether you know when a user last logged into your system.

If every change you release is theoretically able to both update the user’s last logged in date, and modify financial transactions, then there’s some level of risk that it does the latter when you intended it to do the former.

Simply using different credentials and permissions to control which parts of your system can modify what data can increase your confidence in changing less


Alex and I recently gave a talk at Pipeline Conference about our approach of testing in production.

With our limited time we focused on things we check in production. Running our acceptance/integration tests, performance tests, and data fuzzing against our production systems. We also prefer doing user acceptance testing and exploratory testing in production.

In a Continuous-Deployment environment with several releases a day, there’s little need or time for staging/testing environments. They just delay the rate at which we can make changes to production. We can always hide incomplete/untested features from users with Feature Toggles.

“How do you cope with junk data in production?”

The best question we were asked about the approach of both checking and testing in production was: “How do you cope with the junk test data that it produces?” Whether it is an automated system injecting bad data into our application, or a human looking for new ways to break the system, we don’t want to see this test data polluting real users’ views or reports. How do we handle this?

Stateless or Read Only Applications

Sometimes we cheat, because it’s possible to make a separately releasable part of a system entirely stateless. The application is effectively a pure function: data goes in, data comes out via some deterministic transformation.

These are straightforward to both check and test in production, as no action we take can result in unexpected side-effects. It’s also very hard to alter the behaviour of a stateless system, but not impossible – for example if you overload the system, its performance will be altered.
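A stateless application really is just a pure function, which is what makes it safe to probe in production — a tiny illustration with an invented transformation:

```python
# A pure, deterministic transformation: same input, same output,
# no stored state. Checking it in production leaves nothing behind.
def normalize(record):
    return {"name": record["name"].strip().lower(),
            "age": int(record["age"])}

out = normalize({"name": "  Ada ", "age": "36"})
assert out == {"name": "ada", "age": 36}
# Deterministic: repeating the check changes nothing.
assert normalize({"name": "  Ada ", "age": "36"}) == out
```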

Similarly, we can test and check read-only applications to our heart’s content without worrying about data we generate. If we can keep things that read and write data separate, we don’t have to worry about any testing of the read-only parts.

Side-Effect Toggles

When we do have side-effects, if we make the side-effects controllable we can avoid triggering them except when explicitly checking that they exist.

For example, an ad unit on a web page is generally read-only in that no action you can perform with it can change it. However, it does trigger side effects in that the advertising company can track that you are viewing or clicking on the ad.

If we had a way of loading the ad, but could disable its ability to send out tracking events, then we could check any other behaviour of the ad without worrying about the side effects. This technique is useful for running Selenium WebDriver tests against production systems to check user interactions without triggering side effects.
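A side-effect toggle can be as simple as a flag consulted before emitting the tracking event — a minimal sketch with invented names:

```python
# Side-effect toggle: the ad renders normally either way, but the
# tracking side effect only fires when the toggle is on.
class AdUnit:
    def __init__(self, tracking_enabled=True):
        self.tracking_enabled = tracking_enabled
        self.sent_events = []

    def record_view(self):
        if self.tracking_enabled:
            self.sent_events.append("view")
        return "ad rendered"

# Production checks can exercise the ad without emitting tracking calls.
unit = AdUnit(tracking_enabled=False)
assert unit.record_view() == "ad rendered"
assert unit.sent_events == []
```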

In a more complex application we could have the ability to only grant certain users read-only access. That way we can be sure that bots or humans using those accounts can’t generate invalid data.

Data Toggles

Ultimately, if we are going to check or test that our production systems are behaving as we expect, we do need to be able to

Benjamin Weber: Yield Return in Java

18:41 UTC


A feature C# developers often miss in Java is yield return.

It can be used to create Iterators/Generators easily. For example, we can print the infinite series of positive numbers like so:

public static void Main()
{
    foreach (int i in positiveIntegers())
    {
        Console.WriteLine(i);
    }
}

public static IEnumerable<int> positiveIntegers()
{
    int i = 0;
    while (true) yield return ++i;
}

Annoyingly, I don’t think there’s a good way to implement this in Java, because it relies on compiler transformations.

If we want to use it in Java there are three main approaches I am aware of, which have various drawbacks.

The compile-time approach means your code can’t be compiled with javac alone, which is a significant disadvantage.

The bytecode transformation approach means magic going on that you can’t easily understand by following the code. I’ve been burnt by obscure problems with aspect-oriented-programming frameworks using bytecode manipulation enough times to avoid it.

The threads approach has a runtime performance cost of extra threads. We also need to dispose of the created threads or we will leak memory.

I don’t personally want the feature enough to put up with the drawbacks of any of these approaches.
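For reference, a bare-bones version of the threads approach might look like the following (a sketch, not any particular library's code): a producer thread pushes values through a SynchronousQueue, which blocks until the consuming iterator takes each one. Written out like this, the per-iterator thread cost, and the leak if you abandon an iterator mid-stream, are easy to see.

```java
import java.util.Iterator;
import java.util.concurrent.SynchronousQueue;

public class ThreadYield {
    // Yields the first n positive integers from a background thread.
    static Iterable<Integer> positiveIntegers(int n) {
        return () -> {
            SynchronousQueue<Integer> queue = new SynchronousQueue<>();
            Thread producer = new Thread(() -> {
                try {
                    for (int i = 1; i <= n; i++) queue.put(i); // blocks until taken
                } catch (InterruptedException ignored) {}
            });
            producer.setDaemon(true); // limit the damage if the iterator is abandoned
            producer.start();
            return new Iterator<Integer>() {
                int taken = 0;
                public boolean hasNext() { return taken < n; }
                public Integer next() {
                    try { taken++; return queue.take(); }
                    catch (InterruptedException e) { throw new RuntimeException(e); }
                }
            };
        };
    }

    public static void main(String[] args) {
        for (int i : positiveIntegers(5)) System.out.print(i + " ");
        System.out.println();
    }
}
```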

That being said, if you were willing to put up with one of these approaches, can we make them look cleaner in our code?

I’m going to ignore the lombok/compile-time transformation approach as it allows pretty much anything.

Both the other approaches above require writing valid Java. The threads approach is particularly verbose, but there is a wrapper which simplifies it down to returning an anonymous implementation of an abstract class that provides yield / yieldBreak methods, e.g.

public Iterable<Integer> oneToFive() {
    return new Yielder<Integer>() {
        protected void yieldNextCore() {
            for (int i = 1; i < 10; i++) {
                if (i == 6) yieldBreak();
                yieldReturn(i);
            }
        }
    };
}
This is quite ugly compared to the C# equivalent. We can make it cleaner now that we have lambdas, but we can't use the same approach as above.

I’m going to use the threading approach for this example as it’s easier to see what’s going on.

Let’s say we have an interface Foo which extends Runnable, and provides an additional default method.

interface Foo extends Runnable {
    default void bar() {}
}

If we create an instance of this as an anonymous inner class we can call the bar() method from our implementation of run();

Foo foo = new Foo() {
    public void run() {
        bar();
    }
};

However, if we create our implementation with a lambda, this no longer compiles:

Foo foo = () -> {
    bar(); // java: cannot find symbol. symbol: method bar()
};

This means we’ll have to take a different approach. Here’s something we can do, that is significantly cleaner thanks to lambdas.

public Yielderable<Integer> oneToFive() {
    return yield -> {
        for (int i = 1; i < 10; i++) {
            if (i == 6) yield.breaking();
            yield.returning(i);
        }
    };
}

How can this work? Note the change of the method return type.
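The post is cut off here, but the trick it hints at can be sketched (this is my reconstruction, not the author's actual code): Yielderable can be a functional interface whose single abstract method receives the yield definition, while a default iterator() method lets it also satisfy Iterable, so a lambda becomes something you can hand straight to for-each. A simplified version that collects values eagerly into a list:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class YielderableDemo {
    // The collaborator handed to the lambda; values are yielded through it.
    interface YieldDefinition<T> {
        void returning(T value);
    }

    // Functional interface: the lambda body becomes the single abstract method,
    // and the default iterator() makes any Yielderable usable in for-each.
    @FunctionalInterface
    interface Yielderable<T> extends Iterable<T> {
        void define(YieldDefinition<T> yield);

        default Iterator<T> iterator() {
            // Simplification: run the definition eagerly and buffer the values.
            // A lazy version needs the threads or bytecode tricks discussed above.
            List<T> values = new ArrayList<>();
            define(values::add);
            return values.iterator();
        }
    }

    public static Yielderable<Integer> oneToFive() {
        return yield -> {
            for (int i = 1; i < 6; i++) yield.returning(i);
        };
    }

    public static void main(String[] args) {
        for (int i : oneToFive()) System.out.print(i + " ");
        System.out.println();
    }
}
```

The change of return type is what makes the lambda syntax legal: the method no longer returns a plain Iterable, but a single-abstract-method interface that happens to extend it.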

20 March, 2015

Michael Meeks: 2015-03-20 Friday

21:00 UTC

  • Up early; somehow managed to fit an unfeasibly large desk into the car with J's help - to create the nucleus of a downstairs office.
  • Plugged away at E-mail, and VclPtr fixing - finally wrote some documentation of what's going on there; needs some more work before posting to the list though. Plugged away at remaining issues with Noel's help.
  • Bruce & Anne over for lunch.


We have a disk with three partitions: sda1 (/boot), sda2 (swap), and sda3 (/). The goal is to grow the /boot partition at the expense of swap.

First we unmount all the partitions involved and check the file system.

# swapoff  /dev/sda2 
# umount /dev/sda1
# e2fsck /dev/sda1
e2fsck 1.42.6 (21-Sep-2012)
/dev/sda1: clean, 49/14056 files, 47157/56196 blocks

We use parted to first shrink and move the second partition, and then grow the first one into the freed space. Since the second partition is swap, we can simply move it without caring about its contents. In other words, we first break it completely and then recreate it.

# parted /dev/sda
GNU Parted 2.4
Using /dev/sda
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) unit s
(parted) print
Disk /dev/sda: 120103200s
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number Start End Size Type File system Flags
1 63s 112454s 112392s primary ext2 boot, type=83
2 112455s 1686824s 1574370s primary linux-swap(v1) type=82
3 1686825s 120101939s 118415115s primary reiserfs type=83

To resize a partition, the command is: resize partition_number start end

(parted) resize 2 224973 1686824
WARNING: you are attempting to use parted to operate on (resize) a file system.
parted's file system manipulation code is not as robust as what you'll find in
dedicated, file-system-specific packages like e2fsprogs. We recommend
you use parted only to manipulate partition tables, whenever possible.
Support for performing most operations on most types of file systems
will be removed in an upcoming release.
(parted) print
Disk /dev/sda: 120103200s
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number Start End Size Type File system Flags
1 63s 112454s 112392s primary ext2 boot, type=83
2 224973s 1686824s 1461852s primary linux-swap(v1) type=82
3 1686825s 120101939s 118415115s primary reiserfs type=83

Unfortunately, at this point everything got remounted automatically, so the first partition has to be unmounted again. The swap partition should already be usable by now, because mkswap was run on it automatically. Unfortunately, I have found no way to switch off all this automatic behaviour.

# umount /dev/sda1
# e2fsck -p /dev/sda1

Back into parted:

# parted
GNU Parted 2.4
Using /dev/sda
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) unit s
(parted) print
Disk /dev/sda: 120103200s
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number Start End Size Type File system Flags
1 63s 112454s 112392s primary ext2 boot, type=83
2 224973s 1686824s 1461852s primary linux-swap(v1) type=82
3 1686825s 120101939s 118415115s primary reiserfs type=83

(parted) resize 1 63 224972
WARNING: you are attempting to use parted to operate on (resize) a file system.
parted's file system manipulation code is not as robust as what you'll find in
dedicated, file-system-specific packages like e2fsprogs. We recommend
you use parted only to manipulate partition tables, whenever possible.
Support for performing most operations on most types of file systems
will be removed in an upcoming release.
(parted) quit
Warning: You should reinstall your boot loader before rebooting. Read section 4 of the Parted User documentation for more
Information: You may need to update /etc/fstab.

Done. parted not only resized the partition, it also silently grew the file system for us, and now warns us that the boot loader needs updating.


Today, Friday 20 March 2015, one of the most beautiful natural phenomena (in my opinion) visible to the human eye took place: a total solar eclipse. A solar eclipse occurs when, through an interplay of geometry and the positions of the Earth, the Sun and the Moon […]


The DuckDuckGo search engine donates $125,000 to 5 free software projects.


There are many search engines (and no, Linux is not a search engine!! ;) ), and the best known and most used is undoubtedly the one from the big G, but on this blog I have already talked a lot about DuckDuckGo.

DuckDuckGo announced today that it has donated $125,000 to five free and open source software projects. Five projects that this time are related to user anonymity and privacy. Let's see what they are:

$25,000 for The Freedom of the Press Foundation

For supporting SecureDrop, a free tool that lets users pass confidential information to journalists in a completely anonymous way.

This tool was originally developed by the late Aaron Swartz, and they hope this donation will improve the tool and expand its use by whistleblowers and journalists.

$25,000 for the Electronic Frontier Foundation

For its support of PrivacyBadger, a browser add-on that stops covert tracking while you browse the web: when many pages try to track you without your permission, this tool blocks that tracking. It is currently maintained by a single developer, and they hope this donation will help.

$25,000 for GPG Tools

For developing GPG Suite, a tool for OS X for encrypting and decrypting data and mail.

$25,000 for Riseup

For supporting Tails, a "live" operating system that you can run on almost any PC from a USB stick, whose priority is privacy and anonymity. It is the recommended companion to SecureDrop for journalists and whistleblowers.

$25,000 for Girl Develop It (GDI)

And in addition to the privacy-focused tools above, they also wanted to contribute to this programme, which works to involve more women in the free software and open source world.




For those reading my blog for the first time and don't know who I am, allow myself to introduce... myself.

I'm a self-proclaimed expert on the topic of email, specifically MIME, IMAP, SMTP, and POP3. I don't proclaim myself to be an expert on much, but email is something that maybe 1 or 2 dozen people in the world could probably get away with saying they know more than I do and actually back it up. I've got a lot of experience writing email software over the past 15 years and rarely do I come across mail software that does things better than I've done them. I'm also a critic of mail software design and implementation.

My latest endeavors in the email space are MimeKit and MailKit, both of which are open source and available on GitHub for your perusal should you doubt my expertise.

My point is: I think my review carries some weight, or I wouldn't be writing this.

Is that egotistical of me? Maybe a little.

I was actually just fixing a bug in MimeKit earlier and when I went to go examine Mono's System.Net.Mail.MailMessage implementation in order to figure out what the problem was with my System.Net.Mail.MailMessage to MimeKit.MimeMessage conversion, I thought, "hey, wait a minute... didn't Microsoft just recently release their BCL source code?" So I ended up taking a look and pretty quickly confirmed my suspicions and was able to fix the bug.

When I begin looking at the source code for another mail library, I can't help but critique what I find.

MailAddress and MailAddressCollection

Parsing email addresses is probably the hardest thing to get right. It's what I would say makes or breaks a library (literally). To a casual onlooker, parsing email addresses probably seems like a trivial problem. "Just String.Split() on comma and then look for those angle bracket thingies and you're done, right?" Oh God, oh God, make the hurting stop. I need to stop here before I go into a long rant about this...
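To see why the naive approach falls over, consider an address list whose display-name contains a quoted comma (a hedged illustration of my own, but RFC 5322 quoted-strings make such names perfectly legal):

```java
public class NaiveSplit {
    public static void main(String[] args) {
        String header = "\"Doe, John\" <john@example.com>, jane@example.com";

        // The naive approach: split on comma. The quoted comma inside the
        // display-name "Doe, John" wrongly splits one address into two pieces.
        String[] parts = header.split(",");
        System.out.println("naive split found " + parts.length + " parts");
        // A correct parser must tokenize (quoted-string, atom, angle-addr, ...)
        // so commas inside quoted strings are not treated as separators;
        // the real list above contains only 2 addresses.
    }
}
```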

Okay, I'm back. Blood pressure has subsided.

Looking at MailAddressParser.cs (the internal parser used by MailAddressCollection), I'm actually pleasantly surprised. It looks pretty decent, and I can tell that a lot of thought and care went into it. They use a tokenizer approach and, interestingly, parse the string in reverse, which is a pretty good idea, I must say. This approach probably simplifies the parser logic a bit, because parsing forward makes it difficult to know what the tokens belong to (is it the name token? or is it the local-part of an addr-spec? hard to know until I consume a few more tokens...).

For example, consider the following BNF grammar:

address         =       mailbox / group
mailbox         =       name-addr / addr-spec
name-addr       =       [display-name] angle-addr
angle-addr      =       [CFWS] "<" addr-spec ">" [CFWS] / obs-angle-addr
group           =       display-name ":" [mailbox-list / CFWS] ";"
display-name    =       phrase
word            =       atom / quoted-string
phrase          =       1*word / obs-phrase 
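A toy illustration of why parsing from the end helps (hypothetical code of my own, vastly simpler than the real parser): reading backwards through a name-addr mailbox, the first thing you meet is the angle-addr, so the addr-spec is identified before you ever have to decide what the leading tokens mean — whatever remains must be the display-name.

```java
public class ReverseParse {
    // Toy reverse parse of a single name-addr mailbox: find the trailing
    // angle-addr first, then everything before it must be the display-name.
    static String[] parseMailbox(String mailbox) {
        String s = mailbox.trim();
        if (s.endsWith(">")) {
            int open = s.lastIndexOf('<');
            String addrSpec = s.substring(open + 1, s.length() - 1);
            String displayName = s.substring(0, open).trim();
            return new String[] { displayName, addrSpec };
        }
        return new String[] { "", s }; // bare addr-spec, no display-name
    }

    public static void main(String[] args) {
        String[] parsed = parseMailbox("Jeffrey Stedfast <jeff@example.com>");
        System.out.println("name=" + parsed[0] + " addr=" + parsed[1]);
    }
}
```

The real parser obviously has to handle quoting, comments (CFWS), groups, and obsolete syntax, but the ordering advantage is the same.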


Alexey Fedorchuk: This announces the start of a new project — writing an e-book under the working title «Моя дорогая Betsy» ("My Dear Betsy"). It is devoted to version 2 of the Linux Mint Debian Edition distribution (LMDE below), also known by the name in the title. The announced book does not claim to be a systematic manual for the distribution. It is expected that it will not even be a collection […]
