Welcome to Planet openSUSE

This is a feed aggregator that collects what openSUSE contributors are writing in their respective blogs.

To have your blog added to this aggregator, please read the instructions.

01 April, 2015

Michal Čihař: Link

21:00 UTC


Jos Poortvliet: Link

21:00 UTC


This is part one of two blogs about how we make decisions and how our lack of rationality results in much of the mess we have today. I'll start with addressing the title of the blog - spying.

Much has been said about the NSA by very qualified people like Bruce Schneier, who compared the NSA to
the Maginot Line (..) in the years before World War II: ineffective and wasteful.
The costs in terms of civil liberties and money resulted in one confirmed case where somebody was caught thanks to NSA spying (and probably unfairly at that).

Like most technical people, I'm not impressed but very worried about the erosion of our civil rights, through the NSA spying and in other ways. And I am sure I share with others the impression that if only politicians and the general public knew more about the problem, we wouldn't make such bad decisions.

At the same time, I know I'm probably wrong about that. Like most people, I also care about global warming; health care; poverty; war; and the countless other things arguably Wrong With The World. And collectively, we know all there is to know about them.

Somebody, or a small cabal, must be causing this, then - an argument you often hear about many things gone wrong.

Is there a cabal? Let me invoke Hanlon's razor:
Never attribute to malice that which is adequately explained by stupidity.
Because I think it's the human condition that got us here, not malice of anybody in particular. We just, collectively AND individually, fail at making the right decisions.

So the question should be: what makes us so unreliable? So easy to lie to, especially in groups? Why do we believe conspiracy theories that often require us to believe far more fantastic things than the reality they try to disprove? How can fans kill people only in South Korea?

I'd like to dig into that a bit in this post, more of an essay than a blog, I suppose. The immediate reason is the mess surrounding the NSA (probably not news to most readers of my blog), where one-liners and the inherent complexity of the issue have ensured most people I talk to don't see the problem.

Reality is that the more you know, the harder it is to have a firm opinion. In reality, often conflicts are like Israel vs Palestine - if you pick a side, you're wrong. The complexity of real life issues makes it easy for governments and companies to play people - I feel an urge to point to Russia, but how do we know they are not right claiming Israel-backed Neo-Nazis used US supplied weapons to shoot down Flight 17? You can point out that historically, (neo)Nazis and Jews haven't gotten along very well. That's a fact. But so is the support of the EU for 'political reform' (overthrowing a legitimate, democratically elected government) in Ukraine. How valuable are facts and reason in a


The openSUSE Conference starts in The Hague in one month, and for those who are planning their presentations, we thought we would save you a little bit of time by providing you with a template.

Just download the template and POW! A magic chameleon.

For those who want to use their own template for the presentation, we kindly ask you to use the first and last slide in your presentation.

A list of presentations that have been accepted and confirmed can be found at https://events.opensuse.org/conference/osc15/schedule

Cheers and auf Wiedersehen in Den Haag.

31 March, 2015

Google is discontinuing its OpenID 2.0 service, which is responsible for authenticating you with your Google Account in Studio.

So what does this mean for you? First of all, you are still able to use your Google Account as usual to log on. Since Google OAuth 2.0 replaces OpenID 2.0 for authenticating your account, you might be prompted to re-consent so Studio can verify your Google Account. In your user account information page in Studio, an additional Google Account using OAuth 2.0 will show up after a successful logon. You may delete your old account, which is marked as deprecated.

Enjoy building!

Your Studio Team

Google OpenID 2.0 deprecated information: 


Greetings, green portion of the GNU/Linux web! It’s been a while since you last heard from us, so let’s say we give you something special this time around.

Wanna Play a Game?

All of us users of open and free operating systems tend to celebrate the flexibility of our systems and desktop environments, so we can tailor our systems to work best with our specific workflows. Often, we like to show off our desktops to the web vastness. So, how about we play a little game? How about we fill up the web with the best, most innovative desktops, and show off the power and flexibility of our geeko-powered mean machines?

At this point, I’m announcing the beginning of openSUSE’s monthly screenshot contest! Yes geekos, all of your online screenshot boasting will not be in vain. From the 1st of April onward, you’ll be able to enter the openSUSE screenshot contest through one of the official channels described in the rules, which you can read below. Basically, it will work like this: you post a screenshot of your über-pimped desktop to one of the official channels. After three weeks, we’ll gather all of the contestants and create a week-long poll which will decide the desktop-pimp of the month. The winner will then be announced on the news site, with their screenshot posted on it, and politely asked to send in their information so they can receive a small reward in their physical inbox. The reward for your openSUSE love spreading will be a choice between two openSUSE sticker packs. But that’s not all! At the end of the year, we will pick up the year’s monthly winners and put them up again for public scrutiny, and that way we’ll get our world screenshot champion of the year. The yearly winner shall also be rewarded accordingly. The yearly reward will be announced when the time comes and after we deliberate on how much we should beef it up for the screenshot world series. :)

So the rules are quite simple:

– On the 1st of every month, there will be an appropriate topic opened in the screenshot section of the forums.

– Also, there will be a topic or group opened on the openSUSE Connect site, where you can post your screenshots.

– Since it’s a geeko competition, we kindly ask you to proudly sport your openSUSE desktops.

– You will be able to post your screenshots for three full weeks (3 x 7 days).

– The remainder of the month will be left for a poll on the openSUSE Connect site. The poll will decide the winner. It is entirely possible for the monthly title to go multiple ways, if the voters decide so. The poll will close at midnight on the final day of the month, and you’ll already be able to compete the very next day for the next month’s title. Ideally, the winner will be announced on the 1st of next month, along

30 March, 2015


And once again, it’s time for another Limba blogpost :-)

Limba is a solution to install 3rd-party software on Linux, without interfering with the distribution’s native package manager. It can be useful to try out different software versions, use newer software on a stable OS release or simply to obtain software which does not yet exist for your distribution.

Limba works distribution-independent, so software authors only need to publish their software once for all Linux distributions.

I recently released version 0.4, with which all of the most important features you would expect from a software manager are complete. This includes installing & removing packages, GPG-signing of packages, package repositories, package updates etc. Using Limba is still a bit rough, but most things work pretty well already.

So, it’s time for another progress report. Since a FAQ-like list is easier to digest than a long blogpost, I’ll go with that format again. So, let’s address one important general question first:

How does Limba relate to the GNOME Sandboxing approach?

(If you don’t know about GNOME’s sandboxes, take a look at the GNOME Wiki – Alexander Larsson also blogged about it recently.)

First of all: there is no rivalry here and no NIH syndrome involved. Limba and GNOME’s sandboxes (XdgApp) are different concepts, which both have their place.

The main difference between the two projects is the handling of runtimes. A runtime is the shared libraries and other shared resources applications use. This includes libraries like GTK+/Qt5/SDL/libpulse etc. XdgApp applications have one big runtime they can use, built with OSTree. This runtime is static and will not change; it will only receive critical security updates. A runtime in XdgApp is provided by a vendor like GNOME as a compilation of multiple individual libraries.

Limba, on the other hand, generates runtimes on the target system on-the-fly out of several subcomponents with dependency-relations between them. Each component can be updated independently, as long as the dependencies are satisfied. The individual components are intended to be provided by the respective upstream projects.

Both projects have their individual upsides and downsides: while the static runtime of XdgApp makes testing simple, it is also harder to extend and more difficult to update. If something you need is not provided by the mega-runtime, you will have to provide it yourself (e.g. we will have some applications ship smaller shared libraries with their binaries, as they are not part of the big runtime).

Limba does not have this issue, but with its dynamic runtimes it instead relies on upstreams behaving nicely and not breaking ABIs in security updates, so that existing applications continue to work even with newer software components.

Obviously, I like the Limba approach more, since it is incredibly flexible and even allows mimicking the behaviour of GNOME’s XdgApp by using absolute dependencies on components.
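To make the dynamic-runtime idea more concrete, here is a minimal sketch of how a runtime could be assembled on the target system by resolving component dependencies. This is purely illustrative: the component names, version handling and resolver API are my own invention, not Limba’s actual code.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Component:
    """One installable subcomponent, e.g. a single shared library."""
    name: str
    version: str
    depends: tuple = ()

def build_runtime(app_deps, installed):
    """Collect the set of components forming an app's runtime."""
    by_name = {}
    for c in installed:
        # keep the newest installed version of each component
        # (naive string comparison, just for the sketch)
        if c.name not in by_name or c.version > by_name[c.name].version:
            by_name[c.name] = c
    runtime, stack = {}, list(app_deps)
    while stack:
        name = stack.pop()
        if name in runtime:
            continue
        comp = by_name[name]  # KeyError here means an unsatisfied dependency
        runtime[name] = comp
        stack.extend(comp.depends)
    return set(runtime.values())

installed = [
    Component("GTK+", "3.14", depends=("GLib",)),
    Component("GLib", "2.42"),
    Component("SDL", "2.0"),
]
rt = build_runtime(["GTK+"], installed)
print(sorted(c.name for c in rt))  # → ['GLib', 'GTK+']
```

Because each component is looked up independently, updating GLib alone would change the generated runtime on the next run, which is exactly the flexibility (and the ABI-stability burden) described above.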

Do you have an example of a Limba-distributed application?

Yes! I recently created a set of packages for Neverball – Alexander Larsson also created an XdgApp bundle.

29 March, 2015

Michal Hrušecký: OBS Screensaver

17:56 UTC


Some of you might know that I was, and in part still am, a Gentoo user as well. I always found something reassuring in watching a terminal with compilation going on. It is a nice sight: the compiler crunching all those sources and preparing something new for you. At one conference I even saw Gentoo guys showing a recording of a Gentoo installation – a lot of compilation in there. I really liked it and I thought that it would make a nice screensaver.

So how can I have such a nice experience in a binary distribution like openSUSE? All the packages are built by OBS and I get only binaries. There is no obvious way to heat up my apartment with my computer. But I can still get the nice, almost warm feeling of packages being compiled! The solution is pretty easy: I just configured xscreensaver to use my script and show me what OBS is working on! The outcome is that I have a screensaver that shows, in a cool way, the compilation output of whatever OBS is working on right now. I can still smell fresh packages being baked, but without heating up my CPU.

How to do it? Quite simple. You need the following simple script:

  #!/bin/bash
  # Cache recent OBS build logs in ~/.obs-saver so there is
  # something to replay while offline.
  mkdir -p ~/.obs-saver
  cd ~/.obs-saver

  while true; do
    # Scrape the OBS monitor page and pick one live build log URL at random.
    URL="$(curl --connect-timeout 2 'https://build.opensuse.org/monitor/old' 2> /dev/null | \
      sed -n 's|.*/package/live_build_log/\([^"]*\)/\([^/]*\)/\([^/]*\)/\([^/]*\)".*|https://build.opensuse.org/build/\1/\3/\4/\2/_log|p' | \
      sed -n "$(expr 5 + \( ${RANDOM} \* 10 / 32767 \)) p")"
    if [ "$URL" ]; then
      # Show the log while keeping a timestamped copy for later replay.
      curl --connect-timeout 2 "$URL" 2> /dev/null | tee "$(date +%s)"
      LAST_BL="$(ls -1 | tail -n 1)"
      if [ "$(wc -l "$LAST_BL" | sed 's|\ .*||')" -lt 5 ]; then
        # The fetched log was too short to be interesting; drop it
        # and replay a random cached one instead.
        rm "$LAST_BL"
        cat "$(ls -1 | sort -R | tail -n 1)" 2> /dev/null
      else
        # Keep only the ten most recent cached logs.
        rm -f $(ls -1 | head -n -10)
      fi
    else
      # Offline or no URL found: replay a random cached log.
      cat "$(ls -1 | sort -R | tail -n 1)" 2> /dev/null
    fi
  done

Save it as obs-saver in your ~/bin and make sure it is executable. Then, if you are using xscreensaver, select the “Phosphor” screen saver and in Settings -> Advanced use the following command line:

phosphor -root -scale 3 -ticks 5 -delay 2000 -program ~/bin/obs-saver

Now, if you are connected to the internet and you wait for the screensaver to kick in, it will randomly select one of the latest packages being built on OBS and start showing you its build log :-) I hope you will enjoy it as much as I do! The feeling of stuff being compiled without actually wasting that much electricity is great ;-)

27 March, 2015

The schedule for the upcoming OpenStack Summit 2015 in Vancouver is finally available. Sage and I submitted a presentation about "Storage security in a critical enterprise OpenStack environment". The submission was accepted and the talk is scheduled for Monday, May 18th, 15:40 - 16:20. 

There are also some other talks related to Ceph available:
Check out the links or the schedule for dates and times of the talks.

See you in Vancouver!


The conference date is right around the corner. Part of the schedule is out, and the organization is racing to have everything settled to host visitors and organize the best openSUSE conference ever.

You can still register to be part of the conference and also apply to give a presentation.


But did you miss the first call of the TSP? Are you still in a dilemma: to go to the conference or not? Do you contribute to openSUSE and want to join the awesome community, but money is an issue for you? Don’t worry, because the TSP is here to help you.


If you missed the first call, we will open the TSP tool to apply for sponsorship from today, March 27th, until April 2nd. The results will be announced on April 6th, and you have to accept the sponsorship by April 9th.

Send us a request using the TSP tool. Don’t hesitate to apply! The team will decide and help as many lizards as possible. So hurry up!



  • The organization is providing a sleepover at the venue. It’ll cost 50 Euros for all days.
  • We want to sponsor as many people as possible, so please check the best deal for the transportation.
  • Keep to the deadlines and follow the rules. We don’t want you to miss the deadlines, because then we might not be able to help you.
  • If you are a volunteer or speaker, please add this info to your application. It will help the TSP decide.
  • Do not forget to include a brief description of your request. It helps the TSP’s decision. It is also helpful if you include your contributions over the past 12 months; this is the first thing we check.
  • Fill in all of your personal info.
  • You need to register.


We hope to see you there!!!

26 March, 2015

Michael Meeks: 2015-03-26 Thursday

11:00 UTC

  • Mihai posted a nice blog with a small video of LibreOffice Online in action - hopefully we'll have a higher-resolution version that doesn't feature some bearded idiot next time.
  • Out to the Dentist for some drilling action.

25 March, 2015

Michal Čihař: Weblate 2.2

23:30 UTC


Weblate 2.2 has been released today. It comes with improved search, user interface cleanup and various other fixes.

Full list of changes for 2.2:

  • Performance improvements.
  • Fulltext search on location and comments fields.
  • New SVG/javascript based activity charts.
  • Support for Django 1.8.
  • Support for deleting comments.
  • Added own SVG badge.
  • Added support for Google Analytics.
  • Improved handling of translation file names.
  • Added support for monolingual JSON translations.
  • Record component locking in a history.
  • Support for editing source (template) language for monolingual translations.
  • Added basic support for Gerrit.

You can find more information about Weblate at http://weblate.org; the code is hosted on Github. If you are curious how it looks, you can try it out on the demo server. You can log in there with the demo account using the demo password, or register your own user.

Weblate is also running at https://hosted.weblate.org/ as an official translation service for phpMyAdmin, Gammu, Weblate itself and other projects.

If you run a free software project which would like to use Weblate, I'm happy to help you with the setup or even host Weblate for you.

Further development of Weblate would not be possible without people providing donations; thanks to everybody who has helped so far!

PS: The roadmap for next release is just being prepared, you can influence this by expressing support for individual issues either by comments or by providing bounty for them.

Filed under: English phpMyAdmin SUSE Weblate | 0 comments | Flattr this!


In the past days, several new free software projects have been added to Hosted Weblate. If you are interested in translating your project there, just follow the instructions on our website.

The new projects include:

PS: Added later during the week:

  • Boilr, cryptocurrency and bullion price alarms for Android
  • SwitchyOmega, a proxy manager and switcher for Chromium



Today we announced a collaboration between IceWarp and Collabora to start the creation of LibreOffice On-Line, a scalable, cloud-hostable, full-featured version of LibreOffice. My hope is that this has a huge and positive impact for the Free Software community, the business ecosystem, personal privacy, and more. Indeed, this is really one of the last big missing pieces that needs solving (alongside the Android version, which is well underway). But wait - this post is supposed to be technical; let's get back to the code.

A prototype - with promise

At the beginning of the LibreOffice project, I created (for our first Paris Conference) a prototype of LibreOffice On-Line using Alexander Larsson's (awesome) GTK+ Broadway - you can still see videos of that around the place. Great as the Broadway approach is (it provides essentially a simple Virtual Desktop model in your browser), the prototype taught us several important things which we plan to get right in LibreOffice On-Line:

  • Performance - the Broadway model has the advantage of presenting the full application UI; however, every time we want to do anything in the document - such as selecting, panning, or even blinking the cursor - we had to send new image fragments from the server: not ideal.
  • Memory consumption / Scalability - another side effect of this is that, no matter how unresponsive the user is (how many tabs are you long-term-not-looking-at in your browser right now?), it was necessary to have a full LibreOffice process running to stay responsive & store the document. That memory consumption naturally significantly limits the ability to handle many concurrent clients.
  • Scripting / web-like UI - it would have been possible to extend the GTK JavaScript to allow tunnelling bespoke commands through to LibreOffice to allow the wrapping of custom UI, but the work to provide the user interface expected on the web would still be significant.

Having said all this, Broadway was a great basis to prove the feasibility of the concept - and we re-use the underlying concepts, in particular the use of web sockets to provide the low-latency interactions we need. Broadway also worked surprisingly well from e.g. a nearby Amazon cloud datacentre. Similarly, having full-fidelity rendering is a very attractive proposition, independent of the fonts or setup of the client.

An improved approach

Caching document views

One of the key realisations behind LibreOffice On-Line is that much of document editing is not the modification itself; a rather large proportion of time is spent reading, reviewing, and browsing documents. Thus, by exposing the workings of document rendering as pixel squares (tiles) via LibreOfficeKit, we can cache large chunks of the document content both on the server and in the client's browser. As users read through a document, or re-visit it, there is no need to communicate with the server at all, or even (after an initial rendering run) to have a LibreOfficeKit instance around there either.
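As a rough illustration of that caching idea (a toy sketch, not LibreOffice On-Line's actual code - the tile key and the renderer stand-in are made up for the example), rendered tiles can be memoized by document, zoom and position, so repeated views cost nothing on the server:

```python
from functools import lru_cache

RENDER_CALLS = 0  # counts how often the "expensive" renderer actually runs

@lru_cache(maxsize=1024)
def get_tile(doc_id, zoom, x, y):
    """Return a rendered tile; stands in for a LibreOfficeKit render call."""
    global RENDER_CALLS
    RENDER_CALLS += 1
    return f"pixels for {doc_id} @ zoom {zoom}, tile ({x},{y})"

# A user pans over the same row of four tiles twice, e.g. re-reading a page:
for _ in range(2):
    for x in range(4):
        get_tile("report.odt", 1, x, 0)

print(RENDER_CALLS)  # → 4: eight tile requests, but only four renders
```

The second pass is served entirely from cache; invalidating the affected tiles on edit is the part a real implementation has to get right.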

Thus in this mode, the ability of the browser's Javascript to understand things about the document itself allows us to move much

Michael Meeks: 2015-03-25 Wednesday

21:00 UTC

  • Happy Document Freedom Day - great to see a Collabora partner write some helpful thoughts about it. Of course we have a nice banner / wrap - and a custom LibreOffice theme that looks like this for the event:
    LibreOffice with Document Freedom Day theme


So what’s the strongest program you can make with minimum effort and code size while keeping maximum clarity? Chess programmers have been exploring this for a long time, e.g. with Sunfish, and that inspired me to try out something similar for Go over a few evenings recently:


Unfortunately, while Chess rules are perhaps more complicated for humans, the game is much easier for computers to play! So the code is longer and more complicated than Sunfish, but hopefully it is still possible for a Computer Go newbie to understand it over a few hours. I will welcome any feedback and/or pull requests.

Contrary to other minimalistic UCT Go players, I wanted to create a program that actually plays reasonably. It can beat many beginners, and on 15×15 it fares about even with GNUGo; even on 19×19, it can win about 20% of its games against GNUGo on a beefier machine. Based on my observations, the limiting factor is time – Python is sloooow, and a faster language with the exact same algorithm should be able to speed this up at least 5x, which should mean at least a two-rank level-up. I also intend to leave the code as my legacy, not sure if I’ll ever get back to Pachi – these are the parts of a Computer Go program I consider most essential. The biggest code omission wrt. strength is probably the lack of 2-liberty semeai reading and more sophisticated self-atari detection.
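For readers new to Computer Go, the heart of any UCT engine is the bandit formula that decides which move to explore next. A hedged sketch (the move statistics below are invented, and this is not the program’s actual code, just the textbook UCB1 rule such engines build on):

```python
import math

def ucb1(wins, visits, parent_visits, c=1.4):
    """UCB1 score: exploitation (winrate) plus an exploration bonus
    that shrinks as a move accumulates visits."""
    return wins / visits + c * math.sqrt(math.log(parent_visits) / visits)

# (wins, visits) per candidate move - made-up numbers for illustration
stats = {"D4": (60, 100), "Q16": (30, 40), "K10": (1, 2)}
parent = sum(v for _, v in stats.values())

best = max(stats, key=lambda m: ucb1(*stats[m], parent))
print(best)  # → K10: barely visited, so its exploration bonus dominates
```

Run enough of these selections with random playouts at the leaves and the visit counts converge toward the strongest move, which is the whole trick behind minimalistic UCT players.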

P.S.: The 6k KGS estimate has been based on playtesting against GNUGo over 40-60 games – the winrate is about 50% with 4000 playouts/move. Best I can do… But you can connect the program itself to KGS too:


Peter Cannon: Tee Shirt

12:47 UTC


@angryearthling “If you can read this you’re not dick_turpin, he’s illiterate!”


Frank Karlitschek: Scaling

08:36 UTC


I’ve visited both FOSDEM and SCALE over the last weeks, where I spoke with dozens of people and gave talks about ownCloud 8. We’ve been getting a lot of positive feedback on the work we’re doing in ownCloud (thanks!) and that has been very motivating.

Does it scale?

A question which comes up frequently is: “What hardware should I run ownCloud on?” This sounds like a simple question, but if you give it a second thought, it is actually not so easy to answer. I had a small cubox at the booth as a demonstration that this is one way to run ownCloud. But development boards like the Raspberry Pi and the cubox might give the impression ownCloud is only suitable for very small installations – while in reality, the world’s largest on-premise sync and share deployment has brought ownCloud to 500,000 students! So, ownCloud scales, and that introduces the subject of this blog.

If you look up the term scalability on Wikipedia, you get the explanation that software scales well if it delivers twice the performance when you throw twice the hardware at the problem. This is called linear scalability, and it is rarely if ever achieved.
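In numbers (a toy illustration with made-up throughput figures, not ownCloud benchmark data): scaling efficiency is the speedup divided by the hardware factor, and linear scalability means that ratio is 1.0.

```python
def efficiency(throughput_one, throughput_n, n):
    """Fraction of ideal linear scaling achieved with n servers."""
    return (throughput_n / throughput_one) / n

print(efficiency(100, 200, 2))  # → 1.0 (double hardware, double throughput: linear)
print(efficiency(100, 160, 2))  # → 0.8 (the far more common sub-linear case)
```

Central components that all nodes contend on are what drag that number below 1.0 as deployments grow.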

The secret to scalability

ownCloud runs on small Raspberry Pis for your friends and family at home, but also on huge clusters of web servers where it can serve hundreds of thousands of users and petabytes of data. The current Raspberry Pi doesn’t deliver blazing fast performance, but it works, and the new Raspberry Pi 2 announced last month should be great hardware for small ownCloud deployments. Big deployments like the one in Germany or at CERN are usually ‘spread out’ over multiple servers, which brings us to the secret sauce that makes scalable software possible.

The secret to building great scalable software is to avoid central components that can become bottlenecks, and to use components that can easily be clustered by just adding more server nodes.

How ownCloud scales

The core ownCloud Server is written in PHP, which usually runs together with a web server like Apache or IIS on an operating system like Linux or Windows. There is zero communication needed between the application nodes, and the load can be distributed between different application servers by standard HTTP load balancers. This scales completely linearly: if you want to handle double the load because you have double the users, you can just double the number of application servers, making ownCloud perfectly scalable software.
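Because the application nodes share nothing, the balancer can hand any request to any node. A trivial round-robin sketch (the server names are placeholders, not a real ownCloud deployment):

```python
from itertools import cycle

# With stateless app servers, any node can serve any request,
# so a plain round-robin rotation spreads the load evenly.
servers = cycle(["app1", "app2", "app3"])
assignments = [next(servers) for _ in range(6)]
print(assignments)  # → ['app1', 'app2', 'app3', 'app1', 'app2', 'app3']
```

Adding a fourth node is just one more entry in the rotation, which is exactly why this tier scales linearly; the shared components discussed next are where it gets harder.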

Unfortunately an ownCloud deployment still depends on a few centralized components that have the potential to become bottlenecks to scalability. These components are typically the file system, database, load balancer and sometimes session management. Let’s talk about each of those and what can be done to address potential performance issues in scaling them.

File system scalability

The file system is where ownCloud has its data stored, and it is thus very important for performance. The good news is that file

24 March, 2015

Michael Meeks: 2015-03-24 Tuesday

21:00 UTC

  • Prep for Document Freedom Day tomorrow; chewed a lot of mail; misc. calls. Late customer call.


For those of you attending the openSUSE Conference in The Hague, we recommend these affordable lodging accommodations for your visit.


Van der Valk Hotel Den Haag – Nootdorp

The hotel is a short distance from Westvliet.

Double rooms with breakfast are 105 € per night
Single rooms with breakfast are 95 € per night


There is also a sleepover in one of the sport halls at the venue. For 12.50€, you get a bed, a blanket and access to a shower. You can reserve your place during our registration process. This offer is limited to 50 people, so it is on a first come, first served basis.
