Welcome to Planet openSUSE

This is a feed aggregator that collects what openSUSE contributors are writing in their respective blogs.

To have your blog added to this aggregator, please read the instructions.

06 February, 2016


Reviewing patches for the OpenStack CI infrastructure, there's one piece that often confuses contributors: the question of how the Zuul and Jenkins configurations work together.

While we have the Infra Manual with a whole page on how to create a project - and I advise everyone to read it - let me try to tackle the specific topic of adding new jobs from a different angle.

What we're discussing here are jobs, or tests, that are run. Jenkins actually runs these jobs. Zuul watches for changes in Gerrit (for OpenStack, at review.openstack.org) and triggers the appropriate jobs so that Jenkins runs them.

To understand the relationship between these two systems, let's use programming as an analogy: as a developer, you create a library of functions that perform a variety of actions, and you also write a script that uses this library to execute them. Jenkins can be considered the library of test functions. But just defining these is not enough; you have to call them. Zuul takes care of calling them, so in the analogy it is your script.

So, to actually get a job running for a repository, you first need to define it in the Jenkins "library", and then you trigger its invocation in Zuul. You can also add certain conditions to limit when the job runs or whether it is voting.

If you dig deeper into Jenkins and Zuul, keep in mind that these are two different programming languages, even if both use YAML as their format. Jenkins runs jobs, and these are defined as text files using Jenkins Job Builder. To define them, you can write a job directly, use a job-template and instantiate it, or group several job-templates into a job-group and instantiate that job-group to create many jobs with a few lines. Zuul uses these jobs and, as syntactic sugar, has templates to reuse jobs and the queues they run in.
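To make the Jenkins "library" side concrete, a job-template in Jenkins Job Builder looks roughly like the following sketch. The 'gate-{name}-docs' name matches the docs-job template discussed in this post; the shell builder command is an assumption for illustration only:

```yaml
# A parameterized job definition - a "library function" in the analogy.
# Instantiating it with name: amazing-repo produces gate-amazing-repo-docs.
- job-template:
    name: 'gate-{name}-docs'
    node: '{node}'
    builders:
      - shell: 'tox -e docs'   # build command is an assumption
```

A project then instantiates this template once per repository, which is what the projects.yaml entry shown later does.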

Let's look at a simple example: adding a new docs job to your repository, called amazing-repo:

  1. Check out the project-config repository and get it ready for patch submission, for example by creating a branch to work on.
  2. Since a template already exists for the docs job, you can reuse it. It is called 'gate-{name}-docs', so add it to your repository's entry in the file jenkins/jobs/projects.yaml:
    - project:
        name: amazing-repo
        node: bare-trusty
        jobs:
          - 'gate-{name}-docs'

  3. Now define how to trigger the job. Edit file zuul/layout.yaml and update your repository entry to add the job:

    - name: openstack/amazing-repo
      template:
        - name: merge-check
      check:
        - gate-amazing-repo-docs
      gate:
        - gate-amazing-repo-docs

    This adds the job to both the check and the gate queue. So, it will not only be run when a patch is initially submitted for review (the check queue) but also after a patch gets approved (the gate queue). Since the tree might be different when you submit a change and when it merges, we run jobs in both situations so that the tree is tested exactly as it merges.
  4. Let's go


The father of free software, Richard Stallman, will visit Barcelona this February 2016


Richard Stallman is constantly spreading the word about free software all over the world by giving talks, some technical, some not. But I think it is interesting to hear first-hand what this revolutionary figure of the computing world has to say.

And this coming 20 February 2016, Richard Stallman will stop in Barcelona to give a non-technical talk.

Richard Stallman will speak about the goals and philosophy of the Free Software movement, and the status and history of the GNU operating system, which, together with the Linux kernel, is currently used by tens of millions of people around the world.

This talk by Richard Stallman will not be technical and will be open to the public.

Venue: Facultat de Biblioteconomia i Documentació, Calle Melcior de Palau 140, 08014 Barcelona, Spain.

Info: http://www.fsf.org/events/rms-20160220-barcelona

I attended last year, when he visited Oviedo, and I can truly recommend it. Here is my write-up:

So if you have the chance, don't miss it. And bid in the auction of the little stuffed GNU at the end of the talk! :)


Devices at our booth
After rocking SCALE, FOSDEM was next, and it was a great event. Killing, too - two days with about 8,000 people; it was insane. Lots of positive people again, loads of stuff handed out - so much that we ran out on Sunday morning - and cool devices at the ownCloud booth.


When we still had stickers and Jan still liked me
We had quite a team at the booth, with Frank Karlitschek, Philippe Hemmel, Jan-C Borghardt, Lukas Reschke and myself. Lukas visited his first FOSDEM and, even though he started to complain a bit on Sunday about having had too many social interactions, he enjoyed it. Philippe was at his first ownCloud booth but has helped out at booths before, so that went entirely smoothly. And Jan - well, he's so popular that people were nice to me a few times thinking I was Jan. I had to disappoint them; Jan was often to be found in the Design devroom, where he gave a talk about how we do design at ownCloud (see also our earlier blog about the 6 ownCloud User Interaction Design Principles).

Lukas and cameras don't go together well
My experience was the usual FOSDEM rush, with so many people already there at 9:30 on Saturday (even though it is supposed to start at 10:00) that you barely have time to think, eat and drink, or walk around and talk to old friends. I already had a long day on Friday, as I went to a community statistics workshop by Bitergia, but I'd still be tired after FOSDEM even if I had a week to sleep in beforehand...


Frank pushes press away ;-)
We had lots of stuff at the booth: our usual stickers, flyers and some posters, as well as my laptop where people could see ownCloud and sign up to our newsletter (80 new readers, yay). We also had some very cool devices, 2 prototypes from our friends at Western Digital and a spreed.me box - stay tuned, as we have some cool news coming from there soon ;-)

Unfortunately, I hadn't brought enough stickers and flyers; we ran out on Sunday morning already, as Jan couldn't help but tell me over and over again. Yes, I brought over twice as many as last year, but I guess I didn't factor in ownCloud's growth in popularity... I'll double up again next year. Maybe triple.

It was great to talk to people about ownCloud and the devices, give them stickers and, in rare cases, explain what ownCloud is. Most people who walked by the booth already used ownCloud (yeah, techie crowd!) or are planning to; just one out of 10 had not heard of it. In general, my biggest regret at FOSDEM is that there are still people walking by whom we didn't manage to talk to. Perhaps more of those don't know the awesomeness that is ownCloud and were put off by the busyness at our booth.

Notes on experimenting with Ansible 2.x

Ansible 2.x has been out for a while now (12th Jan, 2016 News),
- but some things reportedly still need adjusting, and some people have reported performance issues
- my current environment has no urgent need for 2.x
- most official distribution packages are still at 1.9.x

Recently I wanted to spend some time experimenting with the VMware modules. Looking at the module list (http://docs.ansible.com/ansible/list_of_cloud_modules.html), almost all of them require 2.x or later.

So I figured I would use a Docker image to experiment with.

Versions of ansible in current official packages:
ubuntu - 1.9.x

So the idea is to use Docker to experiment with ansible 2.x.

For installing Docker on openSUSE, you can refer to the following.
Install the docker package with zypper:
> sudo   zypper   in   docker
root's password:

The following 2 NEW packages are going to be installed:
 bridge-utils docker

2 new packages to install.
Overall download size: 6.2 MiB. Already cached: 0 B. After the operation, an additional 22.9 MiB will be used.
Continue? [y/n/? shows all options] (y): y

Start docker and enable it at boot:
> sudo  systemctl  start   docker
> sudo  systemctl  enable  docker

Check docker's running status:
> sudo   systemctl   status  docker
docker.service - Docker Application Container Engine
  Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled)
  Active: active (running) since Sat 2016-02-06 17:53:53 CST; 1min 55s ago
    Docs: http://docs.docker.com
Next, experiment with ansible 2.x inside docker.
Since the ubuntu PPA already carries 2.x, I'll use that.

Pull ubuntu:14.04.3:
$ sudo  docker   pull  ubuntu:14.04.3

Check the docker images:
> sudo docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
ubuntu              14.04.3             6cc0fc2a5ee3        2 weeks ago         187.9 MB

Start a container:
$ sudo  docker  run  -i  -t  ubuntu:14.04.3  /bin/bash

Enter the container.
Upgrade to ansible 2.x (inside the container, following the official docs):
root@a5c571ed19c7:/# apt-get  -y   install   software-properties-common
root@a5c571ed19c7:/# apt-add-repository   -y   ppa:ansible/ansible
root@a5c571ed19c7:/# apt-get  -y   update
root@a5c571ed19c7:/# apt-get  -y   install   ansible

Check the ansible version:
root@a5c571ed19c7:/# dpkg  -l  | grep  -i   ansible
ii  ansible                             all          A radically simple IT automation platform

Installation done; exit the container:
root@a5c571ed19c7:/# exit

Create a new image:
> sudo  docker  commit  -m "Ansible 2.x with Ubuntu 14.04.3"  -a  "sakana" a5c571ed19c7 sakana/ansible2.x_ubuntu14043
  • -m is the commit message
  • -a is the author
  • a5c571ed19c7 is the container id from above
  • sakana/ansible2.x_ubuntu14043 is the image name

Check the newly created image:
> sudo  docker  images
REPOSITORY                      TAG                 IMAGE ID            CREATED              VIRTUAL SIZE
sakana/ansible2.x_ubuntu14043   latest              a12ef7074325        About a minute ago   300.6 MB
ubuntu                          14.04.3             6cc0fc2a5ee3        2 weeks ago          187.9 MB

Since this docker image may well be useful later, I uploaded it to Docker Hub too,
so it won't have to be rebuilt every time.
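As an alternative to committing an interactive container, the same image could be built reproducibly from a Dockerfile. This is a sketch that mirrors the apt commands run inside the container above; it is an assumption, not the method used in this post:

```dockerfile
# Ansible 2.x on Ubuntu 14.04.3 via the ansible PPA,
# reproducing the interactive installation steps above.
FROM ubuntu:14.04.3
RUN apt-get update && \
    apt-get -y install software-properties-common && \
    apt-add-repository -y ppa:ansible/ansible && \
    apt-get -y update && \
    apt-get -y install ansible
```

It would be built with something like `sudo docker build -t sakana/ansible2.x_ubuntu14043 .`, giving the same tag as the committed image.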

Log in to Docker Hub (create an account at https://hub.docker.com/ first):
$ sudo  docker  login
Username: your account name
Email: your e-mail address
WARNING: login credentials saved in /root/.docker/config.json
Login Succeeded

Push the newly created image to Docker Hub:
> sudo  docker   push   sakana/ansible2.x_ubuntu14043

Once the upload is finished, the image can be fetched directly with sudo docker pull sakana/ansible2.x_ubuntu14043.


~ enjoy it

05 February, 2016


openSUSE 13.1 has been around since November 2013. Since then, 13.2 and, just recently, openSUSE 42.1 have seen the light of the screen. The life cycle of openSUSE 13.1 would actually have ended back in May 2015. That 13.1 was nevertheless supplied with regular updates until 3 February 2016 is due to changes in openSUSE's development cycle, which brought along quite a few shifts and changes. Originally, openSUSE's development cycle was 8 months. By now, each of the last two versions has appeared only after 12 months. And because, back in 13.1's day, the life cycle was defined as the release of the version after next plus two months, that is how this calculation comes about.

The regular end of life for openSUSE 13.1 is not really the end for 13.1, though. It continues with the Evergreen team. It has been clear for a while that the Evergreen project has chosen openSUSE 13.1 for longer support and will supply it with updates until November 2016.

The Evergreen project consists of openSUSE members who have committed themselves to supplying selected openSUSE versions with updates beyond their official life cycle, thereby extending their lifespan.

To switch an openSUSE installation to the Evergreen project after the regular end of support, you previously had to change quite a bit in the repositories to get the updates. For 13.1, according to the English Evergreen project page (the German Evergreen project page is unfortunately not up to date), there is nothing to do - nothing needs to be changed by the user. That is great. The updates arrive in the perfectly normal repositories.

So, for one user or another, things continue with 13.1. But let's not kid ourselves: postponed is not abandoned. 😉 Sooner or later you will have to say goodbye to 13.1 and move to a newer version. I can only recommend 13.2 and 42.1.


http://www.pro-linux.de by Hans-Joachim Baader,




If you play the odd game on Linux or are looking for a powerful but small macro keyboard, the G13 from Logitech is an excellent choice. It is available for 74 euros on Amazon, or used for about 30 euros. Setup is extremely simple, since drivers for Linux already exist. To create the macros there is a GUI (in Java), which takes a lot of work off your hands.

Driver source: https://code.google.com/p/linux-g13-driver/
Alternatively: installation package from oYoX: Note: There is a file embedded within this post, please visit this post to download the file.


1. Download and unpack the package.

2. Install g++:

sudo zypper in gcc-c++   (installs: gcc48-c++ gcc-c++)

3. Install libusb-1_0-0 and libusb-1_0-devel:

sudo zypper in libusb-1_0-0 libusb-1_0-devel

3b. Alternatively, install via "YaST2 Software":


4. Change into the folder, open the file "Makefile" there, and swap the following line:

LIBS     = -lusb-1.0

# becomes

FLAGS = -L /lib64 
LIBS = -lusb-1.0 -l pthread

5. Change into the folder in a terminal and run "make" there.
This builds the driver.

6. Now the GUI can be started:


java -jar Linux-G13-GUI2.jar

7. Now it gets a bit tricky: the GUI writes the configuration into the home folder of the current user, since it is not started as root.
The G13 driver, however, must be started as root and therefore looks for the configuration in root's home folder.

Linux offers a wonderful solution here: symbolic links.
So we create a link in root's home directory pointing to the configuration folder of the current user:

sudo ln -s /home/USERNAME/.g13 /root/.g13
# (replace USERNAME with the currently logged-in user name)

8. After the GUI window has been closed again and the symbolic link has been created, the driver can be started:

sudo ./Linux-G13-Driver

9. From now on it is possible to start the GUI even while the driver is running and adjust key bindings.

The changes are applied immediately; however, after changing or adjusting keys and macros, the profile has to be reloaded on the G13. To do so, simply press another profile key (the 4 keys directly below the display) and then return to the current profile. This makes the G13 load all configurations.

A note on the oYoX package

I adapted the GUI slightly (background image) so that the key assignments are easier to read. The package also contains two launchers (bash scripts), which still have to be adapted to the respective user's path.





Dear Tumbleweed users and hackers,

Last week I slacked off and missed posting the review. I will cover both weeks now, covering the snapshots 0126, 0128, 0130 and the just-to-be-released 0203. As you can already see from the number of snapshots, we had a lower pace than we're used to - it used to be 4 snapshots in 1 week; now it's 4 in 2 weeks. The reason for that is mostly that OBS seems to have quite some trouble churning through packages. Despite there being a queue, a lot of workers were idle. On PPC64LE it even seems as if the queue is not getting any smaller at all.

So, what did those 4 snapshots bring us (or are just about to bring us)?

  • KDE Applications 15.12.1, as promised. It should be complete.
  • KDEPIM5 is now set as default in the patterns
  • Systemd 228 (in snapshot 0203). NOTE: we have seen some cases where the network interfaces change between predictable and persistent names, notably when updating from 13.2 to TW. Be aware of this when updating.
  • The CD Images can be written to USB again and are bootable. The defective driver has been fixed.
  • PulseAudio 8.0

Quite some changes… and more are currently living in stagings. The main points there being:

With all this exciting news, I wish you happy hacking over the weekend!

04 February, 2016


The openSUSE 13.1 distribution has reached its end of life. This version will no longer receive regular, security, or other updates, including bug fixes. The recommended path is to move to openSUSE Leap 42.1, or alternatively to migrate to openSUSE Tumbleweed.

openSUSE 13.1 has been moved to the Evergreen branch. This means that updates fixing critical bugs will come out from time to time; even so, it is recommended not to postpone the move to a newer version.

Judytka Antonín


SUSE Linux Enterprise 12 Service Pack 1 was released today. It contains lots of software updates and features. For more information have a look at the release notes of our Server and Desktop version.

SUSE Linux Enterprise 12 SP1 templates

SUSE Studio supports the new SUSE Linux Enterprise release from day one. Just click on the Create appliance link after you log in and select the template you'd like to start with.

SUSE Linux Enterprise Desktop 12 SP1 GNOME

Testdriving SUSE Linux Enterprise 12 SP1 GNOME desktop

As always you can configure, build, testdrive and publish your appliance.

New Packages
SUSE Linux Enterprise 12 SP1 now includes OpenJDK 8. SLE 12 shipped only version 7 of OpenJDK, so customers who needed a newer version of Java had no solution. But don't worry if you still need the OpenJDK 7 packages: OpenJDK 8 was added alongside OpenJDK 7, so customers who need the older version can still use it; both are handled by the alternatives mechanism.

The latest version of SUSE Linux Enterprise 12 also updates PostgreSQL to version 9.4 and brings the pg_upgrade feature, which simplifies and speeds up migration to a new PostgreSQL version. For detailed information, please have a look at our release notes.

Furthermore, the Python script interpreter was updated to version 2.7.9. Its main feature is an improved SSL module with better security checking of X.509 certificates used in SSL/TLS communication.

As usual you can upgrade previous SUSE Linux Enterprise versions to the new Service Pack 1. Just go to the start tab of your appliance and click the Upgrade button at the top bar.

Upgrade to SUSE Linux Enterprise 12 SP1 from SLES 12

Upgrade to SUSE Linux Enterprise 12 SP1

In case you are not satisfied with the upgrade you can also rollback to your old appliance version. Just click on Undo upgrade at the bottom of the appliance start tab.

Happy building :)


Yesterday, 3 February, was a singular day. With my son UngaBoy (also known as Cesar), at dawn we managed to observe the Moon visiting Saturn near the constellation of Scorpius. A magnificent sight, surrounded by birdsong at daybreak and also by the urgency of heading out toward the […]

03 February, 2016


Official support for openSUSE 13.1 has ended. As planned, the community Evergreen team is taking over and will provide fixes for a few more months. Updates will be added directly to the usual update repository - no need to add a separate repository!

Some stats on 13.1

In total, 1,242 updates were


On 3 February 2016 it was announced that openSUSE 13.1 will no longer receive official support from SUSE.


If you have openSUSE 13.1 installed on your machine, start thinking about installing a newer version - but take it easy, since the Evergreen team is now taking over support for this openSUSE release.

With the release of systemd for this version, the SUSE-sponsored maintenance team will stop supporting openSUSE 13.1. So official support ends.

But the Evergreen team, a community project dedicated to providing extended support for certain openSUSE releases, will keep supporting this version, extending it until November 2016.

On that date it will finally be left without support, and no new security updates will be available for that version, so you would have an obsolete operating system that would be better upgraded to a newer release with up-to-date software.

openSUSE 13.1 was released on 19 November 2013 (you could read the release scoop on this blog).

That means this version has had 26 months of support, providing security patches and bug-fix updates for the various packages it shipped.

Now the Evergreen team, made up of a small group of community users, will continue to provide that support, but only until November.

It has occasionally been discussed on the mailing lists whether, with openSUSE's current development model of releasing a new version every year, the Evergreen contribution is still necessary. Remember that openSUSE releases receive official maintenance for a period of 2 releases + 2 months. That is, with the current development model, that would be 2 years and 2 months, for a total of 26 months.

Before, with a release cycle of every 8 months, extending the support period was perhaps more necessary, but now I'm not sure it still is. What do you think?

It was one of the versions I installed on my PC, and although the installation was somewhat more complicated than other times, in daily use I was able to enjoy the stability I'm used to.

It was also the first version in which we got to enjoy a YaST "translated" to Ruby, something transparent to end users but which has helped this great openSUSE tool keep improving.

Long live openSUSE!! :)




I've been asked recently why ownCloud zips its files instead of tarring them. .tar preserves file permissions, for one, and with tar.gz or tar.bz2 you get compression too.

Good question. Let me start by noting that we actually have both: zip and tar.bz2. But why zip?

A long time ago and far, far away

In the beginning, we used tar.bz2. As ownCloud gained Windows Server support, we added zip. Once we dropped Windows support, we could have killed the zip files. But we had reasons not to: tar is, sadly, not perfect.

Issues with Tar

You see, tar isn't a single format or a 'real' standard. If you are on a platform other than plain, modern Linux, think BSD or Solaris, or the weird things you can find on NAS devices, tar files can get you into trouble. Unlike zip, tar files can also have issues with character-set support or deep folders. We've had situations where upgrades went wrong and, during debugging, we found that moving to zip solved the problem miraculously... And, as ownCloud, we're squarely focused on the practical user experience, so we keep zip alongside tar.bz2.

See also the GNU tar manual if you want to know more about the various tar formats and limitations.

Sadly, sometimes it is impossible to find one thing that works for everyone and in every situation.

Tarred turtle pic from wikimedia, Creative Commons license. Yes, that's a different tar, I know. But - save the turtles!


The campaign is over; the votes are counted and three members of the openSUSE community will lead the overall project on the openSUSE Board.

Tomáš Chvátal, Gertjan Lettink, and Bryan Lunduke take the helm with the existing board members of Michal Hrušecký, Kostas Koudaras and chairman Richard Brown.

The new board members each bring different experiences and cultural backgrounds, which will no doubt provide for an exchange of ideas in fulfilling their role of representing the community and the project. The new members are from the Czech Republic, Netherlands and United States, respectively.

Manu Gupta and Efstathios Iosifidis, who ran in this year’s elections, received several votes. Both are great members of the community and will have a chance to run in next year’s elections.

Many thanks to the departing board members Andrew Wafaa, Robert Schweikert and Bruno Friedmann; your efforts and time on the board are valued.

The project can not exist without its members and board, which represent the community. The board helps to resolve conflicts, facilitates decision-making processes when needed and communicates with the community and project stakeholders. And yes, as the picture above suggests, occasionally slaps each other in the face.

Those who make the decision to volunteer their time and efforts toward representing the project are greatly appreciated. Thank you to all future, past and present board members. Congratulations.


Another three-week period and another report from the YaST Team (if you don't know what we are talking about, see the highlights of sprint 13 and the presentation post). This was actually a very productive sprint, although, as usual, not all changes have such an obvious impact on final users, at least in the short term.

Redesign and refactoring of the user creation dialog

One of the most visible changes, at least during the installation process, is the revamped dialog for creating local users. There is a full, screenshot-packed description of the original problems (at the usability and code levels) and the implemented solution in the description of this pull request at GitHub.com.

Spoilers: the new dialog looks like the screenshot below and the openSUSE community now needs to decide the default behavior we want for Tumbleweed regarding password encryption methods. To take part in that discussion, read the mentioned description and reply to this thread in the openSUSE Factory mailing list.


Beyond the obvious changes for the final user, the implementation of the new dialogs resulted in a much cleaner and more tested code base, including a new reusable class to greatly streamline the development of new installation dialogs in the future.

One step further in the new libstorage: installation proposal

In the highlights of the previous sprint, we already explained that the YaST team is putting a lot of effort into rewriting the layer that accesses disks, partitions, volumes and all that. One important milestone in this rewrite is the ability to examine a hard disk with a complex partitioning schema (including MS Windows partitions, a Linux installation and so on) and propose the operations that need to be performed in order to install (open)SUSE. It's a more complex topic than it might look at first glance.

During this sprint we created a command line tool that can perform that task. It is still not part of the installation process, and it will take quite some time until it gets there, but it's already a nice showcase of the capabilities of the new library.


Fixed a crash when EULA download fails

If a download error occurred during the installation of any module or extension requiring an EULA, YaST simply exited after displaying a pop-up error, as you can see here.


Now the workflow goes back to the extension selection, to retry or to deselect the problematic extension. Just like this.


Continuous integration for Snapper and (the current) libstorage

For Snapper and libstorage, the Git "master" branch now builds continuously on ci.opensuse.org (S, L); a passing build is committed to the OBS development project (S, L) and, if the version number has changed, the package is submitted to Factory (S, L).

The same set-up works on ci.suse.de (S, L), committing to SUSE's internal OBS instance (S, L) and submitting to the future SLE12-SP2 (S, L).

This brings these two packages up to the level of automation that the YaST team uses for

02 February, 2016


Read this true story, rooted in the absurdity of restrictive intellectual-property laws and of students facing prison sentences for wanting to share knowledge.


Via the website of the Electronic Frontier Foundation (EFF), I read about the case of a Colombian student named Diego Gómez, who faces harsh prison sentences (4 to 8 years) for sharing knowledge over the internet.

Restrictive copyright laws, and the free-trade agreements Colombia signed with the US, mean that this student faces harsh prison sentences for sharing, with no profit motive, another student's thesis.

Whether you are a defender of knowledge, a student, a teacher, a university member, or simply a user who wants to fight for basic rights like public knowledge, I recommend reading Diego's story at this link:

The EFF has launched a campaign to give a voice and speak up in this case. You can take a look at this link:

Sign the petition and share it on social networks, or in the physical world too. At your university you can put up posters, or spread the word by e-mail among your professors and classmates.

I also believe that sharing is not a crime. Do you? Do you want knowledge to be restricted?



01 February, 2016

Just a small update on the Call for Speakers for the OpenStack Austin summit: the submission period was extended. The new deadline is February 2nd, 2016, 11:59 PM PST (February 3rd, 8:59 CET). You can find more information about the submission and speaker selection process here.


SUSE, on its world tour presenting its products in countries around the globe, will also stop in Spain.


SUSE will tour cities across the Americas, Europe, Africa and Russia, presenting its latest offerings based on open source solutions for servers, cloud computing, Docker, and much more, in what it calls the SUSE Expert Days.

And on that world tour it will land in Spain; specifically, this February, which starts today, it will be in Madrid on 16 February and in Barcelona on 18 February.

If you are interested in learning more about what SUSE, the most veteran company dedicated to Linux and open source, offers in terms of enterprise and server operating systems based on GNU/Linux, such as SUSE Linux Enterprise Server, and other solutions such as SUSE Manager, SUSE OpenStack Cloud and SUSE Enterprise Storage, don't miss the event.

What are the SUSE Expert Days? They are one-day events with technical talks and product demonstrations, offering technology professionals information about the tools SUSE develops, which can be useful in their day-to-day work in their data centers.

More details on the events, registration links, schedules, and venues can be found at the following links:

The Madrid event will last longer, since it will also include a demonstration of SUSE Enterprise Storage and SUSE Manager.

If these topics interest you, whether professionally or because you want to know what SUSE offers, book your seat now and attend the event! And if you go, come back here to share what you thought of the experience.



Now, it will actively prevent you from using it unless you enable cookies (with the excuse of European data protection laws). So I disabled cookies for google.com. Then Google worked... for a day... so it's now duckduckgo.com for me. Interesting how tricky it was to add that in Chromium, how well hidden the cookie settings are in Chromium, and how they change on you if you are not careful.



Juggling homework, my part-time job and the search for a full-time one sums up my life. As I try to balance it all, there are slumps when I wish I had someone to vent my frustrations to. But everyone is so busy that no one has time for anyone. Even if a friend has time to talk, can I be sure that they won't judge me for having this super awkward conversation which no one wants to have? I cannot be sure of that. I wished I had someone to talk to outside my peer group, ideally immediately, without scheduling an appointment and without being judged, who would provide a bit of support when I needed it most.

After talking to my peers, I found out that I was not alone. A major portion of undergraduate and graduate students alike report mental stress or related issues because of increasing workloads. As they combat this mental and emotional stress, they want an outlet to vent their feelings without being judged by their peers. Campus resources are inadequate and come with long waits: students have to wait for weeks to get an appointment with the campus psychological services. If the issues are not resolved during these wait times, they get exacerbated.

This was when I came across 7 Cups. 7 Cups is an on-demand emotional health and well-being service that connects people anonymously with trained listeners. By using this service, people can

  • talk to someone who does not know them
  • talk anonymously, in groups moderated by a listener, with people who have faced the same problem
  • talk about various topics, ranging from mental disorders to everyday subjects like college life
  • get some support while they are waiting to be counselled by a professional counselor


[Screenshots: 7 Cups one-on-one support chat and group support chat]
The ability to talk to people about various stress sources and mental disorders was incredibly useful. To validate my hypothesis that the application might be useful, I tried it a couple of times and spoke to a listener. I felt lighter and better after our conversation. So I spoke to various students around campus to see how they felt about handling stress. Of the 18 students I spoke to, at least 3 shared that having an outlet would be useful.

Students quoted

[…]the times when I can’t get over the hump on my own, I wish I had someone there to give me just the motivation I needed.

It would  be nice to talk to people, but maybe not here

“Sleep helps, talking to people helps, putting things in perspective also helps.”

Help yourself section

7 cups also provides a set of resources for its users when they do not want to talk to anyone but are looking for specific strategies or resources that can help them manage their problems on their own.

The Help Yourself section provides

  1. a set of exercises that the users can use


I wrote this program for the MITx: 6.00.1x Introduction to Computer Science and Programming Using Python course. This was one of the problem sets and it was a fun and easy program to write so I thought I would share my solution.

To start we need to create or download a wordlist file containing English words that the AI can choose from for hangman. Here is some sample content from the wordlist I used:

a i ad am an as at ax be by do em en ex go he hi ho if in is it me my no of oh on or ox pi re so to up us...

Once we have the wordlist we can setup a couple functions to load the data and then choose a word (this code was provided as part of the exercise).

import random
import string

WORDLIST_FILENAME = 'words.txt'  # plain-text file of space-separated words


def loadWordList():
    """
    Returns a list of valid words. Words are strings of lowercase letters.

    Depending on the size of the word list, this function may
    take a while to finish.
    """
    print 'Loading word list from file...'
    # inFile: file
    inFile = open(WORDLIST_FILENAME, 'r', 0)
    # line: string
    line = inFile.readline()
    # wordlist: list of strings
    wordlist = string.split(line)
    print '  ', len(wordlist), 'words loaded.'
    return wordlist

def chooseRandomWord(wordlist):
    """
    wordlist (list): list of words (strings)

    Returns a word from wordlist at random
    """
    return random.choice(wordlist)

# end of helper code
# -----------------------------------

Now let's define some more helper functions to abstract the tasks we need to carry out repeatedly. For starters, we should check whether the letter a user guesses is in the chosen word.

def checkGuess(guess, secretWord):
    """
    guess: char, a letter that the user guessed
    returns: boolean, True if letter is in the word, False if letter is not
    """
    if guess in secretWord:
        return True
    return False

The next function to implement shows how much of the word has been guessed. It returns a string that can be printed to show the user which parts of the word they have guessed so far, in a format like: _ e _ _ o.

def getGuessWord(secretWord, lettersGuessed):
    """
    secretWord: string, the word the user is guessing
    lettersGuessed: list, what letters have been guessed so far
    returns: string, comprised of letters and underscores that represents
      what letters in secretWord have been guessed so far.
    """
    guess = ''

    for i in range(len(secretWord)):
        if secretWord[i] in lettersGuessed:
            guess += secretWord[i]
        else:
            guess += '_ '

    return guess

After the user makes a guess, if they have neither guessed the word nor run out of guesses, the program needs to show them which letters are still available for guessing. This can be implemented as a function:

def getAvailLetters(lettersGuessed):
    """
    lettersGuessed: list, what letters have been guessed so far
    returns: string, comprised of letters that represents what letters have not
      yet been guessed.
    """
    lettersAvailable = ''
    for letter in 'abcdefghijklmnopqrstuvwxyz':
        if letter not in lettersGuessed:
            lettersAvailable += letter

    return lettersAvailable
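
To tie the helpers together, here is a minimal, non-interactive game-loop sketch. Note that playHangman, its guesses-as-a-list interface, and the default of 8 guesses are my own additions for illustration; the course version reads guesses interactively from the user.

```python
# A minimal, non-interactive game-loop sketch showing how the helpers fit
# together. playHangman and the guesses-as-a-list interface are illustrative
# additions, not the course's actual interface.

def checkGuess(guess, secretWord):
    # same behavior as the checkGuess helper above
    return guess in secretWord

def playHangman(secretWord, guesses, numGuesses=8):
    """Return True if the word is fully guessed before guesses run out."""
    lettersGuessed = []
    for guess in guesses:
        if numGuesses == 0:
            break  # out of guesses
        lettersGuessed.append(guess)
        if not checkGuess(guess, secretWord):
            numGuesses -= 1  # a wrong guess costs a life
        if all(letter in lettersGuessed for letter in secretWord):
            return True
    return False

print(playHangman('else', ['e', 'l', 's']))  # True: all letters found
```

An interactive version would replace the guesses list with a loop that reads one letter at a time and prints getGuessWord and getAvailLetters after each turn.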

31 January, 2016

Michael Meeks: 2016-01-31 Sunday.

19:05 UTCmember

  • Up early; breakfast meeting near the ULB. Sat in a corner writing slides at great length - a terrible use of good time at FOSDEM - when I should have been chatting to people. Frustrating to see them passing and have no time; made some progress at least with Kendy's help.
  • Gave my talk; slides as hybrid-PDF below:
    Slides of Scaling and Securing LibreOffice on-line: hybrid PDF

The next OpenStack Summit will take place in Austin, TX, US from April 25-29, 2016. The Call for Speakers period is still open and will close on February 1st, 2016, 11:59 PM PDT (February 2nd, 08:59 CEST). You can submit your presentations here.

The process for submitting a session proposal has changed a little since the last summit. You have to provide more information, such as "What should attendees expect to learn?", "What is the problem or use case you're addressing in this session?", and "Why should this session be selected?", as well as links to former presentations. Also, you can submit no more than 3 proposals. Personally, I have to say this is a real improvement, especially if this information also ends up in the voting process!

I am currently working on a proposal to speak about "From Hardware to Application in an OpenStack and Ceph NFV cloud". I hope we will see many interesting proposals for Ceph-related talks!

30 January, 2016

Michael Meeks: 2016-01-30 Saturday.

21:00 UTCmember

  • Up; mail chew, breakfast with Kendy; to the venue ... caught up with lots of old friends; exciting times. Announced our exciting partnership with Kolab - really looking forward to working closely together.
  • Out for LibreOffice dinner, not improved by transient dancer - but improved by good company. Back to the hotel late - up until 4:30 am or so working on my talk.

29 January, 2016

Michael Meeks: 2016-01-29 Friday.

21:00 UTCmember

  • Up; off to the hack-fest; lots of hack-fest'y work. Spent some considerable time successfully hacking at per-user memory usage for tilebench - with some considerable success. Discussed some awesome testing work Markus is doing - encouraging.
  • Out for a fine meal with Georg, Aaron & the Kolab guys. Checked in to the hotel, wandered the Delirium crush, back for some more slide'y action.

Jakub Steiner: Rio

12:49 UTCmember


Rio UX Design Hackfest from jimmac on Vimeo.

I was really pleased to see Endless, the little company with big plans, initiate a GNOME Design hackfest in Rio.

The ground team in Rio arranged visits to two locations where we met with the users Endless is targeting. While not strictly a user testing session, it helped us better understand the context of their product and get a glimpse of life in Rocinha, one of Rio's famous favelas, and in the more remote, rural Magé. I probably would not have had a chance to see Brazil that way otherwise.

Points of diversion

During the workshop at the Endless offices we went through many areas we identified as being problematic in both the stock GNOME and Endless OS and tried to identify if we could converge on and cooperate on a common solution. Currently Endless isn’t using the stock GNOME 3 for their devices. We aren’t focusing as much on the shell now, as there is a ton of work to be done in the app space, but there are a few areas in the shell we could revisit.

GNOME could do a little better in terms of discoverability. We investigated the role of the app picker versus the window switcher in the overview, and the option of entering the overview on boot. After some design choices were explained, our approach was reconsidered as a good way forward for Endless. The unified system menu, window controls, notifications, and lock screen/screen shield were analyzed as well.

Endless demoed how the GNOME app-provided system search has been used to great effect on their mostly offline devices. Think “offline google”.


Another noteworthy detail was the use of CRT screens. The new mini devices sport a cinch connection to old PAL/NTSC CRT TVs. Such small resolutions and poor image quality put extra constraints on the design to keep things legible. One nice side effect is that Endless has investigated some responsive layout solutions for GTK+, which they demoed.

I also presented the GNOME design team's workflow and the free software toolchain we use, and did a little demo of Inkscape for icon design and wireframing, and of Blender for motion design.

Last but not least, I’d like to thank the GNOME Foundation for making it possible for me to fly to Rio.

Rio Hackfest Photos

Hi ownCloud, KDE and openSUSE peeps!

We will soon be traveling to Brasil to visit family in various places (from Amazonia to Rio Grande do Sul). We'll land in Sao Paulo and stay there between February 9 and 11 - if you're a KDE, ownCloud or openSUSE contributor in that area and want me to try and bring some swag like flyers, stickers and posters for events, we could meet! Perhaps there's time for a lunch or dinner at some point.

Ping me, either here below in the comments or by sending me an email.

Videos from our last trips to Brasil:


Some time back I wrote a patch for KIWI that allows running openSUSE live entirely from RAM (tmpfs).

How do you use it?
Pass the "toram" parameter at the boot menu. Try it on Li-f-e.

Running the OS from RAM makes it a lot more responsive than running from a DVD or USB device. It is most useful, for example, on a demo computer where many users try the many applications installed in the live system. The USB stick or DVD can be ejected once the OS has loaded. It can also be used to load the OS into RAM directly from an ISO in a virtual machine.

You need enough RAM to copy the entire ISO into memory, plus some spare to operate the OS; Li-f-e, for instance, needs a minimum of 5 GB of RAM available. Booting also takes a bit longer, as the entire image is copied to RAM.
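
As a sketch of how the parameter is passed: on a GRUB-based live medium you can highlight the boot entry, press "e", and append "toram" to the kernel command line. The entry below is purely illustrative; the paths and other parameters will differ on the actual Li-f-e medium:

```
linux /boot/x86_64/loader/linux splash=silent toram
initrd /boot/x86_64/loader/initrd
```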

28 January, 2016

Michael Meeks: 2016-01-28 Thursday.

21:00 UTCmember

  • Up; breakfast with Norbert, quested through the town unsuccessfully for night/tooth-guards - pharmacies don't sell them, nor the sports-shop I tried; interesting - it's enough to make you grind your teeth.
  • Arrived; poked with Lionel at his nasty event ordering bug inconclusively; met lots of fun GNOME guys, lots of LibreOffice hackers. Meeting with Kendy & Miklos, worked through some mail. Out for a dinner nearby in the evening, and back - waffles.


Let's look at a very simple tutorial on how to copy SSH keys generated on one machine to another machine.


In this tutorial I won't go into detail about what SSH keys are; that's what Wikipedia is for, so I refer you there and to the links for further reading:

Roughly speaking, SSH keys are "keys" that grant us remote access to other machines, whether PCs, servers, or (as in my case) git repositories belonging to other people or communities. SSH keys therefore serve to identify your machine and thereby give you access to remote resources.

The keys (plural, since a pair is generated: a public key you can share and a private key you must guard carefully) are generated on your machine. You then share your public key with the administrator of the remote services you want to access; they will add it to the trusted keys, and from then on you will be able to access the remote service.

Once generated, a hidden directory called .ssh is created in your user's /home on your GNU/Linux machine (I have no idea about other operating systems), and the keys are stored there. My question arose because I had generated keys to access a remote git repository, and they asked for my public key in order to grant me access to that repository. But what happens if I want to work on that same repository from another machine?

There is no need to create another key pair and send them my new public key. After a bit of searching, I saw it was simple. I'm noting it down here on the blog in case it is useful to you as well. I found the solution at this link:

I had already tried simply copying the .ssh directory from one machine to the machine where I wanted to migrate my keys, but when I tried to access the repository with git, I got an error, something along the lines of my keys being too open to be trusted, all that and more, but in English.

So something else had to be done. While searching, I came across the link in question. It explains that copying and pasting the directory does indeed work, but since those files contain sensitive data, they should be readable by their owner yet not readable, writable, or executable by other users; ssh ignores the keys if this security problem exists.

So it was a matter of file permissions. Go to /home/your_user/.ssh and in that path run:

chmod 600 id_rsa

With this, the permission problem is solved and SSH will trust the copied keys again.
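
Putting the copy and the permission fix together, here is a minimal sketch. Note that fix_ssh_perms is a hypothetical helper name of my own, and it assumes the default id_rsa/id_rsa.pub key names:

```shell
# Restore the permissions ssh expects after copying a key directory.
# fix_ssh_perms is a hypothetical helper, not part of any tool.
fix_ssh_perms() {
    dir="$1"
    chmod 700 "$dir"             # directory: accessible by the owner only
    chmod 600 "$dir"/id_rsa      # private key: owner read/write only
    chmod 644 "$dir"/id_rsa.pub  # public key: safe to leave world-readable
}
```

On the new machine, after copying the directory over (for example with scp -r ~/.ssh user@newmachine:~/), you would run fix_ssh_perms ~/.ssh.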

Older blog entries ->