|Apple Watch on my right arm|
|Apple Watch on my left arm with crown swapped|
|Using the crown with my thumb!|
This is a feed aggregator that collects what openSUSE contributors are writing in their respective blogs.
To have your blog added to this aggregator, please read the instructions.
|Toma and me above the altar|
I was working on Question Answering last year. Guess what, I’m still on it!
I threw away my first prototype, BlanQA, and started building a second system, YodaQA. It currently performs reasonably well: it answers about a third of trivia questions correctly and lists the correct answer among its top five candidates for half of the questions – without doing any googling or binging.
In just a few days, Geekos will kick off the openSUSE Conference in The Hague, Netherlands.
There is much to be excited about for this year’s annual conference. Markus Feilner, a seasoned Linux expert, will open up the conference with a keynote speech in the morning on May 1.
Richard Brown will follow that with a presentation titled “Super Secret SUSE Project”; that’s a presentation you’re not going to want to miss.
Later that afternoon, Aaron Seigo will have his keynote speech, which is bound to be equally intriguing.
There will be four rooms used during the event, and in one of those rooms system administrators will have a chance to try out a new systems management toolkit for Linux from Project Machinery.
The four-day event can be viewed at https://events.opensuse.org/conference/osc15/schedule, but that is not the only schedule for oSC15.
The Kolab Summit will coincide on Saturday and Sunday with oSC15 and Kolab’s Chief Executive Officer Georg Greve will be providing the summit’s keynote speech on May 2.
The schedule for the summit can be viewed at https://conference.kolab.org/kolab-summit/program/schedule.
Having oSC15 together with the Kolab Summit provides a great opportunity for attendees to collaborate and strengthen their existing developer relationships.
There will be a social event on Saturday and Sunday, and a good source tells me there will be some openSUSE Beer at the event.
One of the perks of working at SUSE is Hackweek, an entire week you can dedicate to working on whatever project you want. Last week the 12th edition of Hackweek took place, so I decided to spend it solving one of the problems many users have when running an on-premise instance of a Docker registry.
The Docker registry works like a charm, but it’s hard to have full control over the images you push to it. Also, there’s no web interface that can provide a quick overview of the registry’s contents.
The first goal of Portus is to allow users to have better control over the contents of their private registries. It makes it possible to write policies like:
Portus listens to the notifications sent by the Docker registry and uses them to populate its own database.
Using this data, Portus lets you navigate through all the namespaces and the repositories that have been pushed to the registry.
We also worked on a client library that can be used to fetch extra information from the registry (e.g. repositories’ manifests) to extend Portus’ knowledge.
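To illustrate the idea (this is a sketch, not Portus code), the registry delivers notifications as a JSON envelope with an `events` array; a consumer can filter for push events and record which repositories changed. The `record_push` callback and the repository names below are made up for the example:

```python
import json

def handle_notification(body, record_push):
    """Parse a registry notification envelope and record push events.

    `record_push` is a hypothetical hook taking (repository, digest).
    """
    envelope = json.loads(body)
    for event in envelope.get("events", []):
        if event.get("action") == "push":
            target = event.get("target", {})
            record_push(target.get("repository"), target.get("digest"))

# Example payload, shaped like a registry v2 notification envelope:
payload = json.dumps({
    "events": [
        {"action": "push",
         "target": {"repository": "someuser/busybox",
                    "digest": "sha256:abcd"}},
        {"action": "pull",
         "target": {"repository": "someuser/busybox"}},
    ]
})

pushes = []
handle_notification(payload, lambda repo, digest: pushes.append((repo, digest)))
```

Only the push event ends up recorded; pulls are ignored, which is roughly how a database of pushed repositories can be kept in sync.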
Right now Portus has just the concept of users. When you sign up to Portus, a private namespace with your username is created. You are the only one with push and pull rights over it; nobody else will be able to mess with it. Also, pushing and pulling to the “global” namespace is currently not allowed.
The user interface is still a work in progress. Right now you can browse all the namespaces and the repositories available on your registry. However, users’ permissions are not yet taken into account while doing that.
If you want to play with Portus you can use the development environment managed by Vagrant. In the near future we are going to publish a Portus appliance and obviously a Docker image.
Please keep in mind that Portus is just the result of one week of work. A lot of things are missing but the foundations are solid.
Portus can be found in this repository on GitHub. Contributions (not only code, but also proposals, bug reports, …) are welcome!
You probably saw this phoronix article which references the log of the #dri-devel channel on freenode. This was an attempt to trash my work on lima and tamil by using my inability to get much code done, and my unwillingness to throw hackish tamil code over the hedge, against me. Let me take some time to explain some of that from my point of view.
OSEM is an event management web application, tailored to the needs of FOSS conferences. You can visit http://osem.io/ to find out more about it.
You can contribute too!
This guide is based on and tested with openSUSE 13.2, and it will help you get started with your development right away!
How to install OSEM
Step 1. Install Ruby & Bundler (Ruby version >= 2.1.2)
sudo zypper in ruby rubygem-bundler
Step 2. Install git to your system
sudo zypper in git
Step 3. Clone the repository locally to your machine
git clone https://github.com/openSUSE/osem/
Step 4. Install the basic packages; you will need them in the next steps in order for bundle install to work.
sudo zypper in make ruby-devel libxml2 libxml2-devel libxslt libxslt-devel libmysqlclient-devel libqt4-devel libQtWebKit-devel nodejs
Step 5. Move into the osem folder and install the necessary gems in your local project folder
bundle install --standalone
Step 5.1. Optional: You may need to configure nokogiri, so that bundle install succeeds
bundle config build.nokogiri --use-system-libraries
Step 6. You can also generate your secret keys for devise and the rails app with
bundle exec rake secret
Step 7. Copy the sample configuration files
cp config/config.yml.example config/config.yml
cp config/database.yml.example config/database.yml
cp config/secrets.yml.example config/secrets.yml
Step 8. Setup the database
bundle exec rake db:setup
Step 9. Start your rails server and run OSEM
bundle exec rails server
Step 10. And you are all set! Visit OSEM at http://localhost:3000
…and let the coding begin! The fun starts here!
Finally I want to thank Stella Rouzi for her help!
Have questions? Contact us!
By email: email@example.com
By IRC: irc://freenode.net/osem
Found a bug? Please open a new issue directly on GitHub.
GitHub issue tracking is the best, and fastest, way to ensure your bug will be properly reported and fixed.
Have ideas? Develop them and send us a Pull Request with your new feature!
Either way, JOIN US!
I might be the first person who started using ownCloud in Greece. I don't remember the version (I think it was 4.x.x, back in 2011-2012). My main contributions to the project are translation and promotion. Over the past years I have given many presentations around Greece; as you can see, my blog is full of tutorials. I also wrote documentation for openSUSE. Finally, I made a huge (in my opinion) contribution to the Greek translation.
Through the past few presentations and all the help I got from the community, I managed to engage more people to contribute to our community. When I went to continue the translation, I saw that it was at 100%.
Update April 2015: Reading it again years later, I regret the tone of this post. I was frustrated at the time and it comes across now as just smarmy. Still, I stand by the principal idea: that you should avoid Python’s daemon threads if you can.
The other day at work we encountered an unusual exception in our nightly pounder test run after landing some new code to expose some internal state via a monitoring API. The problem occurred on shutdown. The new monitoring code was trying to log some information, but was encountering an exception. Our logging code was built on top of Python’s logging module, and we thought perhaps that something was shutting down the logging system without us knowing. We ourselves never explicitly shut it down, since we wanted it to live until the process exited.
The monitoring was done inside a daemon thread. The Python docs say only:
A thread can be flagged as a “daemon thread”. The significance of this flag is that the entire Python program exits when only daemon threads are left.
Which sounds pretty good, right? This thread is just occasionally grabbing some data, and we don’t need to do anything special when the program shuts down. Yeah, I remember when I used to believe in things too.
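For context, creating such a thread is just a matter of setting the daemon flag before starting it. A minimal sketch (the monitoring loop is a stand-in for whatever our real code did):

```python
import threading
import time

def monitor():
    # Stand-in for the real monitoring work: wake up now and then
    # and grab some data.
    while True:
        time.sleep(0.1)

t = threading.Thread(target=monitor, name="monitoring")
t.daemon = True  # the interpreter will NOT wait for this thread on exit
t.start()
```

With the flag set, the program exits as soon as the main thread finishes, regardless of what the monitor is doing at that moment.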
Despite a global interpreter lock that prevents Python from being truly concurrent anyway, there is a very real possibility that daemon threads can still execute after the Python runtime has started its own tear-down process. One step of this process appears to be to set the values inside modules to None, meaning that any module-level name resolution results in an AttributeError when attempting to dereference it. Other variations on this cause a TypeError to be thrown.
The code which triggered this looked something like this, although with more abstraction layers which made hunting it down a little harder:
try:
    log.info("Some thread started!")
    try:
        do_something_every_so_often_in_a_loop_and_sleep()
    except somemodule.SomeException:
        pass
    else:
        pass
finally:
    log.info("Some thread exiting!")
The exception we were seeing was an AttributeError on the last line, the log.info() call. But that wasn’t even the original exception. It was actually another exception, caused by the somemodule.SomeException dereference, because all the modules had been reset.
Unfortunately the docs are completely devoid of this information, at least in the threading sections which you would actually reference. The best information I was able to find was this email to python-list a few years back, and a few other emails which don’t really put the issue front and center.
In the end the solution for us was simply to make them non-daemon threads, notice when the app is being shut down, and join them to the main thread. Another possibility for us was to catch AttributeError in our thread wrapper class – which is what the author of the aforementioned email does – but that seems like papering over a real bug and a real error. Because of …
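A rough sketch of the non-daemon approach, using a threading.Event as the shutdown signal (illustrative only, not our actual thread wrapper):

```python
import threading

stop = threading.Event()

def monitor():
    # Do periodic work until the main thread signals shutdown.
    while not stop.wait(timeout=0.05):
        pass  # grab and log some data here

t = threading.Thread(target=monitor)  # non-daemon: interpreter waits for it
t.start()

# ... application runs ...

# On shutdown: signal the worker, then join it to the main thread
# before the interpreter begins tearing down module state.
stop.set()
t.join()
```

Because the join happens before the interpreter exits, the thread never runs concurrently with module teardown, so the AttributeError simply cannot occur.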
I often see people asking how to contribute to an open source project on GitHub. Some are new programmers, some may be new to open source, and others aren’t programmers but want to make improvements to documentation or other parts of a project they use every day.
Using GitHub means you’ll need to use Git, and that means using the command-line. This post gives a gentle introduction using the git command-line tool and a companion tool for GitHub called hub.
The basic workflow for contributing to a project on GitHub is:
$ hub clone pydata/pandas
(equivalent to git clone https://github.com/pydata/pandas.git)
This clones the project from the server onto your local machine. When working in git you make changes to your local copy of the repository. Git has a concept of remotes, which are, well, remote copies of the repository. When you clone a new project, a remote called origin is automatically created that points to the repository you provide in the command line above. In this case, origin points to pydata/pandas on GitHub.
To upload your changes back to the main repository, you push to the remote. Between when you cloned and now, changes may have been made to the upstream remote repository. To get those changes, you pull from the remote.
At this point you will have a pandas directory on your machine. All of the remaining steps take place inside it, so change into it now:
$ cd pandas
The easiest way to do this is with hub:
$ hub fork
This does a couple of things. It creates a fork of pandas in your GitHub account, and it establishes a new remote in your local repository with the name of your GitHub username. In my case I now have two remotes: origin, which points to the main upstream repository; and joeshaw, which points to my forked repository. We’ll be pushing to joeshaw.
This creates a place to do your work in that is separate from the main code.
$ git checkout -b doc-work
doc-work is what I’m choosing to name this branch. You can name it
whatever you like. Hyphens are idiomatic.
Now make whatever changes you want for this project.
If you are creating new files, you will need to explicitly add them to the to-be-committed list (also called the index, or staging area):
$ git add file1.md file2.md etc
If you are just editing existing files, you can add them all in one batch:
$ git add -u …
This Hackweek was my first at SUSE’s Nuremberg office, and it was quite an experience. The whole place had a ‘buzz’ about it all week. Every day the company sponsored lunch or breakfast, which got everyone together and triggered many interesting conversations. Sometimes we were serenaded by the ‘SUSE Band‘, who were working on their musical abilities and technology for Hackweek.
I had lots of conversations about the upcoming openSUSE conference, some of SUSE’s planned projects around openSUSE, and of course openQA, which was a big part of many people’s Hackweek.
Bernhard Wiedemann worked on adding subtitles to openQA’s video recordings so users can see what openQA is doing when it produces the recorded screen output.
Bernhard, Stephan Kulow, and Klaus Kämpf looked at getting openQA to test bare metal hardware.
While openQA already has support for IPMI for bare-metal testing of servers, pushing the already established limits is exactly the spirit of Hackweek.
Bernhard started experimenting with the idea of using ARM development boards to emulate a keyboard and relay openQA’s keyboard commands. Next Hackweek he hopes to have an HDMI grabber so it will also be able to see the video output.
Klaus and Coolo were successful in getting openQA to control hardware with Intel vPro/AMT (as found in Thinkpads and other common laptop/desktop hardware). This demonstration video shows it working on my very own X220 Thinkpad.
And Xudong Zhang from Beijing and I worked on testing openQA by using openQA.
Actually, only a little crazy.
We made life easy for ourselves by creating two disk images (one for SLES 12 and another for openSUSE 13.2) to represent the same production environments we have for http://openqa.opensuse.org and the internal SUSE openQA instance.
These images were created following the regular openQA documentation and set up to test a ‘known good’ distribution with a static set of tests (openSUSE 13.2 with the tests and needles from our GitHub project).
We then configured openQA to treat this disk image as a different ‘distribution’, and we wrote new tests for testing openQA (See the source code HERE)
A little bit of ‘needling’ later (Capturing the reference screenshots and defining the areas of interest to openQA, which can be found HERE) we had a working test run which was able to test openQA running on openSUSE 13.2
and also for testing openQA on SLES 12
These tests successfully test all the core functionality of openQA, including Upgrading from the official OBS repository, Confirming the Worker is running, Scheduling an openQA job from the shell, Confirming a Job is running …
Tuning PIDs is one of those things you really don’t want to do but can’t avoid in the acrobatic quad space. Flying camera operators don’t usually have to deal with this, but the power/weight ratio is so varied in the world of acro flying that you’ll have a hard time avoiding it there. Having a multirotor “locked in” for doing fast spins is a must. Milliseconds count.
So what is PID tuning? The flight controller’s job is to maintain a certain position of the craft. It has sensors to tell it how the craft is angled and how it’s accelerating, and there are external forces acting on the quad: gravity, wind. Then there’s a human giving it RC orders to change its state. All this happens in a PID loop. The FC either wants to maintain its position or is given an updated position: that’s the target. The sensors give it the actual current state. Magic happens here, as the controller gives orders to individual ESCs to spin the motors so we get there. Then we look at what the sensors say again. Rinse and repeat.
The PID loop is actually a common process you can find in all sorts of computer controllers. Even something as simple as a thermostat does this: you have a temperature sensor and you drive a heater or an air conditioner to reach and maintain a target state.
The trick to solid control is to apply just the right amount of action to get to our target state. If there is a difference between where we are and where we want to be, we need to apply some force. If this difference is small, only a small force is required; if it’s big, a powerful force is needed. This is essentially what the P means: proportional. In most cases, as a controller, you are truly unhappy if you are anywhere other than where you were told to be. You want to correct this difference fast, so you provide a high proportional value/force. However, in the case of a miniquad, momentum will keep pulling you along once you’ve reached your target point and no longer apply any force. At this point the difference appears again and the controller will start correcting the craft, pulling it back in the opposite direction. This results in an unstable state, as the controller will bounce the quad back and forth, never reaching the target state of “not having to do anything”. The P is too big. So what you need is a value that’s high enough to correct the difference fast, but not so high that momentum gets you oscillating around the target.
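The loop described above looks roughly like a textbook PID controller. The gains and the toy one-dimensional “craft” below are invented for illustration and have nothing to do with real flight-controller code:

```python
class PID:
    """Minimal PID controller: output = Kp*e + Ki*integral(e) + Kd*de/dt."""

    def __init__(self, kp, ki=0.0, kd=0.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, target, actual, dt):
        error = target - actual
        self.integral += error * dt
        # No derivative on the very first sample.
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy example: drive a single value toward a setpoint with a P-only
# controller. Each step applies the controller output as a velocity.
pid = PID(kp=1.0)
state, dt = 0.0, 0.1
for _ in range(200):
    state += pid.update(target=1.0, actual=state, dt=dt) * dt
```

With this modest P gain the error shrinks geometrically toward zero; cranking kp much higher in a system with momentum is exactly what produces the oscillation described above.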
So if we’ve found our P value, why do we need to bother with anything else? Well, sadly, pushing air around with props is a complicated way to remain stationary. The difference between where you are and where you want to …
There was a recent discussion on the Extreme Programming mailing list kicked off by Ron Jeffries saying he wants his XP back.
The implication being that Extreme Programming is no longer practised, and that most “Agile” organisations are actually practising Flaccid Scrum – some agile process but little of the technical practices from Extreme Programming.
Update: Ron clarifies in the comments that we agree that Extreme Programming is still practised, but it would be good if it were practised by more teams.
I disagree with this premise. Extreme Programming is alive and well, at least here in London. We have XProlo, eXtreme Tuesday Club, XPDay and many other communities dedicated to XP practices under other names like Continuous Delivery and Software Craftsmanship. There are enough organisations practising Extreme Programming for us to organise regular developer exchanges to cross-pollinate ideas. Extreme Programming skills such as test-driven development and continuous integration are in high demand in job descriptions, even if there is much confusion about what these things actually entail.
When I say that Extreme Programming is alive and well, I do not mean we are working in exactly the same way as described in Kent Beck’s Extreme Programming Explained book. Rather, we still have the same values, and have continued to evolve our technical and team practices. Kent Beck says
“my goal in laying out the project style was to take everything I know to be valuable about software engineering and turn the dials to 10.”
Well now we have turned the dials up to eleven. What does modern Extreme Programming look like?
Here are some of the ways we are now more extreme than outlined in Extreme Programming explained.
Update: Apparently XP Teams are so aligned that Rachel has written a similar blog post, covering this in more detail.
XP Explained says “Write all production programs with two people sitting at one machine”. We’ve turned this to eleven by choosing how many people are appropriate for a task. We treat a pair as a minimum for production code, but often choose to work with the whole team around a single workstation.
Mobbing is great when the whole team needs to know how something will work, when you need to brainstorm and clarify ideas and refinements as you build. It also reduces the impact of interruptions as team-members can peel in and out of the mob as they like with minimal disruption, while a pair might be completely derailed by an interruption.
When pair programming, it’s encouraged to rotate partners regularly to ensure knowledge gets shared around the team and to keep things fresh. Mobbing obviates the need to rotate for knowledge sharing, and takes away the problem of fragmented knowledge that sometimes results from pair rotation.
In Extreme Programming explained Kent Beck explains that “XP shortens the release cycle”, but still talks about planning “releases once a Quarter”. It suggests we …
After Monday release of separate Gammu and python-gammu, the obvious task was to get the new package to distributions.
First I started with the Debian packages, which was quite easy: from a quite complex CMake + Python package it is now purely CMake, and it was mostly about removing stuff. Soon the updated Gammu package was uploaded to experimental. Once that was ready, I also updated the backports for Ubuntu, and these are available in the Gammu PPA. Creating the new python-gammu package was a bit harder, as this is the first Python 3 compatible package I've created, but it's now ready and sitting in the NEW queue.
While working on the python-gammu package, I realized that some of the data used in the testsuite were missing from the tarball. While not critical, this is definitely not nice, so I decided to release python-gammu 2.1 today. It also includes fixes for some corner cases found by Coverity.
For openSUSE the packaging was quite easy as well; stripping out unneeded parts of the Gammu package went smoothly, and it's now in the hardware project, with an SR to Factory pending. With python-gammu it turned out to be much harder, as the testsuite failed there with a strange error coming out of libdbi. After looking deeper into it, the problem is in a new return type available in the Git snapshot openSUSE is shipping. Fortunately producing a fix was quite easy, so the next Gammu upstream release will handle that properly, and the package in the hardware project is already patched. You can now use python-gammu from devel:languages:python, and an SR to Factory is pending as well.
I’ve now got enough time into the hobby to share some experiences that could perhaps help someone who is just starting.
I like cheap parts just like the next guy, but in the case of electronics, avoid them. The frame is one thing: get the ZMR250. Yes, it won’t be nearly as tough as the original Blackout, but it will do the job just fine for a few crashes. Rebuilding aside, you can get about four for the price of the original. Then the plates give. But electronics is a whole new category. If you buy cheap ESCs they will work fine – until they smoke mid-flight. They will claim to deal with 4S voltage fine – until you actually attach a 4S and blue smoke makes its appearance. Or you get a random motor/ESC sync issue. And for FPV, when a component dies mid-flight it’s the end of the story, whether it’s the drive (motor/ESC), the VTX, or a board cam.
No need to go straight to T-Motor, which usually means paying twice as much as for a comparable competitor. But avoid the really cheap sub-$10 motors like RCX, RCTimer (although they make some decent bigger motors), and generic Chinese eBay stuff. In the case of motors, paying $20 for a motor means it’s going to be balanced and the pain of vibration alleviated. Vibrations on minis don’t just ruin the footage due to rolling shutter; they actually mess up the IMU in the FC considerably. I like the SunnySky x2204s 2300kv for a 3S setup and the Cobra 2204 1960kv for a 4S. The rather cheap DYS 1806 also seem really well balanced.
Rate mode means giving up the auto-leveling of the flight controller and doing it yourself. I can’t imagine flying line of sight (LOS) in rate mode, but for first person view (FPV) there is no other way. The NAZE32 has a cool mode called HORI that allows you to do flips and rolls really easily, as it will rebalance for you, but flying HORI will never get you the floaty smoothness that makes you feel …
This week is Hackweek 12 at SUSE.
My hackweek project is improving GNOME password management, by investigating password manager integration in GNOME.
Currently I'm using LastPass, which is a cloud-based password management system.
It has a lot of very nice features, such as: