Welcome to Planet openSUSE

This is a feed aggregator that collects what openSUSE contributors are writing in their respective blogs.

To have your blog added to this aggregator, please read the instructions.


Saturday
14 January, 2017



Dear Tumbleweed users and hackers,

I hope you all ended up well fed and healthy in the new year. For the last few weeks we have seen quite a slow pace for Tumbleweed, just as pre-announced in my last review of the year 2016. We can surely expect an increased pace again as people from all around the world resume their regular life rhythms. For completeness' sake, this week's review will cover not only this week but also the few snapshots since my last review. That means we cover 8 snapshots: from 2016: 1216, 1217, 1219, 1222 and 1226, and from 2017: 0104, 0109 and 0110. Sadly, 0111 and 0112 ran into some issues on openQA – but the issues are for the most part in the testing framework, not the product (from what we know). Still, not being able to fully confirm that, I did not feel comfortable releasing them into the wild. After all, I know some of you are still having issues with the kernel 4.9 series (but good news on that front is on the horizon). 0112 might still make the cut, if we solve the openQA issues in time.

So, let’s see what those snapshots brought you:

  • Some preparations for systemd 232 to be able to land
  • Pidgin finally found new maintainers – so we have people actually caring for it again
  • KDE Applications 16.12.0
  • KDE Frameworks 5.29.0
  • Qt 5.7.1
  • Plasma 5.8.5
  • Linux Kernel 4.9.0

As mentioned earlier, Linux Kernel 4.9.0 caused reboot failures for a couple of users. These issues have since been analyzed, and a test kernel has been made available for the people following the bug report. The feedback so far looks very promising and we can surely expect a final fix rather soon.

There are also quite a few things piled up already, fully prepared or almost ready to be shipped:

  • X.org server 1.19.0 – will be in snapshot 0112+
  • Systemd 232 – all problems seem resolved, you can expect it shortly
  • Linux Kernel 4.9.3
  • Flatpak 0.8 – Upstream says this is the version Software Vendors should target
  • gstreamer 0.10 is scheduled for removal from Tumbleweed

With so many nice updates prepared, Tumbleweed promises to keep on rolling into a bright Year 2017.


Thursday
12 January, 2017


Michael Meeks: 2017-01-12 Thursday.

11:03 UTC

  • Mail chew; customer call. Amused to see a paper on The Appendix suggesting it is not a vestigial organ; less amusing that my Mother had her spleen removed some years back as another useless / vestigial organ before that too was found to be rather useful.
  • TDF Mac Mini arrived, and I started to set it up to build LibreOffice, hopefully will have some spare time to fix a Mac issue or two.

  • Reproducible font rendering for librsvg's tests

    The official test suite for SVG 1.1 consists of a bunch of SVG test files that use many of the features in the SVG specification. The test suite comes with reference PNGs: your SVG renderer is supposed to produce images that look like those PNGs.

    I've been adding test files from that test suite to librsvg as I convert things to Rust, and also when I refactor code that touches code for a particular kind of SVG element or filter.

    The SVG test suite is not a drop-in solution, however. The spec does not specify pixel-exact rendering. It doesn't mandate any specific kind of font rendering, either. The test suite is for eyeballing that tests render correctly, and each test has instructions on what to look for; it is not meant for automatic testing.

    The test files include text elements, and the font for those texts is specified in an interesting way. SVG supports referencing "SVG fonts": your image_with_text_in_it.svg can specify that it will reference my_svg_font.svg, and that file will have individual glyphs defined as normal SVG objects. "You draw an a with this path definition", etc.

    Librsvg doesn't support SVG fonts yet. (Patches appreciated!) As a provision for renderers which don't support SVG fonts, the test suite specifies fallbacks with well-known names like "sans-serif" and such.

    In the GNOME world, "sans-serif" resolves to whatever Fontconfig decides. Various things contribute to the way fonts are resolved:

    • The fonts that are installed on a particular machine.

    • The Fontconfig configuration that is on a particular machine: each distro may decide to resolve fonts in slightly different ways.

    • The user's personal ~/.fonts, and whether they are running gnome-settings-daemon and whether it monitors that directory for Fontconfig's perusal.

    • Phase of the moon, checksum of the clouds, polarity of the yak fields, etc.

    For silly reasons, librsvg's "make distcheck" doesn't work when run as a user; I need to run it as root. And as root, my personal ~/.fonts doesn't get picked up and also my particular font rendering configuration is different from the system's default (why? I have no idea — maybe I selected specific hinting/antialiasing at some point?).

    It has taken a few tries to get reproducible font rendering for librsvg's tests. Without reproducible rendering, the images that get rendered from the test suite may not match the reference images, depending on the font renderer's configuration and the available fonts.

    Currently librsvg does two things to get reproducible font rendering for the test suite:

    • We use a specific cairo_font_options_t on our PangoContext. These options specify what antialiasing, hinting, and hint metrics to use, so that the environment's or user's configuration does not affect rendering.

    • We create a specific FcConfig and a PangoFontMap for testing, with a single font file that we ship. This will cause any font description, no matter if it is "sans-serif" or whatever, to resolve to


Wednesday
11 January, 2017


Michael Meeks: 2017-01-11 Wednesday.

21:00 UTC

  • Mail chew, contract work, encouraging partner call. Took H. out to try to draw the moon at night (Astronomy GCSE) - immediate cloud cover: hmm.


There were plenty of Tumbleweed snapshots leading up to the holiday season and openSUSE’s rolling release is gliding into 2017 with several new packages on the horizon.

The last snapshot of 2016, 20161226, updated the Linux Kernel to 4.9, which was a good way to end the year. Several packages were updated in the snapshot including Python3-setuptools to version 31.0.0, gnome-online-accounts 3.22.3, NetworkManager 1.4.4 and yast2-network 3.2.17.

NetworkManager changed so that the order in which IP addresses are configured is now preserved, so the primary address is selected correctly. Yast2-network enabled the DHCP_HOSTNAME listbox only when the wicked service is used.

The biggest update in the first 2017 snapshot, 20170104, was the several KDE Plasma 5.8.5 packages that were updated. Samba was updated to version 4.5.3 and fixed CVE-2016-2123.

Mozilla Thunderbird’s update to version 45.6 fixed a couple security and memory bugs.

GnuTLS, the library offering an Application Programming Interface to access secure communication protocols, updated to version 3.5.7, fixed several bugs and set limits on the maximum number of alerts handled.

Also in the snapshot, Wireshark fixed User Interface bugs with an update to version 2.2.3, newbie-friendly text-editor nano updated to 2.7.3 and libvirt-python added new APIs and constants with the update to 2.5.0.

The 20170109 snapshot provided cleaned-up configuration settings for Mesa, so it can be uniform across all architectures except for the list of Direct Rendering Infrastructure and Gallium drivers. Btrfsprogs 4.9 was cleaned up as well and offers better handling of file system snapshots. Python3-setuptools updated to 32.3.1, which fixed regressions and compatibility issues from previous versions.


Tuesday
10 January, 2017


Michael Meeks: 2017-01-10 Tuesday.

21:00 UTC

  • Mail chew; contract re-working; Lunch. Weekly commercial team call, filed bugs.

Monday
09 January, 2017


Michael Meeks: 2017-01-09 Monday.

21:00 UTC

  • One to ones, projections, lunch, chat with Eric. Customer call. Board call. Dinner. Bought a lameish 2nd hand Mac Mini to fix egregious bugs with; thanks to TDF.

Sunday
08 January, 2017


Michael Meeks: 2017-01-08 Sunday.

21:00 UTC

  • Up lateish; off to NCC, Pizza for lunch; rested thoroughly.

Saturday
07 January, 2017


Michael Meeks: 2017-01-07 Saturday.

21:00 UTC

  • Up; mended things around the house variously - maintenance everywhere. Lunch, snoozed by the fire. Out to play at Sam & Paul's marriage blessing; on to see Julie's nice new flat. Back together for a meal. Stories, bed.


Weblate probably would not exist (or at least would be much harder to manage) without several services that help us to develop, improve and fix bugs in our code base.

Over time, the development world has become very reliant on cloud services. Like every change, this has two sides – you don't have to run the service, but you also don't have control over the service. Personally I'd prefer to use more free software services; on the other side, I really love this comfort and I'm too lazy to set up things which I can get for free.

The list was written down mostly to show how we work; the services are not listed in any particular order. All of them provide free offerings for free software projects or for limited usage.

GitHub

I guess there is not much to say here; it has become the standard place to develop software – it has Git repositories, an issue tracker, pull requests and several other features.

Travis CI

Running tests on every commit is something that will make you feel confident that you didn't break anything. Of course you still need to write the tests, but having them run automatically is a really great help. It's especially great for automatically checking pull requests.
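As an illustration of how little it takes to get started, a minimal configuration for a Python project might look like this (a hypothetical sketch, not Weblate's actual .travis.yml; the requirements file name is an assumption):

```yaml
# Hypothetical minimal .travis.yml: run the test suite
# on every commit and pull request, across Python versions.
language: python
python:
  - "2.7"
  - "3.5"
install:
  - pip install -r requirements.txt   # assumed project layout
script:
  - python -m pytest
```

Once this file is committed, Travis CI builds every push and every pull request automatically.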

AppVeyor

Continuous integration on Windows – it's still a widely used platform with its quirks, so it's a really good idea to test there as well. With AppVeyor you can do that, and it works pretty nicely.

Codecov

When running tests it's good to know how much of your code is covered by them. Codecov has one of the best interfaces I've seen for this. They are also able to merge coverage reports from multiple builds and platforms (for example, for wlc we have combined coverage for Linux, OS X and Windows coming from Travis CI and AppVeyor builds).

SauceLabs

Unit testing is good, but frontend testing in a browser is also important. We run Selenium tests in several browsers on SauceLabs to verify that we haven't screwed up something in the user interface.

Read the Docs

Documentation is necessary for every project, and having it built automatically is a nice bonus.

Landscape

Code analysis is a way to catch problems that are not spotted during testing. These can be code paths not covered by tests or simply coding style issues. There are several such services, but Landscape is my favorite one right now.

Filed under: Debian English phpMyAdmin SUSE Weblate | 0 comments


Friday
06 January, 2017


Michael Meeks: 2017-01-06 Friday.

21:00 UTC

  • Mail chew; poked at contracts, priorities, encouraged people to buy good LibreOffice'y things - caught up with Robert.

Thursday
05 January, 2017


Michael Meeks: 2017-01-05 Thursday.

21:00 UTC

  • Worked through mail. Really thrilled to see the legacy StarOffice file formats being worked on by Laurent Alonso in libstaroffice. Great to see the rather embarrassing gap left by StarOffice's old and unpleasant binary file formats start to get filled.
  • Plugged away at some rather tedious ESC / budget ranking bits to try to build consensus on what strategic things to push to the board. ESC call - good to see the guys, lots of good things going on.


I've rounded up the working patches from the public posts and created my own patch files. You can use my updated VMware module compile script to patch it as well; it also does a bit of cleanup. Grab the script and the patch files from here. Once downloaded, make sure they are all in the same directory and you have made the script executable. Then follow the rest of the steps below.

1) The directory should look like this:

# ls -al mkvm* *.patch
-rwxr-xr-x 1 cseader users 2965 Jan  4 21:11 mkvmwmods+patch.sh        
-rwxr-xr-x 1 cseader users 1457 Sep 26 15:47 mkvmwmods.sh
-rw-r--r-- 1 cseader users  650 Jan  4 19:16 vmmon-hostif.patch        
-rw-r--r-- 1 cseader users  650 Jan  4 21:21 vmnet-userif.patch
2) Execute with sudo or log in as root

# ./mkvmwmods+patch.sh                                                
It will immediately start the cleanup and then extract the VMware source. If the patch files are in the same directory, as shown above, it will patch the source for compiling against kernel 4.9.

3) Now Start VMware Workstation.

Enjoy!


Well, if you're like me and have been sick of the "Error: Failed to get gcc information." message for a while now when installing VMware Workstation on the major Linux distributions out there, then you'll likely want to automate the process of compiling it correctly and doing the rest of the tasks once the compile is complete.

Download my script here and run it each time your kernel changes, of course.

Let me know what your experience is with this, or if you would like to see some additions or adjustments.


Wednesday
04 January, 2017


Michael Meeks: 2017-01-04 Wednesday.

21:00 UTC

  • Mail chewage; customer contract bits. Booked travel for FOSDEM which will be awesome as ever.
  • Measured my Wife's rather tired (original) Galaxy S3 battery as I replaced it; somewhat concerningly it has swelled from 5.7mm to 7.8mm, about a third thicker than it used to be: exciting.
  • Up late filing tax for Julia.


You might wonder why there is such a high number of phpMyAdmin security announcements this year. This situation has two main reasons, and I will comment a bit on each.

First of all, we got quite a lot of attention from people doing security reviews this year. It all started with the audit funded by the Mozilla SOS Fund. It discovered a few minor issues, which were fixed in the 4.6.2 release. However, this was really just the beginning of the story, and the announcement attracted quite some attention to us. In the following weeks the security@phpmyadmin.net mailbox was full of reports, and we really struggled to handle that amount. Handling it actually led to a more formalized approach, as we were clearly no longer able to deal with reports by email only. Anyway, most of the work here was done by Emanuel Bronshtein, who is really looking at every piece of our code and giving useful tips to harden our code base and infrastructure.

The second thing that changed is that we now release security announcements for hardening fixes even when no practical attack may be possible. A typical example is PMASA-2016-61, where using hash_equals is definitely safer, but even if the timing attack were doable here, the practical result of figuring out admin-configured allow/deny rules is usually not critical. Many of the issues also cover quite rare setups (or server misconfigurations, which we've silently fixed in the past), like PMASA-2016-54, which was possibly caused by a server executing shell scripts shipped together with phpMyAdmin.
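The hash_equals point can be illustrated in Python, where the standard-library equivalent is hmac.compare_digest: a naive character-by-character comparison returns early on the first mismatch, leaking timing information about how much of a secret matched, while the constant-time variant does not.

```python
import hmac

def naive_equals(a, b):
    # Short-circuits on the first mismatch: comparison time
    # depends on how many leading characters match, which an
    # attacker can in principle measure.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def safe_equals(a, b):
    # Constant-time comparison: runtime does not depend on
    # where the inputs first differ.
    return hmac.compare_digest(a, b)
```

Both functions return the same booleans; only their timing behavior differs, which is exactly what such a hardening fix addresses.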

Overall, phpMyAdmin indeed got safer this year. I don't think there was any bug that was really critical; on the other side, we've done quite a lot of hardening and we use current best practices when dealing with sensitive data. At the same time, I'm pretty sure our code was not in worse shape than any similarly sized project with 18 years of history; we just became more visible thanks to the security audit, and people looked deeper into our code base.

Besides the security announcements, all this led to general hardening of our code and infrastructure, which might not be that visible but is important as well:

  • All our websites are served over HTTPS only
  • All our releases are PGP signed
  • We actively encourage users to verify the downloaded files
  • All new Git tags are PGP signed as well

Filed under: Debian English phpMyAdmin SUSE | 0 comments


Tuesday
03 January, 2017


Michael Meeks: 2017-01-03 Tuesday.

21:00 UTC

  • Back to work; team calls and structural shuffling. Chewed through masses of E-mail, synched with Miklos. A long series of bitty calls with a new customer.

Monday
02 January, 2017


Michael Meeks: 2017-01-02 Monday.

21:00 UTC

  • Worked in the morning long enough to discover that most of the team wanted another day off, but didn't file it. Abandoned ship to wander around Wicken Fen - with family jumping on the fen-land. Supposedly the top metre is 3/4's water - but then so am I.
  • Home to work through a task list of small things to do.


Say we have a robot with a USB connection and command documentation. The only thing missing is knowing how to send a command over USB. Let's learn the basic concepts needed for that.


Installing the Library

We'll use the pyusb Python library. On openSUSE we install it from the main RPM repository:

sudo zypper install python-usb

On other systems we can use the pip tool:

pip install --user pyusb

Navigating USB Concepts

To send a command, we need an Endpoint. To get to the endpoint we need to descend down the hierarchy of

  1. Device
  2. Configuration
  3. Interface
  4. Alternate setting
  5. Endpoint
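The whole descent can be condensed into one small helper (a sketch: with pyusb, the objects returned at each level support exactly this indexing, as the step-by-step code below shows):

```python
def descend_to_endpoint(device, interface=0, setting=0, endpoint=0):
    """Walk Device -> active Configuration -> (Interface, Alternate
    setting) -> Endpoint, using pyusb-style indexing."""
    configuration = device.get_active_configuration()
    iface = configuration[(interface, setting)]
    return iface[endpoint]
```

For a simple device like the EV3 below, all the indices are 0 except the endpoint.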

First we import the library.

#!/usr/bin/env python2

import usb.core

The device is identified with a vendor:product pair included in lsusb output.

Bus 002 Device 043: ID 0694:0005 Lego Group

VENDOR_LEGO = 0x0694
PRODUCT_EV3 = 5
device = usb.core.find(idVendor=VENDOR_LEGO, idProduct=PRODUCT_EV3)

A Device may have multiple Configurations, and only one can be active at a time. Most devices have only one. Supporting multiple Configurations is reportedly useful for offering more/less features when more/less power is available. EV3 has only one configuration.

configuration = device.get_active_configuration()

A physical Device may have multiple Interfaces active at a time. A typical example is a scanner-printer combo. An Interface may have multiple Alternate Settings. They are kind of like Configurations, but easier to switch. I don't quite understand this, but they say that if you need Isochronous Endpoints (read: audio or video), you must go to a non-primary Alternate Setting. Anyway, EV3 has only one Interface with one Setting.

INTERFACE_EV3 = 0
SETTING_EV3 = 0
interface = configuration[(INTERFACE_EV3, SETTING_EV3)]

An Interface will typically have multiple Endpoints. The Endpoint 0 is reserved for control functions by the USB standard so we need to use Endpoint 1 here.

The standard distinguishes between input and output endpoints, as well as four transfer types differing in latency and reliability. The nice thing is that the Python library allows us to abstract all that away (unlike cough Ruby cough), and we simply write to a non-control Endpoint.

ENDPOINT_EV3 = 1
endpoint = interface[ENDPOINT_EV3]

# make the robot beep
command = '\x0F\x00\x01\x00\x80\x00\x00\x94\x01\x81\x02\x82\xE8\x03\x82\xE8\x03'
endpoint.write(command)
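For the curious, that opaque string can be assembled field by field. The field meanings here (length header, message counter, DIRECT_COMMAND_NO_REPLY, variable header, opSOUND/TONE with volume, frequency and duration encoded as one- and two-byte parameters) are my reading of the EV3 direct-command protocol, so treat them as an assumption; the bytes do come out identical to the literal above. This sketch uses Python 3 bytes, unlike the Python 2 script above:

```python
import struct

def beep_command(counter=1, volume=2, frequency=1000, duration_ms=1000):
    """Build the EV3 'play tone' direct command byte by byte."""
    # body: message counter (u16 LE), command type 0x80
    # (direct command, no reply), variable header 0x0000
    body = struct.pack('<HBH', counter, 0x80, 0x0000)
    body += bytes([0x94, 0x01])                      # opSOUND, CMD TONE
    body += bytes([0x81, volume])                    # 0x81: one-byte parameter
    body += b'\x82' + struct.pack('<H', frequency)   # 0x82: two-byte parameter
    body += b'\x82' + struct.pack('<H', duration_ms)
    # the command starts with its own body length as u16 LE
    return struct.pack('<H', len(body)) + body
```

With the defaults this reproduces the 17-byte command string used above.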

Other than Robots?

Robots are great fun but unfortunately they do not come bundled with every computer. Do you know of a device that we could use for demonstration purposes? Everyone has a USB keyboard and mouse but I guess the OS will claim them for input and not let you play.

What Next

The Full Script


Sunday
01 January, 2017


Michael Meeks: 2017-01-01 Sunday.

21:00 UTC

  • Off to Isleham Baptist in the morning, N. to play at Emmas, home for a use-ups lunch; Bobbie, Tyson & Emma over for fixing 3D Goggles, silly games & tea.

Saturday
31 December, 2016



I was nominated to run for the openSUSE Board, and finally decided to run ;-)

 

I have been using openSUSE for years (actually it was still „SuSE Linux" with a lowercase „u" back then). I started annoying people in bugzilla, err, started betatesting in the 9.2 beta phase. Since then, I have reported more than 1200 bugs. Later, OBS ruined my bugzilla statistics by introducing the option to send an SR ;-)

More recently, I helped in fighting the wiki spam, which also means I've been admin on the English wiki since then, and had some fun[tm] with the current server admin. I'm one of the founding members of the Heroes team (thanks to Sarah for getting the right people together at oSC16!). Currently, I work on the base server setup (using salt) for our new infrastructure and on updating the wiki to an up-to-date MediaWiki version.

You can find me on several mailing lists and on IRC, and of course I still scare people in bugzilla. I'm also a regular visitor and speaker at the openSUSE Conference, and I visit other conferences as time permits.

Besides openSUSE, I work on AppArmor and PostfixAdmin – both upstream and as a packager. I'm also admin on several webservers (all running Leap).

My day job has nothing to do with computers. I produce something you can drink that is named after software we ship in openSUSE ;-)

Oh, and I collect funny quotes from various mailing lists, IRC, bugzilla etc. that then end up as random signatures under my mails, so be careful what you write ;-)

 

Issues I can see

  • You probably know „DRY“, so – see the next paragraph

 

Aims/Goals

  • speed! We have too many issues hanging around for too long, and that‘s annoying for people who suffer from them. Especially small things should (and can!) be solved quickly.

  • clear responsibilities! Part of the speed problem is that it‘s sometimes hard to find out who can fix something, and hunting down people takes time.

  • don't talk (too much) – do it! Sometimes we need to discuss things, but often just doing them works best. Obviously I can't do everything alone, so I want to encourage people to help wherever they can. „I don't have the knowledge to do this" doesn't count – for example, updating a wiki page or reporting a bug isn't hard ;-) and typically people really start to report bugs once they understand that this gives them the right to complain (quoting Pascal Bleser: „Always file a bug: if it's not in Bugzilla, then it's not there")

  • longer days! Maybe I should move to Bajor – I heard they have 26 hour days there, which would solve some of my time problems ;-))

 

Why should you vote for me?

  • I tend to kick people to ensure they work faster and fix things. This is your chance to kick me!

  • Help me to find out if I can get the thing in the (non-random) signature of this blog post done!

 

Things I


Michael Meeks: 2016-12-31 Saturday.

21:00 UTC

  • Tim, Suzie & Simon over to visit - really lovely to see them, meet Simon, talk Graphite & type-setting of texts - fun. Georgina, Adrian & Isabelle over for lunch as well. Slugged happily by the fire and played with Simon for the afternoon, Julie back for tea.

Friday
30 December, 2016


Michael Meeks: 2016-12-30 Friday.

21:00 UTC

  • Worked through a backlog of tasks; tried to use the iDealing UX again - simply an appalling experience - perhaps if you pay lots of money for ultra-live pricing data it is more usable; but in the absence of that - to even get an approximate price for the last week or so - you have to use google finance / random queries - leading to a simply hideous experience; urk.


Rocket Science
The calm days between Christmas and New Year are best celebrated with your family (of choice), so I went to Hamburg, where the 33rd edition of the Chaos Computer Congress opened its doors to 12,000 hackers, civil rights activists, makers and people interested in privacy and computer security. The motto of this congress is "works for me", which is meant as a critical nudge towards developers who stop once technology works for them, while it should work for everyone. A demand for a change in attitude.

33C3's ballroom

The congress is a huge gathering of people to share information, hack, talk and party, and the past days have been a blast. This congress strikes an excellent balance between high-quality talks, interesting hacks and electronics, and a laid-back atmosphere, almost around the clock. (Well, the official track stops around 2 a.m., but continues around half past eleven in the morning.) The schedule is really relaxed, which makes it possible to party at night and to interrupt dancing for a quick presentation about colonizing intergalactic space – done by domain experts.

The conference also has a large unconference part, hacking spaces, and lounge areas, meaning that the setup is somewhere in between a technology conference, a large hack-fest and a techno party. Everything is filled to the brim with electronics and decorated nicely, and after a few days, the outside world simply starts to fade and “congress” becomes the new reality.

No Love for the U.S. Gov

I’ve attended a bunch of sessions on civil rights and cyber warfare, as well as more technical things. One presentation that touched me in particular was the story of Lauri Love, who is accused of stealing data from agencies including Federal Reserve, Nasa and FBI. This talk was presented by a civil rights activist from the Courage foundation, and two hackers from Anonymous and Lulzsec. While Love is a UK citizen, the US is demanding extradition from the UK so they can prosecute him under US law (which is much stricter than the UK’s). This would create a precedent making it much easier for the US to essentially be able to prosecute citizens anywhere under US law.

What kind of technoparty^W congress is this?
This, combined with the US jail system, poses a serious threat to Love. He wouldn't be the first person driven to suicide by the pressure put on him by US government agencies, who really seem to be playing hardball here. Chelsea Manning, the whistleblower behind the videos of the Baghdad airstrikes in which the US Air Force carelessly killed innocent citizens, suffered from mental health issues and was put into solitary confinement instead of receiving health care. Against that background, the UK would send one of its own citizens into a jail system that doesn't even respect basic human rights. One particularly touching moment was when the brother of Aaron Swartz took the microphone and appealed to the people


Thursday
29 December, 2016


Michael Meeks: 2016-12-29 Thursday.

21:00 UTC

  • Played games in the morning; lunch, hung J's new curtains, Julie arrived back; bid a sad 'bye to the family. Nursed sick babes - watched The Truman Show - read stories, slugged by the fire, triaged mail briefly in the evening.

Wednesday
28 December, 2016


Michael Meeks: 2016-12-28 Wednesday.

21:00 UTC

  • H. and N. ill; out to Angelsey Abbey with the rest of the family - enjoyed the Water-mill in action (some 10kW of grinding power). Georgina deferred due to illness; enjoyed a buffet lunch together.
  • Snacked on it through the evening; watched The Princess Bride; stayed up with H. playing Family Row - and chatting happily.


Hi! I‘m Sarah Julia Kriesch, 29 years old, educated as a Computer Science Expert for System Integration, and currently studying Computer Science at the TH Nürnberg.

 

Introduction and Biography

I am a student at the TH Nürnberg, Student Officer for Computer Science (Fachschaft Informatik) and a working student (Admin/DevOps) at ownCloud. I changed from working life to student life this year. I received the scholarship „Aufstiegsstipendium" (roughly „upgrading scholarship") for students with work experience from the BMBF.

I have four years of work experience as a Linux System Administrator in Core System Administration (Monitoring) at 1&1 Internet AG/United Internet and as a (Managing) Linux Systems Engineer for MRM systems (SaaS) at BrandMaker. MRM systems are systems for project management in marketing (Marketing Resource Management).

I used SLES/openSUSE for the first time in 2009, during my German IT education. In the company I learned installations with YaST. I wanted to know more, which was the reason for going to conferences and expos. I educated myself (with community support and vocational school) until the end of my second year. oSC11 was when I first met the openSUSE Community. Marco Michna became my mentor in system administration and gave me private lessons until his death. I also got a scholarship for further education (a free Linux training, LPIC-1) from Heinlein. Both were a good base for starting the job after the vocational training.

I wasn't allowed to contribute to openSUSE during my last year of education, because my training company didn't want to see that: they searched Google for all contributions in forums and communities. That's the reason why I use the anonymous nickname „AdaLovelace" at openSUSE. I had to wait until my first job, where I worked together with contributors/members of Debian, FreeBSD and Fedora, to join openSUSE again.

I started with German translations at openSUSE after half a year of work experience. Most of you know me from oSCs (since 2011): I was a member of the Video Team and the Registration Desk and contributed as a speaker. Since 2013 I have been a wiki maintainer in the German wiki and an admin there. Since 2014 I have been an active Advocate in Germany. I give yearly presentations, organize booths and take part in different Open Source events. As a GUUG member (German Unix User Group) I asked for a sponsorship for oSC16, where I held my first (English) presentation, about Performance Monitoring.

This year I joined the Heroes Team and the Release Management Team. I founded the Heroes Team with my friends during oSC16 because of the spam in the wiki and became the Coordinator for this project. I am the Translation Coordinator now, too. I was responsible for


Monday
26 December, 2016



My private email domains are hosted on a Linux server where I have shell access (but not as root). It processes the mails with procmail, stores them locally and finally forwards them all to a professionally hosted email server with IMAP access and all that blinky stuff.
The setup is slightly convoluted (aka "historically grown") but works well for me.

But the last days have been quiet on the email front. Not even the notorious spammers spamming my message-ids (how intelligent!) have apparently been trying to contact me. Now that's suspicious, so I decided to look into it.

A quick test mail from my gmail account did not seem to come through. Now the old test via telnet to port 25... I had to look up the SMTP protocol; it's been a long time since I had to resort to this. First try: greylisting... come back later. Second try:

250 Ok: queued as F117E148DE4
Checking the mails on the server: it did not get through.

Now a few more words on the setup: as I wrote, all mail is forwarded to that professionally hosted IMAP server, where I usually read it with Thunderbird or, if things get bad, with the web frontend.
But since all emails are also stored on the server with shell access, I fetch them from there from time to time via IMAP-over-SSH, using fetchmail and the mailsync tool.

BTW, the fetchmail setup for such a thing is:
poll myacc via shellservername.tld with proto imap:
    plugin "ssh -C %h bin/imapd" auth ssh;
    user seife there is seife here options keep stripcr
    folders Mail/inbox Mail/s3e-spam Mail/thirdfolder
    mda "/usr/bin/procmail -f %F -d %T"
So while trying to check mail, I'm regularly running:
fetchmail && mailsync myacc
(first fetchmail, since it passes the mails to procmail, which does the same folder sorting as was already done on the mail server and is much faster than mailsync; mailsync comes second to do the synchronization: delete mails on the server that have been deleted locally, etc.)
All looks normal, apart from no new mails arriving.
Until suddenly I noticed that mailsync was synchronizing a folder named "spamassassin.lock". WTF?

Investigating... On the server, there really is an (empty) mailbox named "Mail/spamassassin.lock".
The next place to look is .procmailrc, and there it is, a rule like:

:0fw: spamassassin.lock
* < 1048576
| $HOME/perl/bin/spamassassin
And since everything in procmail is apparently relative to $MAILDIR by default, the lockfile was placed there. Probably a mailsync run came along at exactly the moment the lockfile existed and persisted it, and after that, no mail ever got past this point.
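The gotcha can be illustrated without procmail at all — a relative lockfile name like the one in the rule above simply resolves against the mail directory, right where a folder synchronizer will find it (a minimal sketch; the temp directory stands in for the real $MAILDIR):

```shell
# procmail resolves a relative lockfile name against $MAILDIR,
# so ":0fw: spamassassin.lock" effectively does this:
MAILDIR=$(mktemp -d)
cd "$MAILDIR"
touch spamassassin.lock

# The lock now sits among the mail folders, where a sync tool like
# mailsync will pick it up as a "mailbox" and re-create it on every run.
ls -1 "$MAILDIR"
```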

The solution was easy: remove the lockfile, make sure it does not get re-synchronized on the next mailsync run, and reconfigure procmail to use $HOME/spamassassin.lock instead. Now the silent times are over, spam is piling up again.
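The reconfigured rule could then look like this (a sketch; the spamassassin path is the one from the rule above — the only important change is the absolute lockfile path outside $MAILDIR):

```
:0fw: $HOME/spamassassin.lock
* < 1048576
| $HOME/perl/bin/spamassassin
```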

Saturday
24 December, 2016


face

Today I reinstalled and wiped my old moto g (falcon) phone.
After all was done, it would no longer boot into anything but recovery -- no matter which recovery I flashed. It was still possible to boot into fastboot mode (volume down + power button) and then select "normal system boot", but that's certainly not a good user experience on every power-on.
Additionally, the "charge battery while powered off" image no longer worked: plugging in power would also boot into recovery.

Some googling finally led me to an xda-developers forum post with the solution: there is a raw partition in the flash which apparently stores the default boot option for the boot loader, and just wiping this partition restores the default boot order.

So when booted into recovery (must have adb enabled), just run

adb shell \
  dd if=/dev/zero \
  of=/dev/block/platform/msm_sdcc.1/by-name/misc
from your computer (adb installed and USB cable connected, of course).
This should fix booting (it did for me).
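Before zeroing anything, it may be worth double-checking that the by-name path actually exists on your device — partition layouts differ between models, and msm_sdcc.1 is specific to this hardware. A quick sanity check from the connected computer (a sketch; output will vary per device):

```
# List the named partitions; "misc" should appear among them
# before you point dd at it.
adb shell ls -l /dev/block/platform/msm_sdcc.1/by-name/
```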
 

Thursday
22 December, 2016


face

It’s Christmas time and since (open)SUSE users have been nice, the YaST team brings some gifts for them. This is the result of the last development sprint of 2016.

As you may have noticed, in the latest sprints we have been focusing more and more on making SUSE CASP possible. That's even more obvious in this last sprint of the year. For those that have not been following this blog recently, it's probably worth remembering that SUSE CASP will be a Kubernetes-based Container as a Service Platform.

But our daily work goes beyond CASP, so let's take a look at all the highlights.

More improvements in the management of DHCLIENT_SET_HOSTNAME

In the previous report we presented the changes introduced in yast2-network to make the configuration parameter DHCLIENT_SET_HOSTNAME configurable on a per-interface basis.

One of the great things about working in an agile and iterative way, presenting and evaluating the results every three weeks, is that it allows us to detect room for improvement in our work. In this case we noticed some discrepancy between the expectations of Linuxrc and yast2-network, and also some room for improvement in the code documentation and in the help texts.

Thus, we used this sprint to refine the work done in the previous one and tackle those problems.
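In practice, a per-interface override of this kind lives in the interface's sysconfig file — a hedged sketch, assuming an interface named eth0 (the interface name and the other variables are just examples, not part of the sprint report):

```
# /etc/sysconfig/network/ifcfg-eth0
BOOTPROTO='dhcp'
STARTMODE='auto'
# Per-interface setting: do not let this interface's DHCP
# lease overwrite the system hostname.
DHCLIENT_SET_HOSTNAME='no'
```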

Improved error message

Ensure installation of needed packages

Another example of iterative development. We already presented in the report of the 26th development sprint a new mechanism to detect when the user had deselected during installation some package that was previously pre-selected by YaST in order to install the bootloader. Since the new functionality proved to work nicely, we decided to extend it to cover other parts of the system beyond the bootloader.

The software proposal now contains an error message including a list of missing packages or patterns, in case the user deselects some needed items.

Warning about missing packages

After clicking the Install button the installation is blocked; the user must resolve the problem either by selecting the packages again or by adjusting the respective YaST configuration (e.g. do not install any bootloader and disable the firewall).

Blocking an incomplete installation

Rethinking the expert partitioner

May we insist one more time on the topic of using Scrum to organize our work in an iterative way? 😉 As our usual readers should already know, we structure the work into minimal units that produce a valuable outcome, called PBIs in Scrum jargon. That valuable outcome doesn't always have to be a piece of software, an implemented feature or a fixed bug. Sometimes a document adds value to YaST, especially if it can be used as a basis for collaborating with people outside the team.

Our readers also know that we are putting a lot of effort into rewriting the whole storage layer of YaST. That also implies rewriting the most powerful tool known to humanity for defining partitions, volumes, RAIDs and similar stuff – the YaST expert partitioner.

It would be great if we could use the opportunity to make it

Older blog entries ->