Welcome to Planet openSUSE

This is a feed aggregator that collects what openSUSE contributors are writing in their respective blogs.

To have your blog added to this aggregator, please read the instructions.


Saturday
30 January, 2016


Michael Meeks: 2016-01-30 Saturday.

21:00 UTC

  • Up; mail chew, breakfast with Kendy; to the venue ... caught up with lots of old friends; exciting times. Announced our exciting partnership with Kolab - really looking forward to working closely together.
  • Out for LibreOffice dinner, not improved by transient dancer - but improved by good company. Back to the hotel late - up until 4:30 am or so working on my talk.

Friday
29 January, 2016


Michael Meeks: 2016-01-29 Friday.

21:00 UTC

  • Up; off to the hack-fest; lots of hack-fest'y work. Spent some considerable time hacking at per-user memory usage for tilebench - with some success. Discussed some awesome testing work Markus is doing - encouraging.
  • Out for a fine meal with Georg, Aaron & the Kolab guys. Checked in to the hotel, wandered the Delirium crush, back for some more slide'y action.

Jakub Steiner: Rio

12:49 UTC


Rio UX Design Hackfest from jimmac on Vimeo.

I was really pleased to see Endless, the little company with big plans, initiate a GNOME Design hackfest in Rio.

The ground team in Rio arranged a visit to two locations where we met with the users that Endless is targeting. While not strictly a user-testing session, it helped us better understand the context of their product and get a glimpse of life in Rocinha, one of Rio's famous favelas, and in the more remote, rural Magé. I probably wouldn't have had a chance to see Brazil this way otherwise.

Points of diversion

During the workshop at the Endless offices we went through many areas we identified as problematic in both stock GNOME and Endless OS, and tried to see whether we could converge and cooperate on a common solution. Currently Endless isn't using stock GNOME 3 for their devices. We aren't focusing as much on the shell now, as there is a ton of work to be done in the app space, but there are a few areas in the shell we could revisit.

GNOME could do a little better in terms of discoverability. We investigated the role of the app picker versus the window switcher in the overview, and entering the overview on boot. We explained some design choices, and Endless came to see our solution as a good way forward. The unified system menu, window controls, notifications, and the lock screen/screen shield were also analyzed.

Endless demoed how the GNOME app-provided system search has been used to great effect on their mostly offline devices. Think “offline google”.


Another noteworthy detail was the use of CRT screens. The new mini devices sport a cinch connection to old PAL/NTSC CRT TVs. Such small resolutions and poor image quality bring more constraints on the design to keep things legible. This has also had a nice side effect: Endless has investigated some responsive layout solutions for GTK+, which they demoed.

I also presented the GNOME design team's workflow and the free software toolchain we use, and did a little demo of Inkscape for icon design and wireframing and Blender for motion design.

Last but not least, I’d like to thank the GNOME Foundation for making it possible for me to fly to Rio.

Rio Hackfest Photos


Hi ownCloud, KDE and openSUSE peeps!

We will soon be traveling to Brasil to visit family in various places (from Amazonia to Rio Grande do Sul). We'll land in Sao Paulo and stay there between February 9 and 11 - if you're a KDE, ownCloud or openSUSE contributor in that area and want me to try and bring some swag like flyers, stickers and posters for events, we could meet! Perhaps there's time for a lunch or dinner at some point.

Ping me, either here below in the comments or by sending me an email.

Videos from our last trips to Brasil:





Some time back I wrote a patch to KIWI that allows running openSUSE live entirely from RAM (tmpfs).

How to use it?
Pass the “toram” parameter at the boot menu. Try it on Li-f-e.

Benefits:
Running the OS from RAM makes it a lot more responsive than running from a DVD or USB device; it is most useful, for example, for a demo computer where many users try lots of the applications installed in the live system. The USB stick or DVD can be ejected once the OS is loaded. It can also be used to load the OS into RAM directly from the ISO in a virtual machine.

Caveat:
You need enough RAM to copy the entire ISO into memory plus some spare to run the OS; Li-f-e, for instance, would need a minimum of 5 GB of RAM available. It also takes a bit longer to boot, as the entire image is copied to RAM.
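
For reference, a rough sketch of what appending the parameter by hand can look like (press 'e' in GRUB or Tab in syslinux on the live entry); the kernel path and the other options here are only illustrative, the part that matters is the trailing toram:

# Edited boot entry (illustrative path and options); note the added "toram"
linux /boot/x86_64/loader/linux splash=silent quiet toram
initrd /boot/x86_64/loader/initrd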


Thursday
28 January, 2016


Michael Meeks: 2016-01-28 Thursday.

21:00 UTC

  • Up; breakfast with Norbert, quested through the town unsuccessfully for night/tooth-guards - pharmacies don't sell them, nor did the sports-shop I tried; interesting - it's enough to make you grind your teeth.
  • Arrived; poked with Lionel at his nasty event ordering bug inconclusively; met lots of fun GNOME guys, lots of LibreOffice hackers. Meeting with Kendy & Miklos, worked through some mail. Out for a dinner nearby in the evening, and back - waffles.


Let's look at a very simple tutorial on how to copy SSH keys generated on one machine over to another.


In this tutorial I won't go into detail about what SSH keys are; that's what Wikipedia is for, so I refer you there and to the links to learn more about them:

Roughly speaking, SSH keys are "keys" that give us remote access to other machines, whether PCs, servers, or (as in my case) git repositories belonging to other people or communities. SSH keys therefore serve to identify your machine and thus grant access to remote resources.

The keys (plural, since a pair is generated: a public one you can share, and a private one you must guard carefully) are generated on your machine. You then share your public key with the administrator of the remote services you want to reach, who adds it to the trusted keys, so that every time you connect to the remote service you are granted access.

Once generated, on your GNU/Linux machine (I have no idea about other operating systems) a hidden directory called .ssh is created in your user's /home, and the keys are stored there. My question arose because I had generated keys to access a remote git repository and was asked for my public key so I could be granted access rights, but what happens if I want to work on that same repository from another machine?

There is no need to create another key pair and send them my new public key again. After a bit of searching, I saw it was simple. I'm noting it down here on the blog in case it is useful to you too. I found the solution at this link:

I had already tried simply copying that .ssh directory from one machine to the machine where I wanted to migrate my keys, but when I tried to access the repository with git I got an error, something along the lines of my keys being too open (too public) to be trusted, all that and more, but in English.

So something more had to be done, and that is when my search turned up the link in question. It explains that copying and pasting the directory is indeed enough, but since those files contain sensitive data they should be readable by their owner while not readable, writable, or executable by other users; ssh ignores the keys if this security problem exists.

So it was a matter of file permissions. You go to /home/your_user/.ssh and in that path run:

chmod 600 id_rsa

With this
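
A minimal sketch of the whole migration described above, assuming the default id_rsa/id_rsa.pub pair and a hypothetical destination host called new_machine:

# Copy the existing key pair to the new machine (host name is hypothetical)
scp -r ~/.ssh user@new_machine:~/

# On the new machine, tighten the permissions so ssh accepts the keys
chmod 700 ~/.ssh
chmod 600 ~/.ssh/id_rsa
chmod 644 ~/.ssh/id_rsa.pub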


After covering openSUSE and KDE booths at SCALE in my previous blog, let's talk ownCloud. Note that, despite the awesomeness of this blog post, our biggest news right now is probably the announcement that ownCloud has an estimated 8 million users!

Our booth

So SCALE14x had an ownCloud booth staffed by the Dynamic Duo of Matt McGraw and yours truly. We had the usual flyers, posters and stickers, but Matt had also brought a big monitor and Mountain Dew. In case you don't know the drink, it is important to know that it is nowhere near as natural as the name suggests.

The Story of the Mountain Dew

The plan with the drinks was to hand them out to people who would mention Chris' hair (the Linux Action Show host) - Matt had told people to come by our booth and ask about it to get a drink. Sadly, nobody showed up, either out of fear of Mountain Dew (my bet) or because there were few or no Linux Action Show viewers at SCALE14x... The idea is brilliant, though, and I think we should try again next year. Perhaps with a drink that isn't fluorescent green, or by making sure Chris mentions it in the Linux Action Show itself?

Latest prototype of the ownCloud WD Pi Drive

(and seriously, I had a few Mountain Dews, nothing wrong with carbonated sugar drinks if you ask me)

Western Digital Pi Drive Kits

The monitor had another purpose: demo ownCloud, of course. That turned out real cool: upon arrival at my hotel, I had received a package with the latest prototypes of our Pi Drive kits sent by Western Digital! The casings have a cool ownCloud logo on them, and there was a custom, 3D-printed cover to close the thing off on the top, looking real slick with an ownCloud logo cut-out.

Anyhow, we assembled one Pi kit, put ownCloud on it (duh) and ran it from the screen so we could demo ownCloud. The other kit we kept in a half-assembled state for people to check out. We had a *lot* of people who were interested, and we certainly sold many of the existing Pi Drive kits (you can already get them, without ownCloud though, from the WD store), while many others will wait for us to release the PiDrive with ownCloud. Maybe I'm very optimistic here, but the excitement was so great I have the feeling we'll sell those 500 in no time.

On a related note, the Western Digital team working on the Pi Drive/ownCloud project came by the booth for a chat, too. It was great to meet them and shake hands in real life!

Matt explains what this 'ownCloud' thing is

Booth visitors

So we talked to people at the booth. I must've talked to about 50.000 people, my throat is sore (and you all know I have plenty of experience talking as I usually can't stop - so this is saying something). Some highlights from

Last weekend was SCALE and I had a lot of fun. Thought a report on the KDE/openSUSE presence would be good!


An impression from the trip - as in, Oslo->Los Angeles.


The event started with talks and I even managed to join the keynote by Cory Doctorow before heading to the booth!

openSUSE

If you didn't know it yet, now you do: both the KDE and GNOME booths are organized by openSUSE, and more precisely by Booth Master Drew Adams. His energy makes openSUSE, I think, the most active community booth at FOSDEM, with about a dozen volunteers (!!!). He keeps bringing in new people - amazing, really. The corner booth worked out great, though they had wanted to try and move the openSUSE booth to be next to the SUSE one (which was sandwiched between Mageia and the FSF). Thinking about it now - it would have made SUSE pale in comparison...

Such a great presence at an event has a real impact in many ways, from introducing people to openSUSE and giving users a chance to chat about it, to showing people that the Geeko matters and what it is up to. Plus, it's a great time for the team, too. So, many awesome points for Drew, really. Next time you see him: give him a hug!


The openSUSE booth, where visitors were taught about the Ways of the Geeko: Tumbling and Leaping!

KDE

I asked for volunteers in a blog post some weeks ago and the good news is that people stepped up! Scarlett, who's working on becoming a Debian maintainer, as well as her Debian sponsor Diane, both stepped up to help out. The backbone of the booth this year was Barrington Daltrey, who hadn't represented KDE at SCALE for some years but decided to get back in the game. A massive thanks to all three volunteers! I'm hoping that next year Bert and Linda Yerke (who couldn't make it this year) are able to join again - we can have a real KDE party then! Especially as the Yerkes created some great swag last year (the awesome Konqi stickers!) and I have high expectations for what they might bring in 2017.

In any case, with these volunteers, the booth was staffed and lots of people could get their questions answered and had a place to leave their praise and thanks.


GNOME and KDE - brothers in arms!


Scarlett took a pic with me ;-)

Others

Of course, there was also the GNOME booth, well staffed and with demo devices. Walking around the rest of the exhibition hall, I spotted other distros and projects. Elementary looked nice (their icons seem to be their biggest asset, seeing how they were promoted) and I talked to people at the Ubuntu booth. Their booth had a big Dell banner and two Dell employees to talk to about the Dell Developer Edition laptops. A great project and the team is doing an amazing job! Soon, the new Dell XPS devices will become


I use the Open Build Service to work on openSUSE packages. There is a useful tutorial HERE.

Important resources:

  1. post-build-checks source code
  2. Spec file guidelines

And here is a summary of 'osc' commands I use the most:

alias oosc='osc -A https://api.opensuse.org'


Assuming you will be using the openSUSE Build Service, you will need to include the -A option on all the commands shown below. If you set up this alias, you can save a lot of typing.

osc search PKG


Search for a package. You can also use http://software.opensuse.org/ and zypper search PKG is also helpful.

osc meta pkg PRJ PKG -e


If you are project maintainer of PRJ, you can create a package directly using this command, which will throw you into an editor and expect you to set up the package's META file.

osc bco PRJ PKG

osc branch -c PRJ PKG


If you are not a project maintainer of PRJ, you can still work on PKG by branching it to your home project. Since you typically will want to checkout immediately after branching, 'bco' is a handy abbreviation.

osc ar


Add new files, remove disappeared files -- forces the "repository" version into line with the working directory.

osc build REPOSITORY ARCH


Build the package locally -- typically I do this to make sure the package builds before committing it to the server, where it will build again. The REPOSITORY and ARCH can be chosen from the list produced by osc repos

osc chroot REPOSITORY ARCH


Builds take place in a chroot environment, and sometimes they fail mysteriously. This command gives you access to that chroot environment so you can debug. In more recent openSUSEs the directory to go to is ~/rpmbuild/BUILD/

osc vc


After making your changes, update the changes file; every release needs an entry. Do not edit the changes file by hand: instead, use this command to maintain it "automagically".

osc ci


Commit your changes to the server. Other SVN-like subcommands (like update, status, diff) also work as expected.

osc results


Check what the server is doing. Typically a build will be triggered by your commit. This command lets you see the status.

osc sr


'sr' is short for submitrequest -- this submits your changes to the PROJECT for review and, hopefully, acceptance by the project maintainers. If you're curious who those are, you can run osc maintainer (or osc bugowner)

osc rq list


'rq' is short for request -- and request list $PRJ $PKG lists all open requests ("SRs") for the given project and package. For example, if the package python-execnet was submitted to openSUSE:Factory from the devel:languages:python project, the following command would find the request:

$ oosc rq list devel:languages:python python-execnet
356494 State:review By:factory-auto When:2016-01-28T12:01:16
submit: devel:languages:python/python-execnet@3 -> openSUSE:Factory
Review by Group is accepted: legal-auto(licensedigger)
Review by Group is accepted: factory-auto(factory-auto)
Review by Group is new: factory-staging
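
To tie these commands together, here is a rough sketch of a typical session using the oosc alias from above; the project and package names are only examples, and the repository/arch pair should come from 'oosc repos':

# Branch the package into your home project and check it out
oosc bco devel:languages:python python-execnet
cd home:USERNAME:branches:devel:languages:python/python-execnet

# Edit the spec and sources, then record added/removed files and a changelog entry
oosc ar
oosc vc

# Test-build locally before sending anything to the server
oosc build openSUSE_Tumbleweed x86_64

# Commit to the server, then submit the change back to the project you branched from
oosc ci
oosc sr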



If you find yourself staring at a black desktop after an openSUSE update, possibly confronted with errors like "Plasma crashed", there is no need to despair. The problem lies in the NVIDIA driver and can be solved very easily by reinstalling it.

Update from 04.02.2016:
With the DKMS tool the constant reinstalling is no longer necessary. Simply install DKMS before the (re)installation:

sudo zypper in dkms

Afterwards, simply append the -dkms option to "NVIDIA-Linux-x86_64-352.63.run". From then on, the driver is rebuilt automatically after every kernel update.

Simply follow these instructions:

# Switch to the console with ALT + F5 and change into the directory containing
# ./NVIDIA-Linux-x86_64-352.63.run

su
cd /home/USERNAME/Downloads/
sh ./NVIDIA-Linux-x86_64-352.63.run -uninstall

# Follow the prompts... when asked about the backup, let it restore
# the backup.

# The default driver is now in place again and booting to the
# desktop should be possible.
# So let it reboot...
 
reboot -h now

# After the (hopefully successful) reboot, log out of the desktop
# again (do not shut down) and switch back to the console
# with ALT + F5.

# Now reinstall the driver:

su
cd /home/USERNAME/Downloads/
sh ./NVIDIA-Linux-x86_64-352.63.run
# NOTE: optionally with -dkms (see above). The line then reads:
# sh ./NVIDIA-Linux-x86_64-352.63.run -dkms

# Follow the prompts and let it create a backup
# (happens automatically). Also let it install the 32-bit libs.

# The NVIDIA driver is now in place again. Everything should
# work again.
# So do one more reboot...
 
reboot -h now

Perhaps somewhere in the endless expanses of the internet there is a script that simplifies all of this a bit after a (kernel) update…
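
For setups without DKMS, the script wished for above could look roughly like the following sketch; it simply re-runs the installer when no nvidia module exists for the currently running kernel, and assumes the installer still lives under /home/USERNAME/Downloads/ as in the instructions:

#!/bin/bash
# Rough sketch: reinstall the NVIDIA driver if its kernel module is missing
# for the currently running kernel (installer path as in the instructions above)
INSTALLER=/home/USERNAME/Downloads/NVIDIA-Linux-x86_64-352.63.run

if ! modinfo nvidia > /dev/null 2>&1; then
    sh "$INSTALLER"
fi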



Last week's updates to Tumbleweed brought several new packages to openSUSE's rolling release, like KMail 5 and KDE Frameworks 5.18.0, and updates to Perl and YaST.

This week's snapshot has KDE Applications 15.12.1, which contains only bugfixes and translation updates, and the virtual globe and world atlas Marble updated from 15.08.3 to 15.12.1.

LibreOffice updated to 5.1.0.2. Perl Image ExifTool's update to 10.10 brought several updates for camera equipment, and the notes accompanying the update included some criticism of programmers from a camera manufacturer.

AutoYaST updated to 3.1.113, fixing the warning message about the 'init' section not being processed. GNOME Bluetooth updated to 3.18.2 and Mesa to 11.1.1.

Perl XML XPath updated to 1.24 and Snapper updated to 0.2.10, which added conditional compilation of installation-helper.



About Git: https://git-scm.com/book/en/v2/Getting-Started-Git-Basics

This post goes over the basics of how to manage a local git repository.

First we need to set up the local git repository. To do that, we create a directory and then initialize git inside of it.

user@linux-ryzk:~/> cd ~/Documents/demo/git-tutorial/
user@linux-ryzk:~/Documents/demo/git-tutorial> git init
        Initialized empty Git repository in /home/user/Documents/demo/git-tutorial/.git/

Now let's check to make sure everything is working as expected. We should see that we are on branch master, waiting for the initial commit, with no files to commit.

user@linux-ryzk:~/Documents/demo/git-tutorial> git status
    On branch master

    Initial commit

    nothing to commit (create/copy files and use "git add" to track)

To demonstrate the power of VCS, we can mess with some basic text files. Let's create a file called example.txt and put some text inside of it:

user@linux-ryzk:~/Documents/demo/git-tutorial> cat example.txt
    This is some sample data

Now if we check git status we should see that git has noticed the creation of the file.

user@linux-ryzk:~/Documents/demo/git-tutorial> git status
    On branch master

    Initial commit

    Untracked files:
    (use "git add <file>..." to include in what will be committed)

            example.txt

    nothing added to commit but untracked files present (use "git add" to track)

To get git to start tracking a file you need to use the git add command like below. Then check status again to make sure the file is now being tracked.

user@linux-ryzk:~/Documents/demo/git-tutorial> git add example.txt
user@linux-ryzk:~/Documents/demo/git-tutorial> git status
    On branch master

    Initial commit

    Changes to be committed:
    (use "git rm --cached <file>..." to unstage)

            new file:   example.txt

Now we can try to commit the file to the git repo; files that get committed will have their changes tracked. Note that if you do not commit your changes, the repo will not have a snapshot of them. Using the add command is not enough!

user@linux-ryzk:~/Documents/demo/git-tutorial> git commit -m "This was my first commit!"

        *** Please tell me who you are.

        Run

        git config --global user.email "you@example.com"
        git config --global user.name "Your Name"

        to set your account's default identity.
        Omit --global to set the identity only in this repository.

        fatal: unable to auto-detect email address (got 'user@linux-ryzk.(none)')

Note that git complains we didn't tell it who we are. This is because git tracks who made each commit, which will prove useful when we use the git blame utility to figure out who made certain changes!

So let's set up our account and then try to commit again…

user@linux-ryzk:~/Documents/demo/git-tutorial> git config --global user.email "ushamim@linux.com"
user@linux-ryzk:~/Documents/demo/git-tutorial> git config --global user.name "Uzair Shamim"
user@linux-ryzk:~/Documents/demo/git-tutorial> git commit -m "This was my first commit!"
    [master (root-commit) 7879099] This 
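
Once the commit has gone through, the git blame utility mentioned above (together with git log) lets you inspect who changed what; a quick sketch using the file from this tutorial (your output will of course differ):

# List the commit history in short form
git log --oneline

# Show which commit and author last touched each line of example.txt
git blame example.txt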

Wednesday
27 January, 2016


Michael Meeks: 2016-01-27 Wednesday.

21:00 UTC

  • Into Cambridge, heading eventually for the Eurostar this evening - construction workers dug up an un-exploded bomb there; got to the office; paperwork. Went over financials with Tracie.
  • Trains to Brussels variously; to the Astrid - synched with JanI, good to catch-up with Markus at length; others arrived later for beers, to my room; finished VCL slides very late; bed.


In this post I'm sharing the link on how to use the official Intel RealSense SDK. This 3D camera is Intel's most advanced hardware and software technology for perceptual computing, making features seen in the movie Minority Report possible on our computers. In short, this product lets the user interact with their devices the way we see in movies. The possibilities are many: using its features we can develop 2D/3D facial recognition applications, detect gestures, and even create voice recognition applications.

For more information, watch the video below and click HERE!






openCertiface, based on the Microsoft Azure cloud, is the open source version of the CERTIFACE cloud facial biometrics service. This initiative was only possible thanks to the distinctive vision of the Honda Group (partners and investors in OITI TECHNOLOGIES). Besides supporting all the work, they carry a collaborative spirit, bringing to the company the goal of putting technology at the service of society. I therefore thank them, on behalf of the whole free software community, for enabling this contribution, created to protect good people.

At this year's Campus Party 2016, on the Innovation (Development) stage on January 29 at 17:30, I will be launching the project with a talk covering how to build the project from source, along with examples in C, PHP, Java and Bash.

I have already made the project available on Git at https://github.com/cabelo/opencertiface for those interested. This project will be very useful at key-signing parties, where someone will no longer be able to exchange cryptographic keys using forged documents.

To all members of the open source community who take the HACKER SPIRIT seriously and use that power for good, to those who defend "FREEDOM OF INFORMATION", to those who have encouraged me since my first contact with Free Software in 1998, to those who sacrifice themselves to spread information, to those who love their ideals above capitalism and fight to make the world better without harming others. To everyone who defends these ideals, a big THANK YOU!




The openSUSE Build Service is becoming more and more a victim of its own success: constantly building more than 300,000 packages for more than 43,000 developers really needs a lot of build power! And build power means not only CPU; it includes everything you would expect from an IT infrastructure:

Old hard-drives from OBS-workers

  • CPU power
  • RAM (the more, the better)
  • Storage (temporary local storage on the clients, plus storage to keep and distribute the results)
  • Network
  • electric power (and cooling, and maintenance, and manpower to maintain the hardware, …)

Thankfully, our main sponsor SUSE has now allowed us to buy some new hardware to replace some of the old machines that build software packages for over ten different distributions all day long.

The old machines (note: some are still in use) are by now a mix from different hardware vendors - even including self-built machines - and mainly use local hard drives to set up a fresh build environment for every new package. This comes at the cost of constantly failing hard drives, additional maintenance and, compared to SSDs and other options, slow builds – even though the OBS developers have already implemented a lot of caching mechanisms.

But now it’s time to look into the future: here is a picture of the new, unmounted machines (in the front), which replaced the old machines in the back:
Each of the 2-unit machines contains 4 servers with 128 GB RAM and 24-core AMD Opteron processors. Thanks to the amount of RAM, they can set up the build environment completely in a tmpfs, which (together with the CPUs) should give the openSUSE Build Service a real boost in build performance.
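
Purely as an illustration of the idea (the actual worker configuration is not shown here), such a RAM-backed build area can be provided with a tmpfs mount along these lines; the mount point and size are hypothetical:

# Mount a RAM-backed filesystem to hold the build roots (path and size are examples)
mount -t tmpfs -o size=100G tmpfs /var/cache/obs/worker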

While we proudly watch the build statistics and do some fine-tuning of the setup, enjoy the two pictures below showing them mounted in their rack:

OBS lamb workers mounted in rack (backside and frontside)

We will leave it to your imagination to find explanations for why the new machines are called “lamb”.


A note for future reference. Tested on Windows 7 Pro x64.

After installing Svod-Smart version 13.6.1107.16524 and configuring it correctly, the client, when connecting to the server, detects that the application version on the server is newer and offers to check for updates. But when it attempts to update, it reports:

"
-> Connecting to DB: "serv-rwp, svod_smart"
    Unable to determine the state of the connection to the update source.
    Please try running the update again.

=================== Error description ====================
Could not cast an object of type "System.Data.Common.DbConnectionStringBuilder" to type "Keysystems.WCF.Common.ConnectionParams".
=========================================================="

The solution:
Download Svod-Smart 15.4.1.22923, dated 16.12.2015, from the website of the Ministry of Finance of the Komi Republic (!) (http://minfin.rkomi.ru/page/9315/) and install it in place of version 13. The settings are picked up automatically. Once the installation has finished and the client launcher shortcut appears on the desktop, run it as administrator and start the update procedure to the latest version available from the database.


Night view in Osaka, overlooking the Metropolitan Expressway.

Keynote

First off, let me just say that it was such an honor and pleasure to have had the opportunity to present a keynote at the LibreOffice mini-Conference in Osaka. It was a bit surreal to be given such an opportunity almost one year after my involvement with LibreOffice as a paid full-time engineer ended, but I'm grateful that I can still tell some tales that some people find interesting. I must admit that I haven't been that active since I left Collabora in terms of the number of git commits to the LibreOffice core repository, but that doesn't mean that my passion for the project has faded. In reality it is far from it.

There were a lot of topics I could potentially have covered for my keynote, but I chose to talk about the 5-year history of the project, simply because I felt that we all deserved to give ourselves a lot of praise for the numerous great things we've achieved in these five years, which not many of us do simply because we are all very humble beings and always too eager to keep moving forward. I felt that, sometimes, we do need to stop for a moment, look back and reflect on what we've done, and enjoy the fruits of our labors.

Osaka

Though I had visited Kyoto once before, this was actually my first time in Osaka. Access from Kansai International Airport (KIX) into the city was pretty straightforward. The venue was located on the 23rd floor of Grand Front Osaka North Building Tower B (right outside the north entrance of JR Osaka Station), on the premises of GMO DigiRock, who kindly sponsored the space for the event.

Osaka Station north entrance.

Conference

The conference took place on Saturday, January 9th, 2016. The conference program consisted of my keynote, followed by four regular-length talks (30 minutes each), five lightning talks (5 minutes each), and round-table discussions at the end. Topics of the talks included the potential use of LibreOffice in high school IT textbooks, real-world experiences of large-scale migration from MS Office to LibreOffice, LibreOffice API how-tos, and using LibreOffice with NVDA, the open source screen reader.

After the round-table discussions, we had a social gathering with beer and pizza before we concluded the event. Overall, 48 participants showed up for the conference.

Conference venue.

Videos of the conference talks have been made available on YouTube thanks to the efforts of the LibreOffice Japanese Language Team.

Slides for my keynote are available here.

Hackfest

We also organized a hackfest on the following day at JUSO Coworking. A total of 20 plus people showed up for the hackfest, to work on things like translating the UI strings to Japanese, authoring event-related articles, and of course hacking on LibreOffice. I myself worked on implementing simple event callbacks in the mdds library, which, by the way, was just completed and merged to the master branch


Tuesday
26 January, 2016


Michael Meeks: 2016-01-26 Tuesday.

21:00 UTC

  • Mail chew; wrote staff reviews, had FY2015 review meetings etc. Discovered a load of TDF and other mail in my (local) junk folder - I'd been wondering where half of some threads was going, interesting.
  • Pleased to see Lenny write up the lovely Shared Editing work from Henry Castro, Ashod Nakashian and Miklos Vajna, all of Collabora.
    Shared editing in Collabora Online between two users

    It's also great to see Marco Cecchetti's nice Impress export to SVG work (as initially mentored by Thorsten Behrens as a Google Summer Of Code (GSOC) project some years back) polished up, fixed and doing a great job of slide transitions and animation - nice work Marco (who we're blessed to have working with us too).


Choosing a desktop environment is as important as choosing a GNU/Linux distribution.


Perhaps you have already read my article about which might be the best GNU/Linux distribution for 2016:

And now you are ready to download the ISO file and install it on your machine. But wait: first you should perhaps decide which desktop environment to choose. The desktop environment is all those decorations and bars, icons and menus that appear in any operating system.

With GNU/Linux, once again you have many options: you are free to choose whichever GNU/Linux distribution you prefer, but with the 'flavor' of the desktop environment you like best. You have choices, so let's take a look at them first, and then go ahead, decide, and install GNU/Linux once and for all! :)

This article is a translation of an article published in English on Linux.com and written by Swapnil Bhartiya. Thanks to him and to the site for allowing the translation and redistribution. You can read the original article at this link:

The comments are open for debate, participation and constructive contributions. Let's begin:


GNU/Linux offers friendly desktop environments for different tastes and choices. For example, there are many Linux-based distributions that use different desktop environments you can choose from. Here is a selection of the best desktop environments you will find in the Linux world.

Plasma


I consider Plasma, the KDE desktop, to be the most advanced of the desktop environments. It is the one with the most features and the most customizable environment I have ever seen. Even Mac OS X or Windows don't come close to Plasma when it comes to giving the user complete control.

I also love Plasma for its amazing file manager, Dolphin. One of the reasons I prefer Plasma over GNOME-based systems is its file manager. One of my big gripes with GNOME is its file manager, Files, which cannot handle basic tasks such as bulk-renaming files. This is important to me because I take a lot of photos, and GNOME makes it impossible for me to rename the image files. In Dolphin it's a piece of cake.

In addition, you can add more functionality to Plasma with plugins. Plasma offers incredible software, for example Krita, Kdenlive, the Calligra office suite, digiKam, KWrite and many other applications developed by the KDE community.

The only weak point of the Plasma desktop is its default mail client, KMail. It is somewhat complicated to set up, and



For those interested, here is the second part of the interview conducted recently with Bruno Friedmann.

The first part can be found here: https://www.alionet.org/content.php?...runo-Friedmann


Part 2/2



(For information, the following 3 questions come from a beginner user (called U below); I find this interesting because it probably sums up the most frequently asked questions




Welcome to the magical world of openSUSE. It is not just another distribution that helps you use your computer. Beyond the obvious, you will certainly find various tools to work with, as well as a community of people that will excite you. Now that you have chosen openSUSE, learn a bit about how to write its name correctly (openSUSE) and a little of its history. There are plenty of clueless people out there saying whatever they like.

Why did you choose openSUSE? Some reasons can be found in the guide for new openSUSE users.

First of all, which edition did you choose? The stable Leap or the rolling Tumbleweed? Let me clear up the landscape a bit.

1. Both are stable releases. The rolling release is not the testing version of the stable one (as some mistakenly believe). If you have tried other distributions, do NOT compare them with openSUSE.
Arch Linux is a rolling distribution; it is not the testing version of some other "stable" release of the distribution.
Ubuntu LTS is not the every-five-years stable release of Ubuntu with the interim releases being the testing ones; the Ubuntu community promotes them equally. They are simply two releases with different target groups.

2. The view that if you install a rolling release you should expect your system to break, and then install Ubuntu instead, usually comes from people who don't know better or who have been told so by an 'experienced' friend (who has never used any distribution other than Ubuntu). To begin with, when a distribution chooses a rolling cycle, it means the packages have been tried in a testing repository and, as long as no bug has been observed, they are released into the 'stable' edition. As for Tumbleweed, before a package enters the testing repository (Factory), it is tested first by a human and then by a machine. Once it is in the testing repository, has been tried by many users and is OK for mass use, it passes another check by a person and then by a machine, and only then moves into the rolling release repository. That means it has gone through many checks before reaching users. If your system does have a problem, the most likely cause is a conflict with some piece of your hardware; it is a good idea to file a bug report so it can be fixed in the next update.

3. openSUSE is famous for its KDE. The latest stable release (whose base comes from SUSE's commercial side) has had reported problems with KDE Plasma 5 (if I'm writing that correctly), whereas GNOME works just fine. Why is that? One explanation may be that the commercial SUSE edition offered for desktop computers uses GNOME as its main desktop environment, so that is what SUSE employees test and maintain (although the commercial product ships a different, customized GNOME version compared to openSUSE Leap). KDE, on the other hand, has no SUSE employee assigned to maintain it for the commercial SLED product.
By contrast, Tumbleweed picks up the updates from upcoming releases, so whatever bug exists gets fixed faster.
Therefore, if you want to use KDE, prefer Tumbleweed, while if you want GNOME you can also use Leap. If you want to use another desktop environment, it depends on whether you always want the latest released version or only care about security updates.

4. If you want to run a server, go straight for Leap. The reason is obvious: you get security updates straight from SUSE - whatever its customers get, you get too. It also has long-term support, so it is the best choice.

5. Many have complained that Leap has no 32-bit edition. In our country we mostly have old computers, and it would be good to run a stable, long-supported release on them, all the more so when those old boxes are used as small servers. There is a point to that, but 32-bit technology has died; everyone prefers 64-bit. So, you will ask, what should those of us with 32-bit machines do? The answer is Tumbleweed. The rolling release is also built for the 32-bit architecture, and you will find both KDE and GNOME in Live form. The truth is that to run either KDE or GNOME you need both processing power and memory, and 32-bit computers rarely have both, so it is better to pick a desktop environment from among XFCE, LXDE, MATE and Enlightenment. Personally I prefer MATE, although it is not offered as a choice in the installer; you have to add it before pressing the install button and, after the reboot, open YaST and set MATE as the desktop environment (see the command sketch after this list). Enlightenment is the one that consumes the least memory.
In the past there was also the Evergreen repository, a community project aimed at servers, which usually kept a release alive for another one or two years after its official life cycle. The last release announced was 13.1, but much is being discussed with the arrival of Leap.

6. The myth of long-term support...
For end users this is a myth. Long-term support is needed ONLY for professional purposes: for servers and for production machines that earn you money.
As an end user, I always want to have the latest GNOME (since I translate GNOME). In the past I had to add, for example, repositories carrying the new GNOME version, and in quite a few cases my system broke. At other times I had to upgrade to a new release of the distribution. Whenever I tried that on Ubuntu and Fedora, I had to do a fresh installation because my system became extremely slow, and sometimes it did not even boot. Only on openSUSE did I manage to upgrade from one release to another without problems.
But let's not take me as the example. I have not seen many users telling me they want to install Ubuntu LTS and forget about it for five years. On Facebook I have met people installing the 14.04.3 LTS release (meaning an updated ISO has already been published three times). I have seen only one friend install 8.04 on his laptop and keep it going for at least four years. The truth is that after about two years on average, most people choose to upgrade to a new release (whether the regular one or the LTS), because of the new technologies and programs that the old LTS usually lacks.
In openSUSE, Leap has adopted the life cycle of SLE. The current release is 42.1, which means it tracks SLE SP1; when SLE SP2 comes out, the next release, 42.2, will follow. This usually happens every 3-5 years.
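
As a sketch of the MATE step mentioned in point 5 above, the desktop can also be pulled in from the command line; this assumes the MATE pattern is simply named "mate" in your repositories:

# Install the MATE desktop pattern (pattern name assumed to be "mate")
sudo zypper install -t pattern mate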

Having read all of the above - if I were in your place, these would be my choices:

- Server: openSUSE Leap 42.1
- Production desktop machine: openSUSE Leap 42.1 with XFCE or MATE
- Desktop/laptop: openSUSE Tumbleweed 64bit with GNOME or KDE
- Old 32-bit desktop: openSUSE Tumbleweed 32bit with XFCE, LXDE, MATE or Enlightenment (depending on the user)

Whatever you choose, you can follow a guide of first steps after installing Leap.

Monday
25 January, 2016


Michael Meeks: 2016-01-25 Monday.

21:00 UTC

  • Up; mail chew; took E. to the doctors briefly. Back to plan the week somehow. Team meetings variously; dug at IPC debug, pretty groggy still. Up late with Tony & Janice.


There has been more discussion recently on the concept of a “10x engineer”. 10x engineers are (from Quora) “the top tier of engineers that are 10x more productive than the average”.

Productivity

I have observed that some people are able to get 10 times more done than me. However, I’d argue that individual productivity is as irrelevant as team efficiency.

Productivity is often defined and thought about in terms of the amount of stuff produced.

“The effectiveness of productive effort, especially in industry, as measured in terms of the rate of output per unit of input”

Diseconomies of Scale

The trouble is, software has diseconomies of scale. The more we build, the more expensive it becomes to build and maintain. As software grows, we’ll spend more time and money on:

  • Operational support – keeping it running
  • User support – helping people use the features
  • Developer support – training new people to understand our software
  • Developing new features – As the system grows so will the complexity and the time to build new features on top of it (Even with well-factored code)
  • Understanding dependencies – The complex software and systems upon which we build
  • Building Tools – to scale testing/deployment/software changes
  • Communication – as we try to enable more people to work on it

The more each individual produces, the slower the team around them will operate.

Are we Effective?

Only a small percentage of things I build end up generating enough value to justify their existence – and that’s with a development process that is intended to constantly focus us on the highest value work.

If we build a feature that users are happy with it’s easy to count that as a win. It’s even easier to count it as a win if it makes more money than it cost to build.

Does it look as good when you compare its cost/benefit to some of the other things that the team could have been working on over the same time period? Everything we choose to work on has an opportunity cost, since by choosing to work on it we are not able to work on something potentially more valuable.

Applying the 0.1x

The times I feel I’ve made most difference to our team’s effectiveness is when I find ways to not build things.

  • Let’s not build that feature.
    Is there existing software that could be used instead?
  • Let’s not add this functionality.
    Does the complexity it will introduce really justify its existence?
  • Let’s not build that product yet.
    Can we first do some small things to test the assumption that it will be valuable?
  • Let’s not build/deploy that development tool.
    Can we adjust our process or practices instead to make it unnecessary?
  • Let’s not adopt this new technology.
    Can we achieve the same thing with a technology that the team is already using and familiar with? “The best tool for the job” is a very dangerous phrase.
  • Let’s

Sunday
24 January, 2016


Michael Meeks: 2016-01-24 Sunday.

21:00 UTC

  • Up earlyish; NCC playing the violin - rather a good band. Home for roast lunch, and slugged by the fire much of the afternoon, getting a nasty cold; bed in a haze.


After oYoX had become increasingly cluttered and numerous WordPress plugins had made everything feel somewhat overloaded, I finally brought myself to do a big cleanup.

First of all, a new, fresh layout was in order: modern and with clearer categories, so that entries can be found more quickly.
The new layout is now wider and adapts dynamically to the screen size. It can also be used on smartphones without the WordPress plugin, which leads to a more consistent design. Above all, it presents the information more compactly and clearly.
So far so good. But there is still quite a bit of work ahead in the coming days: all entries have to be adapted to the new layout and unified, invalid posts have to be sorted out, and the download system needs to be brought up to scratch.
So if one or two "hiccups" occur in the coming days, please bear with me.


This script was done in Python 3.4.3 for an exercise in OPS635.

Attempting to find the geographic location of an IP address can be quite frustrating with Python on Linux. I had several issues attempting to get a library and a database that were compatible with each other. Eventually I realized that the version of the database both openSUSE and Fedora ship was not compatible with the version of pygeoip that pip installs. To work around this, I had to manually download the GeoIP database from here and save it somewhere on the filesystem. Then I just told Python the path to the database and everything seemed to work fine from there. Here is the script, feel free to use it as you wish:

import pygeoip

def ipLocator(ip):
    GeoIPDatabase = '/home/user/GeoLiteCity.dat'
    ipData = pygeoip.GeoIP(GeoIPDatabase)
    record = ipData.record_by_name(ip)
    print("The geolocation for IP Address %s is:" % ip)
    print("Accurate Location: %s, %s, %s" % (record['city'], record['region_code'], record['country_name']))
    print("General Location: %s" % (record['metro_code']))
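
A quick way to try the function, assuming the script above is saved as iplocator.py and the GeoIPDatabase path inside it points at your downloaded GeoLiteCity.dat (both names here are just for illustration):

# Install the library and look up an example address
pip install --user pygeoip
python3 -c "from iplocator import ipLocator; ipLocator('8.8.8.8')"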



Dear Tumbleweed users and hackers,

As those of you closely following Tumbleweed have no doubt seen, one of this week's snapshots was much larger than what is perceived as normal. This was due to a human error, which triggered a full rebuild from the base stack up; actually, not many packages had real changes. On the other hand, though, I will have to slow down Tumbleweed check-ins a bit, as all OBS repositories building against openSUSE:Tumbleweed now have a much larger base to rebuild against, which in turn means OBS is far busier than it normally already is. Our sincerest apologies for this. Of course we will try to deliver important updates as fast as we can.

Let’s still look at what changed in the snapshots that were released this week (0115, 0116, 0117, 0120 and 0121):

  • Plasma 5.5.3
  • KDE Framework 5.18.0
  • Quite some YaST updates, mostly around AutoYaST profile validation
  • Libproxy 0.4.12 – Finally the crashes in Qt5 based apps are solved

What you can expect in the next snapshot:

  • KDE Applications 15.12.1 (will be in any snapshot >= 0123)
  • KDEPIM 4 is replaced as default with KDEPIM 5 (also in snapshot >= 0123); See the announcement made by the KDE Team on this topic

And things that are in staging areas:

  • systemd/udev 228: (still pending on boo#960669)
  • gcc 5.3.1: YaST is the blocking factor, as it fails to build due to the use of deprecated auto_ptr
  • python 3.5.1 has been submitted
  • Kernel fix that fixes boot from USB again (something not caught by openQA)

None of the larger projects appear to be close to resolution anytime soon – which, on one hand, is a good thing, as it allows us to slow down Tumbleweed until OBS catches up without having a bad conscience.

Have a great weekend!


Saturday
23 January, 2016


Michael Meeks: 2016-01-23 Saturday.

21:00 UTC

  • Up lateish; poked at some customer logging bits. Lunch; out for a walk in the countryside outside Exning; nice. Back to NCC to get the quiz tables set up; popped babes home.
  • Annual Newmarket Pregnancy Crisis Centre fund-raising quiz - significantly over-attended; used all of the available tables; pinched another from the office - frantic setup & crazy folding of raffle tickets etc. Lots of fun, home late.
