Welcome to Planet openSUSE

This is a feed aggregator that collects what openSUSE contributors are writing in their respective blogs.

To have your blog added to this aggregator, please read the instructions.


Sunday
05 July, 2015


Michael Meeks: 2015-07-05 Sunday.

21:00 UTC (member)

  • Off to a baptism at re:new and a shared lunch which was interesting, back in the afternoon slept on the sofa; pizza dinner, dancing competition on the trampoline, read stories; bed.

Saturday
04 July, 2015


Michael Meeks: 2015-07-04 Saturday.

21:00 UTC (member)

  • Up lateish, J. returned babes to their abodes left & right. Out to Holkham beach en-masse; lots of wonderful sun, sea, sand, swimming, digging, relaxing, reading - sun-burning and so on. Fish & Chips in the car on the way home, tired.

Friday
03 July, 2015


Michael Meeks: 2015-07-03 Friday.

21:00 UTC (member)

  • Mail chew, partner call, admin, lunch, interview, poked at some profiles; got to a bug fix: fun. Inundated by an endless series of little girls - 8x or so, took a brace to Friday-Club, movies, sleep-overs, the works.

Jonathan Ervine: Loops

02:53 UTC


Running in circles Another run without earphones was completed last night – I’ve reverted to using a Polar chest strap heart rate monitor whilst the Jabras are in their non-working state. Relatively fast and slow interval running last night – at over 30C and high humidity it’s not much fun. Jabra have offered to check […]


Thursday
02 July, 2015


Michael Meeks: 2015-07-02 Thursday.

21:00 UTC (member)

  • Into Cambridge to attempt to nurse some poorly prototype hardware in the nice cool server room (on a hot day). More (mobile) prototype hardware arrived by mail. Back home, ESC call.


The openSUSE project has finally decided to split the deprecated net-tools binaries from the core net-tools package and move them into net-tools-deprecated.

The following tools were removed from net-tools and moved into net-tools-deprecated in openSUSE Tumbleweed:

/bin/netstat
/sbin/arp
/sbin/ifconfig
/sbin/ipmaddr
/sbin/iptunnel
/sbin/route

You should use the above deprecated tools if, and only if, an antiquated application or tool that you’re running requires them.

What should you be using in their place?

Deprecated Tool     Alternative Tool
arp                 ip n
ifconfig            ip a
ipmaddr             ip maddr
iptunnel            ip tunnel
netstat             ss
netstat -r          ip route
netstat -i          ip -s link
route               ip r
mii-tool            ethtool
nameif              ifrename

If you haven’t made the switch I highly recommend you do, as the above (along with mii-tool and nameif) have been deprecated for many years now per the net-tools Linux Foundation page.
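
For day-to-day use the mapping is straightforward; here are a few illustrative examples of the iproute2 and ss replacements (exact flags and output may vary with your iproute2 version):

# show interfaces and addresses (instead of ifconfig)
ip addr show

# show the routing table (instead of route -n / netstat -r)
ip route show

# list listening TCP and UDP sockets (instead of netstat -tuln)
ss -tuln

# per-interface packet statistics (instead of netstat -i)
ip -s link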


Wednesday
01 July, 2015



It’s that time of the year again, it seems: I’m working on KPluginMetaData improvements.

In this article, I am describing a new feature that allows developers to filter applications and plugins depending on the target device they are used on. The article targets developers and device integrators and is of a very technical nature.

Different apps per device

This time around, I’m adding a mechanism that allows us to list plugins, applications (and the general “service”) specific for a given form factor. In normal-people-language, that means that I want to make it possible to specify whether an application or plugin should be shown in the user interface of a given device. Let’s look at an example: KMail. KMail has two user interfaces, the desktop version, a traditional fat client offering all the features that an email client could possibly have, and a touch-friendly version that works well on devices such as smart phones and tablets. If both are installed, which should be shown in the user interface, for example the launcher? The answer is, unfortunately: we can’t really tell as there currently is no scheme to derive this information from in a reliable way. With the current functionality that is offered by KDE Frameworks and Plasma, we’d simply list both applications, they’re both installed and there is no metadata that could possibly tell us the difference.

Now the same problem applies to not only applications, but also, for example to settings modules. A settings module (in Frameworks terms “KCM”) can be useful on the desktop, but ignored for a media center. There may also be modules which provide similar functionality, but for a different use case. We don’t want to create a mess of overlapping modules, however, so again, we need some kind of filtering.

Metadata to the rescue

Enter KPluginMetaData. KPluginMetaData gives information about an application, a plugin or something like this. It lists name, icon, author, license and a whole bunch of other things, and it lies at the base of things such as the Kickoff application launcher, KWin’s desktop effects listing, and basically everything that’s extensible or uses plugins.

I have just merged a change to KPluginMetaData that allows all these things to specify which form factors they are relevant and useful for. This means that you can install, for example, KDevelop on a system that can be either a laptop or a media center, and an application listing can be adapted to only show KDevelop when in desktop mode and skip it in media center mode. This is of great value when you want to unclutter the UI by filtering out irrelevant “stuff”. As this mechanism is implemented at the base level, KPluginMetaData, it’s available everywhere, using the exact same mechanism. When listing or loading “something”, you simply check if your current form factor is among the suggested useful ones for an app or plugin, and based on that you make a decision whether to list it.


Michael Meeks: 2015-07-01 Wednesday.

21:00 UTC (member)

  • Another day packed with alternating between different bits of admin, mail review, transcript fixing, conference planning. Partner call, got to fix a bug - fun.


Juniper 8.0R5 includes support for 64-bit Linux systems. However, the process on Tumbleweed is a little different from the openSUSE 13.1 setup which I wrote about previously.

Juniper includes instructions via KB25230; however, their directions aren’t complete.

The following instructions will provide a functional Juniper VPN installation on openSUSE Tumbleweed:

Install prerequisites:

sudo zypper in -y libXi.so.6 libXrender1-32bit libXtst6-32bit net-tools-deprecated

Download 32-bit Oracle Java

Extract Java to a location of your choosing:

tar xzvf jre-8u45-linux-i586.tar.gz -C /home/ben/apps/java32

Update /usr/bin/java alternatives to use 32-bit:

sudo update-alternatives --install /usr/bin/java java /home/ben/apps/java32/jre-latest/bin/java 100
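
A quick sanity check, assuming the example paths above, is to confirm that the default java binary now points to the 32-bit runtime:

# should print the version banner of the 32-bit JRE
java -version

# should report a 32-bit (Intel 80386) ELF executable
file /home/ben/apps/java32/jre-latest/bin/java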

Note: you must use Firefox, as Chrome/Chromium on Linux has already dropped NPAPI plugin support.



The Web Open Font Format (short WOFF; here using the Aladin font) is several years old. Still, it took some time to get to a point where WOFF is almost painless to use on the Linux desktop. WOFF is based on OpenType-style fonts and is in some ways similar to the better-known TrueType Font (.ttf). TTF fonts are widely known and used on the Windows platform. These feature-rich fonts are used for high-quality font rendering by the system and in local office and design documents. WOFF aims at closing the gap by making those features available on the web. With these fonts it becomes possible to show nice-looking fonts on paper and in web presentations in almost the same way. In order to make WOFF a success, several open source projects joined forces, among them Pango and Qt, and contributed to harfbuzz, an OpenType text shaping engine. Firefox and other web engines can handle WOFF inside SVG web graphics and HTML web documents using harfbuzz. Inkscape, at least since version 0.91.1, uses harfbuzz too for text inside SVG web graphics. As Inkscape is able to produce PDFs, designing for both the web and the print world at the same time becomes easier on Linux.

Where to find and get WOFF fonts?
Open Font Library and Google host huge font collections. And there are more out on the web.

How to install WOFF?
To use them inside Inkscape one needs to install the fonts locally. Just copy the fonts to your personal ~/.fonts/ path and run

fc-cache -f -v

After that procedure the fonts are visible inside a newly started Inkscape.
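
To check that fontconfig has actually picked up the new font (using the Aladin font from above as an example), you can list the known fonts and filter for it:

# the freshly copied font should show up in the fontconfig listing
fc-list | grep -i aladin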

How to deploy SVG and WOFF on the Web?
Thankfully, WOFF in SVG documents works similarly to HTML documents. However, simply uploading an Inkscape SVG to the web as is will not be enough to show WOFF fonts. While viewing the document locally is fine, Firefox and friends need to find those fonts independently of the locally installed fonts. Right now you need to manually edit your Inkscape SVG to point to the online location of your fonts. For that, open the SVG file in a text editor and place a CSS font-face reference right after the <svg> element, like:

<svg ...>
<style type="text/css">
@font-face {
font-family: "Aladin";
src: url("fonts/Aladin-Regular.woff") format("woff");
}
</style>

How to print an Inkscape SVG document containing WOFF?
Just convert to PDF from Inkscape’s file menu. Inkscape takes care of embedding the needed fonts and creates a portable PDF.
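
The same export can also be scripted from the command line; with the Inkscape 0.91 series the call looks roughly like this (the file names are just examples, and newer Inkscape releases renamed the option to --export-filename):

# export an SVG drawing to PDF, embedding the fonts it uses
inkscape --export-pdf=poster.pdf poster.svg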

In case your preferred software is not yet WOFF-ready, try the woff2otf Python script for converting to the old TTF format.
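
Assuming the script follows the usual source/target calling convention (an assumption; check the script’s help first), the conversion is a one-liner:

# convert a WOFF font back to an OpenType file (file names are examples)
python woff2otf.py Aladin-Regular.woff Aladin-Regular.otf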

Hope this small post gets some of you on the font fun path.


Tuesday
30 June, 2015


Michael Meeks: 2015-06-30 Tuesday.

21:00 UTC (member)

  • Catch up with JRB. Alternated between different bits of administration, trying to intersperse the more horrendous stuff between the less dull bits. Partner call.


Running Unfortunately, it’s more running without heart rate monitoring earphones, as the fifth pair have now failed. I make that just about 5 pairs failed inside 5 months. I am not going to warranty replace these ones – there is quite clearly a flaw and I don’t really see the point in continuing to trek […]


Monday
29 June, 2015


Michael Meeks: 2015-06-29 Monday.

21:00 UTC (member)

  • Up early, breakfast, mail chew, fun. Lots of 1:1 meetings, also fun; 2x team meetings, budgeting, invoicing, project reviews, SOW construction - lots of admin.

Sunday
28 June, 2015


Michael Meeks: 2015-06-28 Sunday.

21:00 UTC (member)

  • Up lateish; Isleham for church - a lady talking on spiritual jubilee; back for a fine pizza lunch. Out for a walk in the afternoon to a small, nearby motte & bailey from ~1000AD. Back for tea, stories, bed.

Saturday
27 June, 2015


Michael Meeks: 2015-06-27 Saturday.

21:00 UTC (member)

  • Very lazy day; up very early reading Vernor Vinge - what did Jeremy Allison do to me ? breakfast, slugged, read etc. most of the day. Interspersed with misc. home maintenance. BBQ in the evening.

Friday
26 June, 2015


Michael Meeks: 2015-06-26 Friday.

21:00 UTC (member)

  • Mail chew, call with Mike, code reading, admin. Plugged away building & testing code variously. Maurizio over in the evening.

If you used the SUSE OpenStack Cloud 4 Admin Appliance, you know it was a downloadable, OpenStack Icehouse-based appliance, which even a non-technical user could get off the ground to deploy an OpenStack cloud. Today, I am excited to tell you about the new Juno-based SUSE OpenStack Cloud 5 Admin Appliance.

With the SUSE OpenStack Cloud 4 release we moved to a single integrated version. After lots of feedback from users it was clear that no one really minded downloading something over 10GB as long as it had everything they needed to start an OpenStack private cloud. In version 5 the download is over 15GB, but it actually has all of the software you might need, from SLES 11 or SLES 12 compute infrastructure to SUSE Enterprise Storage integration. I was able to integrate the latest SMT mirror repositories at a reduced size and include everything you might need to speed your deployment.

The new appliance incorporates all of the needed software and repositories to set up, stage and deploy OpenStack Juno in your sandbox, lab, or production environments. Coupled with it are the added benefits of automated deployment of highly available cloud services; support for mixed-hypervisor clouds containing KVM, Xen, Microsoft Hyper-V, and VMware vSphere; integration of our award-winning SUSE Enterprise Storage; support from our award-winning, worldwide service organization; and integration with SUSE engineered maintenance processes. In addition, there is integration with tools such as SUSE Studio™ and SUSE Manager to help you build and manage your cloud applications.

With the availability of SUSE OpenStack Cloud 5, and based on feedback from partners, vendors and customers deploying OpenStack, it was time to release a new and improved Admin Appliance. This new image incorporates the most common use cases and is flexible enough to add in other components such as SMT (Subscription Management Tool) and SUSE Customer Center registration, so you can keep your cloud infrastructure updated.

The creation of the SUSE OpenStack Cloud 5 Admin Appliance is intended to provide a quick and easy deployment. The partners and vendors we are working with find it useful to quickly test their applications in SUSE OpenStack Cloud and validate their use case. For customers it has become a great tool for deploying production private clouds based on OpenStack.

With version 5.0.x you can proceed with the following to get moving now with OpenStack.

It’s important that you start by reading and understanding the Deployment Guide before proceeding. This will give you some insight into the requirements and an overall understanding of what is involved in deploying your own private cloud.

As a companion to the Deployment Guide we have provided a questionnaire that will help you answer and organize the critical steps talked about in the Deployment Guide.

To help you get moving quickly, the SUSE Cloud OpenStack Admin Appliance Guide provides instructions on using the appliance and details a step-by-step installation.

The most

The next (virtual) Ceph Developer Summit is coming. The agenda has finally been announced for the 1st and 2nd of July 2015. The first day starts at 07:00 PDT (16:00 CEST) and the second day starts at 18:00 PDT on 2 July, or rather 03:00 CEST on 3 July.

I have submitted a blueprint to discuss, and afterwards start working on, the "CephX brute-force protection through auto-blacklisting" topic from my talk in Vancouver to improve the security of Ceph. But there are many other interesting blueprints on the list. You can find the full agenda and the blueprints we will discuss here.


Roadmap questions answered

Deep thought and some additional core SUSE Linux Enterprise source code have given The openSUSE Project a path forward for future releases.

The change is so phenomenal that the project is building a whole new release.

Some people might be perplexed over the next regular release, but rather than bikeshedding the name over the next few months, for the moment, we will call it openSUSE: 42 after its project name in the Open Build Service. And we are going to explain the roadmap for this regular release.

openSUSE 42 is scheduled to be released around SUSECon, which is in Amsterdam this year from Nov. 2 – 6.

Unlike old releases, future releases of “42” are expected to align with the releases of SLE service packs and major releases.

There are about 2,000 packages in openSUSE 42 right now, said Stephan “Coolo” Kulow, release manager. Of course, many more are expected.

openSUSE 42 will be a long-term type of release with enduring updates and maintenance commitments by the community and SUSE.

Kulow said a milestone will be released soon.

“We have to come up with solutions as problems arise,” Kulow said.

There are currently no plans for live CDs, but he said to expect other media formats to be added later.


Thursday
25 June, 2015


Michael Meeks: 2015-06-25 Thursday.

21:00 UTC (member)

  • Up rather early; into London for a Westminster eForum on procurement, nice to see Whitehall in the sun. Met some interesting guys - curious to see Microsoft's view of the five year future including only rather trivial combinations of today's technology, whereas a question from Agiliysys focused on Artificial Intelligence automating bureaucratic tasks. Which is right I wonder.



After two days, the first "Deutsche OpenStack Tage" (German OpenStack Days) ended. There were many interesting presentations and discussions on OpenStack and also Ceph topics. You can find the slides from my talk about "Ceph in a security critical OpenStack Cloud" on slideshare.

Wednesday
24 June, 2015


Michael Meeks: 2015-06-24 Wednesday.

21:00 UTC (member)

  • Mail chew, and more mail - does it never end ? lunch with Tony. More meetings, hackery etc. TDF board call. Fixed a trivial bug, isolated another.

Tuesday
23 June, 2015


Michael Meeks: 2015-06-23 Tuesday.

21:00 UTC (member)

  • Into London for the Public Sector Show with Tim - not an ideal event - missed a couple of interesting people. Train home early. Mail chew on the train, more work at home, lovely dinner with the family. Bible study with Arun in the evening.


After more than two years of development, 15 pre-releases and more than 2000 commits, we proudly present release 2.0 of the DocBook Authoring and Publishing Suite, in short DAPS 2.0.

DAPS lets you publish your DocBook 4 or DocBook 5 XML sources in various output formats such as HTML, PDF, ePUB, man pages or ASCII with a single command. It is perfectly suited for large documentation projects, providing profiling support and packaging tools. DAPS supports authors with a link checker, validator, spellchecker, and editor macros. DAPS exclusively runs on Linux.
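
To illustrate that single command: assuming a project whose DocBook configuration file is DC-myguide (a hypothetical name), building different output formats looks like this:

# build HTML and PDF output for the book described by DC-myguide
daps -d DC-myguide html
daps -d DC-myguide pdf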

Download & Installation

For download and installation instructions refer to https://github.com/openSUSE/daps/blob/master/INSTALL.adoc
Highlights of the DAPS 2.0 release include:

  • fully supports DocBook 5 (production ready)
  • daps_autobuild for automatically building and releasing books from different sources
  • support for EPUB 3 and Amazon .mobi format
  • default HTML output is XHTML, also supports HTML5
  • now supports XSLT processor saxon6 (in addition to xsltproc)
  • improved “scriptability”
  • properly handles CSS, JavaScript and images for HTML and EPUB builds (via a “static/” directory in the respective stylesheet folder)
  • added support for JPG images
  • supports all DocBook profiling attributes
  • improved performance by only loading makefiles that are needed for the given subcommand
  • added a comprehensive test suite to ensure better code quality when releasing
  • tested on Debian Wheezy, Fedora 20/21, openSUSE 13.x, SLE 12, and Ubuntu 14.10.

Please note that this DAPS release does not support webhelp. It is planned to re-add webhelp support with DAPS 2.1.

For a complete Changelog refer to https://github.com/openSUSE/daps/blob/master/ChangeLog

Support

If you have got questions regarding DAPS, please use the discussion forum at https://sourceforge.net/p/daps/discussion/General/. We will do our best to help.

Bug Reports

To report bugs or file enhancement issues, use the issue Tracker at https://github.com/openSUSE/daps/issues.

The DAPS Project

DAPS is developed by the SUSE Linux documentation team and used to generate the product documentation for all SUSE Linux products. However, it is not exclusively tailored to SUSE documentation, but supports any documentation written in DocBook.
DAPS has been tested on Debian Wheezy, Fedora 20/21, openSUSE 13.x, SLE 12, and Ubuntu 14.10.

The DAPS project moved from SourceForge to GitHub and is now available at https://opensuse.github.io/daps/.


Monday
22 June, 2015


Michael Meeks: 2015-06-22 Monday.

21:00 UTC (member)

  • Up early, music practices, mail chew, fixed embedded font bits for the headless backend. Endless meetings and 1:1's much of the day, worked on data for customers.
  • Pleased to see GNOME Documents integration starting to look screen-cast-able, thanks to great work from our GSOC student Pranav (with Miklos' mentoring).

Klaas Freitag: ownCloud Chunking NG

11:42 UTC (member)

Recently Thomas and I met in person and thought about an alternative approach to bring our big file chunking to the next level. “Big file chunking” is ownCloud’s algorithm for uploading huge files to ownCloud with the clients.

This is the first of three little blog posts in which we want to present the idea and get your feedback. This is for open discussion, nothing is set in stone so far.

What is the downside of the current approach? Well, the current algorithm needs a lot of distributed knowledge between server and client to work: the naming scheme of the part files, semi-secret headers, implicit knowledge. In addition to that, due to the character of the algorithm, the server code is spread too much over the whole code base, which makes maintenance difficult.

This situation could be improved with the following approach.

To handle chunked uploads, there will be a new WebDAV route, called remote.php/uploads.
All uploads of files larger than the chunk size will go through this route.

In a nutshell, an upload of a big file will happen as parts to a directory under that new route. The client creates it through the new route. This initiates a new upload. If the directory could be created successfully, the client starts to upload chunks of the original file into that directory. The sequence of the chunks is set by the names of the chunk files created in the directory. Once all chunks are uploaded, the client submits a MOVE request that renames the chunk upload directory to the target file.

Here is a pseudo code description of the sequence:

1. Client creates an upload directory with a self-chosen name (ideally a numeric upload id):

MKCOL remote.php/uploads/upload-id

2. Client sends a chunk:

PUT remote.php/uploads/upload-id/chunk-id

3. Client repeats 2. until all chunks have successfully been uploaded
4. Client finalizes the upload:

MOVE remote.php/uploads/upload-id /path/to/target-file

5. The MOVE sends the ETag that is supposed to be overwritten in the request header to the server. The server returns the new ETag and FileID as reply headers of the MOVE.

During the upload, the client can retrieve the current state of the upload by a PROPFIND request on the upload directory. The result will be a listing of all chunks that are already available on the server, with metadata such as mtime, checksum and size.

If the server decides to remove an upload, e.g. because it hasn’t been active for a while, it is free to remove the entire upload directory and return status 404 if a client tries to upload to it. Also, a client is allowed to remove the entire upload directory to cancel an upload.
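
Because this is plain WebDAV, the proposed sequence can be sketched with curl; the host, credentials, destination path and the upload id 4711 below are purely illustrative, and the whole flow is still a proposal under discussion:

# 1. create the upload directory for a new big-file upload
curl -u user:pass -X MKCOL https://cloud.example.com/remote.php/uploads/4711

# 2. upload the chunks; the chunk file names define the order
curl -u user:pass -T chunk-0001 https://cloud.example.com/remote.php/uploads/4711/0001
curl -u user:pass -T chunk-0002 https://cloud.example.com/remote.php/uploads/4711/0002

# optional: list the chunks that have already arrived on the server
curl -u user:pass -X PROPFIND https://cloud.example.com/remote.php/uploads/4711

# 3. finalize: MOVE the chunk directory onto the target file
curl -u user:pass -X MOVE \
     -H "Destination: https://cloud.example.com/remote.php/webdav/path/to/target-file" \
     https://cloud.example.com/remote.php/uploads/4711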

An upload is finalized by the MOVE request. Note that it’s a MOVE of a directory onto a single file. This operation is not supported in normal file systems, but we think that in this case it has a nice, descriptive meaning. A MOVE


Sunday
21 June, 2015


Michael Meeks: 2015-06-21 Sunday.

21:00 UTC (member)

  • Up early; off to NCC to practise with Jackie & Peter. Played in the service, Tony spoke about service. Good to catch up with many afterwards. Back for a big BBQ lunch with various creative Father's Day cards - lovely.
  • Practiced various quartets; needs individual practice too it seems. Out for a family bike-ride, fun. Played lego house building, read the start of Rendezvous with Rama to babes generally, long stories & bed early.

Saturday
20 June, 2015


Michael Meeks: 2015-06-20 Saturday.

21:00 UTC (member)

  • Up lateish; fixed a LibreOffice bug, pushed last night's fix; breakfast. Carpet & floor-boards up to see if there is a gas-connection near the fireplace, eventually located one. Out to buy and fit a new toilet seat - now with magic dash-pot to retard closing: hopefully more robust than the previous version.
  • Emily over in the afternoon, watched Heaven is for real - spiritual candy-floss; enjoyable - while building your life on something more rocky. Late-night bug triage / chasing.


Motivations


Creating high-quality content takes time. A lot of people write nowadays, but very few are writers. In the software industry, most of those who write very well are on the marketing side, not on the technical side.

The impact of high-quality content is very high over time. Engineers and other profiles related to technology tend to underestimate this fact. When approaching the creation of content, their first reaction is to think about the effort it takes, not the impact. Marketers have exactly the opposite view. They tend to focus on the short-term impact.

Successful organizations have something in common. They put a lot of effort and energy into reporting efficiently across the entire organization, not just vertically but horizontally, not just internally but also externally. Knowing what others around you are doing, their goals, motivations and progress, is as important as communicating results.

One of the sentences that I never stop repeating is that a good team gets further than a bunch of rock stars. I think that a collective approach to content creation provides better results in general, in the mid term, than individual ones, especially if we consider that Free Software is mainstream nowadays. There are so many people doing incredible things out there that it is becoming very hard to get attention...

Technology is everywhere. Everybody is interested in it. We all understand that it has a significant impact on our lives and will have even more in the future. That doesn't mean everybody understands it. For many of us who work in the software industry, speaking a language understandable to wider audiences does not come naturally or simply through practising. It requires learning/training.

Very often it is not enough to create something outstanding once in a while to be widely recognized. The dark work counts as much as the work that shines. The hows and whys are relevant. Reputation is not in direct relation with popularity and short-term successes. Being recognized for your work is an everyday task, a mid-term achievement. The good thing about reputation is that once you achieve it, the impact of your following actions multiplies.

We need to remember that code is meant to die, to disappear, to be replaced by better code, faster code, simpler code. A lot of the work we do ends up nowhere. Neither fact, and they are not restricted to software, means that creating that code or project was not worth it. Creating good content helps increase the lifetime of our work, especially if we do not restrict it to results.

All of the above are some of the motivations that drive me to promote the creation of a team blog wherever I work. Sometimes I succeed and sometimes I don't, obviously.

What is a team blog for me? 

  • It is a team effort. Each post should be led by a person, an author, but created by the team.
  • It focuses on what the team/group do, not on what


After fifty-odd years my big toe toenails decided that deforming themselves would be a jolly wheeze. Like most men I figured leaving it was a brilliant idea and that the self-diagnosis “Tell you what, I won’t cut them and they’ll grow out!” was the answer. The nails were becoming pretty lethal, cutting their own escape routes from any pair of socks that dared to try and contain them. After getting through most of Marks and Spencer’s sock inventory I decided enough was enough and visited my doctor's for the first time since 2011. After being told “You’re a little overweight.”, which was a tad cheeky given he hadn’t even weighed me, I was given an appointment at the Podiatry clinic to have the offending nails removed.

The waiting area left a lot to be desired; effectively it was some seats against the wall outside the lift! My daughter and I sat there, not without some trepidation I hasten to add, when the air was pierced by what was evidently a girl screaming “Aaaaaaarrrrrrrrrrrgh”. What little blood was sloshing around in my cheeks left my face and I assumed the pose of a scared rabbit caught in the headlights. The girl in question appeared a few minutes later sporting a large bandage on her big toe but clearly relieved it was all over. “Mr Cannon?” I tried to pretend that my name was in fact Yul Brynner, but the Podiatrist wasn’t fooled because he’d met me before. A very nice nurse with cherry red hair, apparently formerly a Goth, moved towards my toes with what looked like a whale harpoon! “Now Mr Cannon, I’m going to inject your toes in four places, here, here….” I held up my hand, “Any chance I can have gas?” I’m not going to lie, the first three smarted a bit; the nurse had said I could swear if I needed to but I’d promised myself I’d be a brave little soldier. “I’m just going to do under the toe now, it is a fairly big nerve.” said the nurse. “SHIIIIIIITING HELL!” Sadly I let myself down.

The removal of the nails took no time at all and I felt nothing. The Podiatrist kept asking if I wanted to look, sadist. I politely refused, although I did agree to look at the nails after he had used half of the cotton industry’s yearly output on my toes. “Here you are then Mr Cannon, oh let me just remove that bit of flesh before I show you your nails.” While choking back the contents of my breakfast he proceeded to waft said gnarled-up talons in front of me. “Now here’s your paperwork, go and see your practice nurse tomorrow and she will change the dressing and give you some dressings so you can look after your wounds for about two to three weeks.” he said. Now this part is important

Older blog entries ->