Operating system updates for pet servers

June 26th, 2017 | Opinion, Tips and Tutorials

In a small company with a small team handling a few solid products, a number of
spin-off projects for customers, and fleets of cloud machines provisioned to
serve them, automation is key, though not always a silver bullet. Truly
complete automation across all technology operations is a utopia, or a fake.
Never trust such claims.

Any operating system update can break the software, the operating system
itself, or even the underlying server in extreme cases. If you have not yet run
into such a case, you had better start applying operating system updates.

Completely automated information delivery is a different and healthy thing, for
the most part. Anything that reduces the fog of war in a modern company is key
to success, and perhaps it is not recommended as much as it should be, compared
to the continuous integration and continuous delivery groupthink of the second
half of the 2010s (not to speak of the justified advocacy of the first half).
The current madness of team and collaboration web applications is good. Proper
resource discovery policies are gold.

Of course, modern platform thinking and current business constraints force us
to treat servers as cattle, like everybody else, so some of the information I
am giving will not apply to servers such as web servers, application servers,
database read replicas, or, under certain safe k-failure conditions, elements
of a read-write database cluster. I will refer to all of these jointly as the
pack.

Think instead of a server that is ultimately responsible for the complete build
of absolutely everything. Or a temporary architectural single point of failure
kept for convenience. Or something not critical at all, perhaps not even
critical enough to be in the build and server orchestration configurations,
like to-be-expired procedures for expiring customers. Think about the phased
death of biicode. Few companies disappear from dusk till dawn; fewer technology
platforms; even fewer technology platforms intended to become a build toolchain
component. Bear in mind that every single company is doomed to disappear, or
else to be taken over or merged. I just hope not to be there when it happens.

The default action for the pack is simply to bring up a fresh machine and wipe
out the previous one. For each case meriting distinction from the pack, the
update patching policy must strike a proper balance between reaction speed to
the dreadful menaces, platform stability guarantees, low-traffic hours (if
any), and operator time and involvement in the assessed stages of the
proceedings (if any).

It does not really matter which degree of operator involvement in patch
applying you want to use; like it or not, either way you are going to find out
what your mileage needs, and you are going to do whatever is cheaper for your
mileage.

Proper information delivery procedures are, instead, a must. They are actually
quite simple; here is a snippet for server information delivery.

You need a proper server discovery procedure first, to populate your server
portfolio. That part is up to you, and out of the scope of this article.

$srv_portfolio GETS RESULT OF CALL TO $populate_srv_portfolio

FOR $server IN $srv_portfolio DO

  (*   '$type' is probably kind of an enumerated type, pun intended. *)
  $type GETS RESULT OF CALL TO $assess_type PASSING $server

  CALL TO $stack_msg PASSING RESULT OF
      CALL TO $check_for_updates PASSING $server AND $type
DONE

(*   Here is where you do IRC instead of Slack,
 * for it is free, so you will not be sold out. *)
CALL TO $deliver_msg
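The same loop, as a minimal Python sketch. Every helper here
(`populate_srv_portfolio`, `assess_type`, `check_for_updates`, `deliver`) is a
hypothetical placeholder you would implement against your own infrastructure,
not a real library:

```python
# Minimal sketch of the delivery loop above; all helpers are placeholders.

def populate_srv_portfolio():
    # Server discovery is out of scope; a static list stands in for it here.
    return ["web-01", "build-01", "db-ro-02"]

def assess_type(server):
    # '$type' is probably kind of an enumerated type, pun intended.
    return "pack" if server.startswith(("web-", "db-ro-")) else "pet"

def check_for_updates(server, srv_type):
    # A real check would query the package manager on the remote host.
    return f"{server} ({srv_type}): 0 pending updates"

def deliver(messages):
    # Here is where you do IRC instead of Slack.
    return "\n".join(messages)

messages = []
for server in populate_srv_portfolio():
    srv_type = assess_type(server)
    messages.append(check_for_updates(server, srv_type))

report = deliver(messages)
```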

That is all. Keep it simple. Happy Pride ’17, stay safe, and do not take
anything for granted. Please do the challenge if you have not already.

U.P

Continue reading this article

Why and how to use UTC

February 13th, 2017 | Opinion, Tips and Tutorials

And other intelligent devices and inventions, like using UTF-8 whenever it can
be used, and why we should not forget, nor close ourselves to, other formats
like official time or the ISO-8859 encodings.

Official time zones have been written about here previously. Their use has been
deprecated. They have been ridiculed as a standard based upon familiarity with
a system that was a great advance in its day, but is now not only useless, but
a drag on technological progress and human health. That is a literal reading.

What is actually intended is simply the internalization that official time is
an arbitrary instrument from a time when it was necessary to solve several
problems that today can be solved better by alternative procedures; therefore
its use could be stopped, if we wanted. It will be stopped, when the proper
time comes. That internalization, that openness of mind, is an important
exercise, one that helps overcome the imminent generational gap, when
confronting time management problems with brains of different plasticity,
between people already in the workforce today and those who are now little
children (natives of the perpetual war on terrorism, natives of mass
surveillance and geolocation, natives of the ubiquitous ‘of things’ Internet).

This internalization does not mean the action to be performed should be
radical; maybe we even prefer to anchor to our customs, having crystallized the
part of our brains that manages them. But now we are warned, and able to
understand the people who, many years from now, will criticize this little part
of the system.

GMT (Greenwich Mean Time) is one of those customs, but a special one, different
from the rest. Without joining the discussion of whether GMT is technologically
a redefinition of UTC, or whether it is UTC that is defined as a series of
operations performed over GMT, in its essence UTC is the way it is very much
because GMT is the way it is, and GMT is the way it is because the time
standardization process started when the United Kingdom (where Greenwich is)
was the greatest of the great powers.

As introduced, UTC is very important. In a globalized world, time
synchronization matters more than ever before. When UTC is not used, problems
often arise. Consider some potential cases:

The average consumer of data from social media companies’ APIs most probably
will not care at what time of day information gets published; think of the
prototypical online news (or whatever) source. Now think about the typical
naive but sophisticated startup focused on applying machine learning procedures
to social media data. Companies like this are something of a plague, and they
turn up in numbers just by scratching the Internet. Say they are the remaining
20% of social media API consumers who actually do care about the local time of
publishing. That kind of initiative needs some procedure to turn UTC-encoded
moments of airing into local time.
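Such a conversion is straightforward with the Python standard library; a
minimal sketch, where the timestamp and the Europe/Athens zone are invented for
illustration:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # standard library since Python 3.9

# A moment of airing, UTC encoded, as an API could serve it.
aired_utc = datetime(2017, 2, 13, 15, 30, tzinfo=timezone.utc)

# Turn it into the publisher's local time using an official time zone.
aired_athens = aired_utc.astimezone(ZoneInfo("Europe/Athens"))

print(aired_athens.isoformat())  # 2017-02-13T17:30:00+02:00
```

The official time zone database does the heavy lifting, including any daylight
saving rules in force at that date.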

Guess what? It is not only the publisher’s local time that is missing from
social media APIs. Even though there can be no doubt that global social media
work embracing UTC internally, paradoxically UTC is completely missing from
their surface APIs. Yet UTC is actually a big part of the solution to this
problem, whatever the procedures. It turns out the so hated, damned and
obsolete official time zones, disrespectfully anti-human, can be leveraged for
some profit.

Enough about the good part of the good story. There are potential
not-so-successful cases. Have a look at this one: Greeks unexpectedly using
local time (UTC+02:00).

Winter came at last, but the worst was yet to come. Some people in a work group
in Greece send data to another work group, including dates not labeled with any
time zone. Everyone assumes that, in a corporate environment, data
communications obviously use UTC, until the day comes when the data has to be
transmitted in real time, and the receivers discover with horror that it
arrives one hour ahead of CET, their local time zone at that moment, instead of
one hour behind, as would correspond to UTC, which everyone in this random
company would have expected as the only reasonable default. That is two hours
ahead of UTC: the time zone on which all the core business logic software is
based, zealously tied to UTC during development.

So a communications problem happened, and some people had been sending data in
their own local time for a while, and that means lost time (pun not intended)
if it cannot be recovered. To recover, it must be taken into account not only
that the senders’ time differs from the receivers’, but that the data is tied
to each implicated time zone and its daylight saving scheme, which are driven
by governments, so they might not match; here human (in the broad sense)
anarchism joins the play. Even so, all of this is still the good part of the
potentially bad story.
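A sketch of that recovery step, under the assumption that the senders’ zone is
known to be Europe/Athens; `ZoneInfo` applies the government-driven daylight
saving rules for each date:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# A naive timestamp the senders emitted, silently in their local time.
naive = datetime(2017, 2, 13, 17, 30)

# Recovery: re-label it with the senders' zone, then convert to the UTC
# everyone expected in the first place.
recovered_utc = naive.replace(tzinfo=ZoneInfo("Europe/Athens")).astimezone(timezone.utc)

print(recovered_utc.isoformat())  # 2017-02-13T15:30:00+00:00
```

Note the recovery only works while you still know, out of band, which zone the
data was really written in.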

Now for the bad part of the bad story. A system in Madrid, commissioning a
routing proposal to a component outsourced to Greece, must move an autonomous
vehicle in Germany, but the vehicle starts moving with a five-minute delay. It
turns out an engineer supervising the operation has restarted the application
with a patch to override dates, because the car was discarding frames it
received referencing a future it did not even know would exist. That would be
the last time that team did not use assertions over data.
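Such an assertion over data is cheap to add; a hypothetical sketch, where the
five-second tolerance is an assumed parameter, not anything from the story:

```python
from datetime import datetime, timedelta, timezone

MAX_CLOCK_SKEW = timedelta(seconds=5)  # assumed tolerance, tune per deployment

def validate_frame_time(frame_time, now=None):
    """Reject frames referencing a future that may never exist."""
    now = now or datetime.now(timezone.utc)
    assert frame_time - now <= MAX_CLOCK_SKEW, (
        f"frame timestamp {frame_time.isoformat()} is in the future"
    )

# A frame stamped two hours ahead (local time sent as if it were UTC) fails fast:
now = datetime(2017, 2, 13, 15, 30, tzinfo=timezone.utc)
bad_frame = now + timedelta(hours=2)
try:
    validate_frame_time(bad_frame, now=now)
    raised = False
except AssertionError:
    raised = True
```

Failing fast at the boundary beats silently discarding frames deep inside the
vehicle’s control loop.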

All this mess applies exactly the same to the ISO-8859 character encoding sets,
with potentially terrifying stories about chains of badly applied character
encoding conversion stages, very similar in essence to the cases introduced
above.
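One badly applied conversion stage is easy to reproduce; a minimal Python
sketch, decoding UTF-8 bytes as if they were ISO-8859-1:

```python
# UTF-8 bytes of a Spanish word, read with the wrong character set.
original = "Opinión"
utf8_bytes = original.encode("utf-8")

# One badly applied conversion stage: treating UTF-8 data as ISO-8859-1.
garbled = utf8_bytes.decode("iso-8859-1")
print(garbled)  # OpiniÃ³n

# Reversing that single stage recovers the text; chaining further wrong
# stages on top of it can destroy information for good.
recovered = garbled.encode("iso-8859-1").decode("utf-8")
```

Exactly as with time zones, recovery depends on knowing which wrong conversion
was applied, and in which order.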

U.P

Continue reading this article