Friday, July 31, 2009

Enhancing thought

What's on today?

That's a hard question. If you are looking for a specific event or happening somewhere,
you have to know it by name. You can, however, land on the right page by accident or through
search engines, and discover it that way. What's tricky in the virtual world is
that our knowledge of things depends on our navigational capability and capacity.

Capability means the skills to use the internet. The woven web is not perfect yet, and
it will probably always have some of these inherent structural problems. Think of
the web as your ordinary asphalt road: it's worn and never quite up to date, yet
you can drive on it.

Capacity refers to the idea that there's only a certain amount of information you
can manage. For example, in sociology there is Dunbar's number, roughly 150, a measure of the "cognitive limit to the number of individuals with whom any one person can maintain stable relationships" (Wikipedia).

There are probably similar kinds of limitations to the processing that happens in our
brains, though people vary a lot. I would think that information processing
capacity is a sum of several factors: memory, interest, motivation, physical state, experience, environment, and so on.

We're living in a highly information-centric world. There are a lot of really nice systems one can use. Spreadsheets are an invaluable help (they were once a breakthrough application for business people).
Search engines are everyone's basic tools. All kinds of data exist on the web.

How I would tackle the information age is by starting to think over, from scratch, what kinds of elementary routines could be automated. For example, it often happens that
you're in a flow, writing an article, coding, or reading something. Then there's a
sudden stop: you have to do a calculation. Just figuring out how to get a calculator
takes time. Of course you can spawn Calc in Windows or run Excel, or any other
mathematical tool for that matter. But what if the maths would just come into your
mind? The answer would be there. You wouldn't have to put conscious effort into it!

Action is based on knowledge and intuition. We as humans act only when we have direction
from the brain; there's an action potential coming from our central nervous system telling us: do this, do that. What the brain wants is some level of certainty about the outcome of the action. In other words, human behaviour is subjectively rational.

I'm sure cognition will be enhanced somehow, non-invasively, in the future.
Some primary operations could be "cached" or precalculated without you even knowing it was
done. You just feel or see the results; the work is bypassed, and thus your
capability to figure things out and think will be greater.

Thursday, July 30, 2009

Straining the brain - new interfaces

What is the modern equivalent of being a stronger individual?

I think it's the capability and will to command more user interfaces, because
there's always a utility value to a given interface: you can get things done
with less (physical) effort, at a lower price, from a better pool of choices, and so on.

So extending your mind with new services is generally good for you.

But acquiring or adopting a user interface - i.e. subscribing to a service, or
registering on a new web page - requires cognitive processing. And sometimes
there's the "not yet another one" thought; you just aren't sure whether
your brain can take one more interface that has to be learned and kept up with.

Facebook is a great example. Without it, I wouldn't know even half the social
gossip and, in fact, hard knowledge (news) of the world. I'd be quite in the dark,
to be honest. It's just such a powerful invention. The latest addition I
discovered is an application called Zimride. I can search for trips, driven
by friends and strangers, in my neighbourhood. When everything matches, we
can carpool. In exchange for petrol money, I get a ride I wouldn't otherwise have.
It's great!

The applications themselves enable or limit our abilities. If there's
a bug - say, a web service doesn't work well for a certain set of browsers - then
it has an immediate impact on the whole user population. Thus it becomes
even more important that operations run smoothly.

But a hindrance doesn't have to be a bug; it may just as well be the very design
of an application. I fill out my work time in a web app that doesn't let
me enter several days as a series; instead, each and every day has to be
punched in individually - meaning that I fill in the start time, end time, kilometres,
description, and the date itself. So instead of filling in 20 similar work days
in one go (taking 30 seconds), I fill them in piece by piece, taking
20 times as long.
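
Just to make the point concrete, here is a minimal sketch of what a batch-entry feature could look like. The time-tracking app, its fields, and the submit_day helper are all hypothetical; the point is only that one template plus a date range replaces twenty identical forms.

    from datetime import date, timedelta

    def submit_day(entry):
        """Hypothetical call to the time-tracking app (placeholder only)."""
        print("submitting:", entry)

    def fill_series(first_day, workdays, template):
        """Replicate one template entry over consecutive weekdays."""
        day = first_day
        submitted = 0
        while submitted < workdays:
            if day.weekday() < 5:  # skip Saturday and Sunday
                submit_day(dict(template, date=day.isoformat()))
                submitted += 1
            day += timedelta(days=1)

    # Twenty similar work days entered in one go.
    fill_series(
        first_day=date(2009, 7, 1),
        workdays=20,
        template={"start": "08:00", "end": "16:00", "km": 42, "description": "normal work day"},
    )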

Perhaps the current development pace is a little unhealthy, and there is a
seemingly unlimited challenge in making services compatible with each other as well. The true
power is in the possibility of combining knowledge and functionality
from one application to another. But this combining of web services is
just beginning; what may be coming will probably surprise us all.

Google - the satellite - and opportunistic business possibilities

I started to think, in the abstract, about what Google represents. It's kind of
like a satellite. Google can "see" the world wide web from 50,000 feet. It
can zoom in indefinitely, yet handle the overall picture as well. What
Google still lacks is a certain time resolution; this is a technological
challenge.

Satellites were probably a really strange thing in the beginning. Today
they're used for all kinds of environmental research, and they provide
us with GPS positioning. They have become very everyday technology. Our
maps are updated using satellite technology.

I was standing at a motorway bus stop today, trying to listen to my
iPod. The ambient noise was overwhelming. Cars were zooming by at
around 100 km/h. Approximately 1,000-1,200 cars passed me before my bus
came. What if just one of them had picked me up? But why on earth would it?
Because
a) we shared a similar route
b) it would be useful
c) I would pay for the petrol
d) the car would be better utilized
e) we could become friends, or at least exchange refreshing thoughts

Tick all of the above. :-)

And check out Zimride.

P.S. it's also on Facebook, as an app.

Wednesday, July 29, 2009

Traffic in the year 2015 (fictional scenario)

It's pretty easy to predict the future. Just use your intuition and go!

Trends
- traffic develops in two directions: individualization, but also more effective
collective traffic (mass transport)
- new forms of transportation emerge: slow, low-power cars, thanks to all the green movement going on now as we approach the year 2010.

Mass transport will get more comfortable and efficient. The comfort factor is
satisfied partly by a greatly increased availability of real-time traffic
information. People can track on their mobiles the movement of their
buses, trams, subways and other vehicles. All of these are tracked
by GPS or other technology, and their location is known to
sub-meter accuracy. So the guesswork is taken out of using mass transit.

People want to be king in their car. This is true for at least a portion
of us. The car will be equipped with more intelligent and rewarding
technology. Cars themselves will already be quite high-tech when they're
shipped from the factory.

What is it wise to put into the car, and what can we rely on the network to supply?
I was thinking about an in-car weather sensor system, but perhaps that's
overkill. Since the 4G network will probably be obsolete by 2015, replaced by an even better
network, we can be sure that there are plenty of bits available to the car.
It can download a constantly updated high-accuracy weather map along with whatever
other data is needed.

What do you need on the road?
- news
- weather
- traffic data
- social network services; you want to meet others, especially if you're traveling
in a more remote or dangerous area
- maintenance
- police (in case there's a disputed accident)

Cars will be equipped with always-on cameras that record everything you see,
so insurance disagreements should become a thing of the past.

Navigation systems get better. Search engine companies start to develop their
own navigation software, so cars have devices in which the driver or passengers
can search for restaurants, events, other people - basically, whatever is available
on the internet can be plotted on the navigator's globe.

With this information at hand, road safety increases somewhat.
Accident information is quickly relayed over a peer network; people can enter
an accident scene into the navigator and quickly disseminate it
to other drivers. Emergency call centers also update the information as
it arrives.

Individualization and the need to equip
* people want traveling to be as comfortable as possible
* time is essential, yet some proportion of the population is ready to slow down a bit
* automatic safety devices increase both within the vehicle and in traffic controllers
* land traffic becomes more like sea traffic: fluid

Traffic controllers can create bumping zones, so a car gets instructions
to slow down when approaching a red light, and especially a traffic
light with people nearby. The traffic lights can have ambient
heat (infrared) sensors which detect the presence of people, and can thus
quickly make efficient decisions about signalling.

All traffic lights communicate with a central system so that the whole traffic ecosystem of a city is optimal all the time. The lights are intelligent, so there's no more idling - waiting for no-one.
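
As a thought experiment, the decision logic for a single intersection could look roughly like the sketch below. The sensor inputs, thresholds, and the advisory message are all invented for illustration; this is not any real traffic-control system, just a way to make the idea of sensor-driven signalling concrete.

    # Toy sketch of sensor-driven signalling (all inputs are hypothetical).

    def decide_signal(pedestrians_waiting, cars_queued, seconds_since_change):
        """Pick the next state for one traffic light."""
        if pedestrians_waiting == 0 and cars_queued == 0:
            return "green-for-main-road"   # nobody is waiting: no pointless idling
        if pedestrians_waiting > 0 and seconds_since_change > 10:
            return "red-for-cars"          # give the crossing to the pedestrians
        return "keep-current"

    def advisory_for_car(light_state, distance_m):
        """Message a 'bumping zone' could broadcast to an approaching car."""
        if light_state == "red-for-cars" and distance_m < 200:
            return "slow down, people near the crossing ahead"
        return "proceed"

    print(decide_signal(pedestrians_waiting=3, cars_queued=5, seconds_since_change=25))
    print(advisory_for_car("red-for-cars", distance_m=120))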

Technical advances
* friction reduction
* lower emissions
* biofuel consumption instead of fossil fuels

One specific thing I have to envision is the new low-power car. This tricycle
has a very aerodynamic design and can take two people. It looks a bit like
the Spirit of America, the record-breaking three-wheeler driven by Craig Breedlove.
This vehicle will be as optimal as possible when it comes to energy efficiency. It could be a kind of trendsetter for trendy commuters who nevertheless want their own space.

Monday, July 27, 2009

Faster preliminary GPS location by syncing devices

Since there will come a time when we have at least half a dozen Bluetooth-enabled devices with GPS capabilities within any given 100 m radius, why not use this concentration
of information to benefit everyone?

Usually a single device has some trouble getting a GPS signal,
and initialization takes time; sometimes it's just 5-10 seconds, but in the worst case it may well
be on the order of several minutes. The idea is that devices could poll the others for information about the correct location. A single device could then
average the given answers and determine its location to a certain degree of certainty and
resolution. By filtering out the most deviant answers, devices could protect themselves
from being fooled.
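
A minimal sketch of that averaging-and-filtering step, assuming each nearby device simply reports a (latitude, longitude) estimate. The two-standard-deviation rejection rule is my own placeholder, not part of any GPS or Bluetooth specification.

    from statistics import mean, pstdev

    def estimate_position(reports, max_sigma=2.0):
        """Average neighbours' (lat, lon) reports, discarding the most deviant ones."""
        lats = [lat for lat, _ in reports]
        lons = [lon for _, lon in reports]
        mlat, mlon = mean(lats), mean(lons)
        slat = pstdev(lats) or 1e-9
        slon = pstdev(lons) or 1e-9

        kept = [
            (lat, lon) for lat, lon in reports
            if abs(lat - mlat) <= max_sigma * slat and abs(lon - mlon) <= max_sigma * slon
        ]
        if not kept:                  # everything looked deviant: fall back to the raw mean
            return mlat, mlon
        return mean(lat for lat, _ in kept), mean(lon for _, lon in kept)

    # Five honest neighbours and one device trying to fool us.
    reports = [
        (60.1699, 24.9384), (60.1701, 24.9380), (60.1698, 24.9389),
        (60.1700, 24.9386), (60.1702, 24.9383), (51.5074, -0.1278),
    ]
    print(estimate_position(reports))   # close to (60.1700, 24.9384)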

Helsinki and the possibilities of 4G

It was lunch time. I took a walk through the Helsinki city center and stopped for a moment just to look and think. Pressing the button to make my camera record the moment, I wondered how a technology like 4G would change our city.

People, traffic signs, rails, concrete, smiles, curiosity, tourists, shops, signs, ads, and birds. Objects either stationary or moving about in the sweet city porridge. What 4G technology essentially brings is more power to consume information. It enables people to use much more powerful mobile services in the blink of an eye. Of course it remains to be seen how the implementation takes place, and whether there are technical problems with bandwidth quality or other factors. History tells us that at first there will be. But in the long run, the diamond will be polished.

What services, you ask? A simple and good example is the map. You can always pick tourists
out of a crowd, since they wave big paper maps in front of them. It is a good appliance, in fact; nothing so far beats the legibility of a traditional map. Yet it always contains outdated information: once you've printed it out, the
data stays there and no longer reflects changes in the real environment. Electronic maps are completely different. They excel at providing up-to-date information.
They point you to interesting POIs and show you experiences other people have had.
You can even check pricing and contact information on an electronic map.

As mobile phones have become computer-like, they require updates. I was walking
the last 500 m to work today, trying to update an application on my phone. It took too much
time and trouble, so in the end I didn't manage to do it. With the expected increase
of 30-100x over current speeds, updating a program should be very easy.
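
A back-of-the-envelope calculation shows why. The update size and link speeds below are my own illustrative assumptions, not measured figures.

    # Rough transfer time for one app update at different link speeds.
    update_mb = 5                                        # assumed update size
    speeds_mbit = {"typical 3G": 1.0, "4G, ~30x": 30.0, "4G, ~100x": 100.0}

    for label, mbit in speeds_mbit.items():
        seconds = update_mb * 8 / mbit                   # megabytes -> megabits, divided by speed
        print(f"{label}: about {seconds:.1f} s")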

But the real question is, will people benefit from the technology? It does provide
all the keys to an improved user experience, including better speeds. It remains to be
seen. I'm certainly keeping my fingers crossed for this technology.

Sunday, July 26, 2009

Changing face of IT and what we do with it

There's a common phrase that technology (referring to IT) is moving forward so fast
that we can't follow it. There is a hint of truth in this. I've witnessed the evolution
of home computers from the 1980s onwards, and at some point I kind of lost interest
in the intricate details of what a certain desktop or laptop contained. Vendors pushed
so many products onto the market, both real blockbusters and less-than-important ones, that you had to
follow the news for at least an hour daily.

It was enough for me to know that a computer ran programs well. What kinds of chipsets, memory types,
or processors were inside felt like quite unnecessary information.
Well, quite often I'd run into situations where the computer I had bought did not
meet my expectations. Often some little piece of extra hardware
or some feature was missing, which caused problems in interconnecting devices or, for example, burning a CD. So I should have been a lot more conscientious. This certainly taught a lesson:
even though technology might seem very user-friendly and forgiving on the outside, it's
quite the contrary. The bigger the bets we're playing with, the more accurately we have to place
them. There's no "fuzzy logic" yet with hardware; it's all pure Boolean logic.

At one point the marketing of technology started to shift. It went from talking
figures to talking about what a certain technology can really achieve - which was fine, actually,
once you got over the feeling that the text contained a lot of syrup. As an example,
let's take a big chip manufacturer. What really happens behind the veil of R&D is that
bit handling inside the processor is made more efficient. The techniques involved
are pipelining, multithreading, locking, branch prediction, etc. Each of these is
a world of its own - really complex issues where academic and business research is
constantly churning out better results. But what the marketing division told the
chip maker was: "Talk about multimedia experiences. Talk about sharing nice memories
with photos. Talk about making life smooth." So it became hard to see what the
real, measurable differences between competing processors were.

In IT there's a need to create bigger spheres of abstraction. Administrators want
to control bigger systems with fewer keyclicks and mouse presses. It's kind
of megalomaniacal, to be honest. But the systems always remind us of the need to pay
attention to minute detail: often there's a single bottleneck that can be overlooked,
like a power supply, or the fact that somebody might accidentally trip over a
network cable and break the connection.

Computers and technology in general do make our lives smoother. They do it by combining
increasing raw power with process innovations. And by process I mean the very
life processes we use: messaging, calculating, gathering information,
making decisions, invoking services (pizza, taxi, ordering equipment, etc.) and
sharing knowledge of the world.

We've only begun the trip. Enjoy!

The coming 4G network looks good!

Mobile phones are basically computers that operate over radio networks. The base stations
are one important component that creates the universe of possibilities for what
mobiles can do. Depending on the generation of the system, different capabilities are
present.

The first mobile network, the so-called 1st generation, could relay speech from one mobile
phone to another. The speech was analog, so there was no analog-to-digital conversion.
Because the speech signal was completely analog and unencrypted, all conversations
could easily be eavesdropped on. The NMT network operated on the 900 MHz frequency. When a
standard is set, the frequencies usually have to be internationally agreed. In the coming GSM standard this standardization (excuse the language) was a key player. Without it,
the flexible use of mobile phones around the world would not have been possible.

The 2nd generation system was essentially digital. Speech was digitized, and the bits were
compressed using very sophisticated algorithms. Because of compression, a mobile phone
saved bandwidth: a mere 9.6 kbit/s, or 9,600 bits per second, is sufficient to relay
speech. In addition to compression, the signal was encrypted so that eavesdropping
is not possible.
The 2nd generation system was designed around open standards, which improved
market penetration. Since telecom companies could freely choose their equipment providers,
costs were kept at bay. GSM is a global success.

The 3G or third generation network increased the traffic capacity between a phone
and the base station. Of course, as with any new technology, mobile phones have
to be specifically equipped with 3G-enabling chips.

The 4G network is a collection of new technologies, but it essentially builds on 3G.
There are interesting benefits the new infrastructure can provide. A basic but often
needed one is increased transfer speed. And we're not talking just 2x speeds, but
an estimated 30x to 100x increase.

How has the culture of mobility evolved?
At first mobile phones were really bulky. A Motorola DynaTAC 8000X weighed
794 grams (28 ounces). Current phones in the year 2009 are a little over 80 grams;
take the Nokia 6120 Classic, which weighs 89 grams. There are probably models that weigh
even less. Only people with a "true need" had mobiles: police, firemen, people who
needed to receive orders or customer phone calls on the move. Then phones
began to move into the mainstream. Nowadays, in 2009, you are an exception
if you're not carrying a mobile phone in your pocket. Some people abstain for a reason.
Some politicians struggle with the amount of calls and messages they receive.

The culture of mobility is a wide concept. Both practical and social
factors have shaped it, and the social factors are by no means insignificant. In some
countries it may be considered extremely rude to talk on a mobile phone; in
others the context dictates a lot. It's considered bad behaviour to talk on a mobile
in the classroom, in meetings, in church, at the cinema, or in the presence of your spouse - in
general, wherever talking may interfere with people's concentration and mood.

The evolution of mobile phone use is tied to the technology that phones offer.
The first mobiles were very simple; they basically had a keypad and the ability to make
phone calls. The screen was black-and-white, with poor resolution. Viewing images
or doing anything else was considered unnecessary. Then the evolution of
phones started to bring whole new kinds of features:
* email
* calculators and other utility programs
* phonebooks (list of numbers and names)
* alarm clocks
* stopwatches
* video
* games
* camera
* instant messenger
* calendar
* playback of music
* video call capability
* GPS navigation
* viewing TV

Even now, as of 2009, the general population uses quite a scarce selection of services.
According to recent research, only a quarter of Finns use mobile services. These
services are considered difficult to install and use, and the payment methods
are viewed with suspicion.

The iPhone seems to be one great ice-breaker that has increased network and service
usage considerably. It's largely due to the immediate presentation of these
services in the phone's user interface, and its generally good usability.

Tuesday, July 7, 2009

Importance of making information flow

It would be really interesting to read research about the organizational patterns of information flow. I've encountered a place where we administrators have huge problems getting information. Major events happen beneath our feet, but we feel the force only when the rug is pulled out from under us. There's no communication beforehand whatsoever. And I can tell that it creates stress, uncertainty and a lack of faith in the systems - not just for the administrators, but for users too.

I think every sysadmin has a little bit of a control freak inside them. The computers are supposed to take orders and execute them ASAP. That is a fact; it's not changed by poor communications. But poor comms can leave you in a situation where, for example, you can't locate the proper computer to give commands to, because the host name isn't known to you.

It's the navigation part of the work that suffers. Situations change, and even the infrastructure changes with time. We need information about these changes to be successful. The problem with documentation is that it should be kept to a reasonable amount; but, then again, which bits are the important ones? If you use a tool to investigate (inventory) a system, you get loads and loads of raw data. Only some of it is probably useful. For the documentation to be human-readable, it needs to be hand-edited and made more sensible. This takes time and effort, and usually requires some experience with the systems.
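
For example, a first pass at trimming raw inventory output down to the handful of fields a human actually looks for could be scripted. The field names and the JSON format below are assumptions made up for illustration; a real inventory tool will have its own output format.

    import json

    # A hypothetical raw inventory record, heavily abridged.
    raw = json.loads("""
    {
      "hostname": "srv-fi-042",
      "os": {"name": "Windows Server 2008", "patch_level": "SP2"},
      "cpu": {"model": "Xeon E5420", "cores": 4},
      "memory_mb": 8192,
      "nics": [{"mac": "00:11:22:33:44:55", "ip": "10.0.3.42"}]
    }
    """)

    # Keep only what an admin usually needs at a glance.
    summary = {
        "host": raw["hostname"],
        "os": f'{raw["os"]["name"]} {raw["os"]["patch_level"]}',
        "cpu": f'{raw["cpu"]["model"]} ({raw["cpu"]["cores"]} cores)',
        "ram_gb": raw["memory_mb"] // 1024,
        "primary_ip": raw["nics"][0]["ip"],
    }

    for key, value in summary.items():
        print(f"{key:>10}: {value}")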

IT is a system consisting of people, information, connections, and hardware. All of these are required. The systems are usually made for human "consumption". Basically, laptops and desktops exist so that we can handle information more efficiently. Computers and networks are only tools that enable our new ways of working.

Every administrator is supposed to have a certain basic skill set. It varies with experience and work history, but let's say for example most Windows admins know how to install and run programs, how to create users on a server, how to make certain security adjustments to the file system, etc. This is basic knowledge.

Virtual and real networks of people are a huge factor in co-operation.

When I'm facing a seeming dead end, I seek out people to help with the case. Often
somebody within the company knows a solution and has even faced the same issue before. If only this knowledge could be codified in an easy way. Documentation is still searching for a form in which it stays as fresh as possible and in which producing
the knowledge doesn't take so many resources. I think a lot of documentation projects are currently considered heavy, even dead as they're born.

Any experiences of using wikis to document an IT infrastructure?