2009-08-27

From ASP to Cloud

10 years ago, the ASP (Application Service Provider) model was the hottest thing in Silicon Valley. All sorts of names were in play - incumbents were soon followed by stalwarts of the industry in delivering software services over the web. Like most fads, it didn't last long; however, it did pave the way for its new avatar - cloud computing.

So what's changed?

For one, the Internet. It's more ubiquitous than ever, and in its broadband form (rather than the clunky dial-up or ISDN), making it possible for (more) users to access services in a similar, and more importantly, acceptable time frame. Thanks to bloatware operating systems, we are all used to waiting a few seconds for our apps to start up - and faster Internet access has made it easier for services to be delivered over the web.

Web 2.0 - while the world still tries to figure out how that's different from the web in its 1.0 incarnation, enhancements in CSS, Ajax calls and better support for Java and JavaScript in browsers make applications in a browser look (and feel) similar to native applications.

Acceptance - put simply, we have all become used to accessing remote services, such as checking our bank accounts and paying bills online. Every company has a website and, more importantly, an Intranet - a corporate portal with the majority of services available over it. This in turn has calmed the apprehensions of the business user.

Google, Amazon and Salesforce.com - while the last of these is the biggest name to survive from the ASP era, the offerings from Google and Amazon have been paving the way for the world to get used to cloud computing. Google App Engine provides a free quota of services and infrastructure (with more available for purchase); the same goes for Amazon EC2.

But really, apart from that, not much has changed. Cloud computing providers are building on the ASP era:

Flexibility and virtualization - most services (or service containers) are provisioned virtually. They could exist on one or more machines, or as dynamic partitions on a server (basically, "logical" partitions of a CPU - so small services could use as little as 1/10th of a CPU). Virtualization is another nebulous term - it could mean a virtual machine, full or hardware-assisted virtualization, hardware virtualization (of memory or storage, say), or application virtualization. Fundamentally, it ensures that cloud providers do not have to dedicate resources to one customer or service. But that brings us to the next point. Oh, and virtualization allows for scalability as well.

Multi-tenancy - by definition, cloud computing resources are shared among the services offered. Technology available today makes that possible and, more importantly, secure. VMware virtual machines allow you to run multiple (and different) "guest" operating systems on your "host" machine. Solaris containers (including zones) allow for operating-system-level, disjoint user "spaces".
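
As a flavour of how lightweight that isolation can be, this is roughly what carving out a Solaris 10 zone looks like - a sketch only, with the zone name "webzone" and its path made up for illustration:

zonecfg -z webzone "create; set zonepath=/zones/webzone"    # define the zone and where it lives
zoneadm -z webzone install                                  # lay down a root filesystem for it
zoneadm -z webzone boot                                     # boot the zone
zlogin webzone                                              # log in to what looks like a separate Solaris instance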

Dynamic provisioning - Though often confused with cloud computing itself, this is the installation and configuration of software, or even hardware, on demand. Again, technology exists today to activate pre-installed but unused CPUs, create partitions (or containers) and configure a workload of choice - all done remotely. And when no longer required, all of it can be put back into the "available" pool.

Grid computing - Again, sometimes confused with cloud computing, grid computing is the ability to harness the power of more than one computer as a single computer.

All these factors have changed the way users perceive computing, and made the cloud more accessible.

Cloud Computing

One of the biggest buzzwords in the industry today (after web 2.0) is cloud computing. It is typically defined as a type of computing with the following characteristics:

1. Scalable
2. Virtualized
3. Delivered over the Internet

Put simply, cloud computing removes the burden of installation, maintenance and, to some extent, support of the IT services/resources that a business requires to deliver the service it specializes in. For example, for a bank, the service is letting customers deposit and withdraw money - over the phone, over the Internet, at ATMs or at a branch. The underlying infrastructure has to be synchronized, always available, and able to scale on demand. It has to be protected from attacks, viruses, and peaks and troughs in demand, and also updated as required (for example, adding new features, services or products).

Traditionally, a whole new IT sector has been spawned, as large corporations have had to build in-house IT capabilities to enable delivery of their real services. In fact, some companies have had their IT departments open up a new line of services - for example, British Telecom provides IT on-demand services.

With cloud computing, organizations can "lease" a cloud (or clouds) - much like the Application Service Provider model. But more than that! You can even develop applications in an external cloud - such as those hosted and provided by Amazon or Yahoo!. This is truly location-unaware service provision and delivery.

However, there are a number of challenges. How do you ensure Quality of Service (QoS)? What about regulatory compliance? There are possibly other legal challenges when services hosted on a remote cloud (say, in the US) have to comply with regulations in, say, Europe. As ever, security presents its own challenges - privileged user access as well as data segregation (you wouldn't want your bank records to be visible to your neighbour!).

2009-08-20

MySQL vagaries

So, having had to mess around with MySQL, I picked up a few things of interest:

Stopping and Restarting MySQL Server

To stop your mysql server:

mysqladmin -u root -p shutdown    # prompts for the root password, then shuts the server down


To start your mysql server:

mysqld_safe &
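
And to check that the server is actually up and responding:

mysqladmin -u root -p ping      # answers "mysqld is alive" if the server is running
mysqladmin -u root -p status    # uptime, thread count and other basic stats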




2009-08-14

Terminal too wide - vi error

I always used to face this problem while trying to edit a file with vi in a maximised PuTTY window - the dreaded "Terminal too wide" error.
Finally, I found a solution:

stty columns 120

Works like a charm now!
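
A related note: if the window isn't exactly 120 columns wide, stty can also report the size the terminal currently thinks it has, so you can check and then set the columns to match (the 157 below is just an example width):

stty size           # prints "rows columns" as the terminal currently sees them
stty columns 157    # set columns to whatever your window actually is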

2009-08-07

How to avoid midnight conf-calls

Ten tips to get a good night's sleep
  1. Train and explain - Show others how to restart servers and services on the production platforms themselves. Not dev, not test and not a replica - replica or "reference" platforms have a habit of not being in sync.
  2. Accept outsourcing models - wake up and smell the cheese. Outsourcing is here to stay - to Vietnam if not India. The sooner you accept this and share the information vital for keeping systems running, the better.
  3. Regular backups - Create a Definitive Software Library, and then take periodic backups of the database and the filesystem, including log files (a minimal sketch follows this list).
  4. Test your backups - Backup processes tend to get clunky during restore. Banks do this best - periodic simulated disasters to test their backups and redundancies. But that's the next point.
  5. Plan redundancy/high-availability - Though strictly not the same thing, the grouping will do for our purpose. It isn't fun trying to bring up a service when there's a power failure in the only data center you use.
  6. Restrict access - Strictly on a need-to-know basis. Disable unused accounts after a few weeks, journal user activity, and use named user accounts.
  7. Scalability testing before rolling out - That change may work well in the teeny development environment. Stress test it before rolling out to production.
  8. Batch your releases - Stack up your releases, so you touch the production platform less frequently. This also means you can test them properly, provided there's a reasonable cut-off for inclusion in a batch.
  9. Use multiple platforms - Development and production are two separate, distinct entities.
  10. Have a rollback plan
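
The sketch promised in point 3 - nothing fancy, just the shape of a nightly job, assuming a MySQL database and an application living under /opt/myapp (the path and the "backup" user are made-up names for illustration):

#!/bin/sh
# Nightly backup sketch (illustrative only): dump the database, then archive config and logs.
# Assumes the "backup" MySQL user's credentials live in ~/.my.cnf so cron needn't prompt.
STAMP=`date +%Y%m%d`
mysqldump -u backup --all-databases > /backups/db-$STAMP.sql
tar czf /backups/files-$STAMP.tar.gz /opt/myapp/conf /opt/myapp/logs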

2009-08-06

Solaris 10 on VMware - failed attempts

It's official. After 2 days of messing around with various versions of VMware (Server as well as Workstation) and Solaris (9 and 10), I've given up trying to install Solaris 10 on VMware.

The problem started when I couldn't get VMware Tools installed on my Solaris 10 image, since it couldn't find the "svcprop" binary. A system-wide search didn't reveal anything either. Google didn't help much.

So, I went and downloaded a fresh copy of Solaris 10, restarted the process, and this time I'd get stuck at the GRUB loader menu. At the time I was using VMware Workstation 6, and it was suggested that I upgrade to VMware Server, which I did.

Next attempt, same thing - stuck at the GRUB menu again.


After a fair bit of Googling, I figured I had to encourage GRUB to find the kernel manually, but alas, to no avail.
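
For reference, this is roughly what a stock Solaris 10 x86 entry in GRUB's menu.lst looks like (reproduced from memory, so the exact device and paths may well differ on your install) - I was trying variations on these lines at the GRUB prompt:

title Solaris 10
root (hd0,0,a)
kernel /platform/i86pc/multiboot
module /platform/i86pc/boot_archive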

So I've given up on this, but if anyone finds a solution - please do let me know!



2009-08-05

Citation Management tools

When I started writing out my thesis last time around, I ended up experimenting with various tools, open source as well as commercial:

  1. JSTOR - allows you to export citations to bibliographic software
  2. EndNote - probably the best commercial software in the category.


However, I didn't quite get to grips with working in an RTF file the way EndNote wanted, and then trying to manage my citations that way. I wanted a utility that could store my bibliography database, and produce a list of only the references I had actually cited.

Enter Zotero. Probably the best tool in the category, and what's more, it's open source! It has a Firefox add-on, as well as an MS Word plugin. You can create a bibliography database while browsing (using Firefox), export your database and import it into the Word plugin, and merrily produce your work. As you cite actual items, it keeps track of all references, and you can produce a reference list in the style of your choice!

I now hear this feature is available in Word 2007 as well; however, I haven't had a chance to play with it yet.


2009-08-04

Proliferation of passwords

We all have passwords for something or the other; more so in today's world. Currently, I have the following passwords:

  • Gmail and Hotmail accounts, and the Yahoo! account for Flickr
  • The AT&T VPN client to login to the corporate work network
  • Lotus Notes password to log into... well, Notes, to view email
  • Corporate Intranet
  • SameTime for corporate instant messaging (which is thankfully the same as the Corporate Intranet password)
  • Online banking
  • Login to my backup NAS drive at home
  • Login to the corporate backup systems
  • Login to my blackberry

Typically, different systems tend to make you remember arcane bits of information. For example, online banking systems typically want you to remember a 4-digit code (which no sane person keeps the same as their PIN).

Additionally, all systems have different password expiration policies - my corporate systems require an 8-character password, which must contain numbers and letters in both lower and upper case, expires every 2 months, and cannot be the same as the last 3 passwords. No kidding! Compared to that, online systems are fairly forgiving - you can have the same password all your life for, say, Hotmail, and it won't complain!

I haven't tried any password management systems yet, but I'm happy to take recommendations. My current mechanism is to "memorize" all passwords :-)

Developing an intelligent framework to play finite-information board games

Computer and video games employ artificial intelligence to provide a more interesting and challenging experience. Most of this AI is achieved by pre-scripted routines. However, designing programs that can play real-life games with human-like ability is a growing area of interest, since the better we become at designing such programs, the more we learn about applying learning techniques in machines.

The metagame framework developed by Barney Pell eliminates human bias in learning by generalizing the input it accepts to the rules of any game within a pre-defined class, and designing programs to play them.

Barney Pell uses the concept of “Advisors” – first used by Susan Epstein in HOYLE [1] – as a key component of his metagame framework, derived from extensive human analysis of the class of games as well as expert knowledge. Advisors are resource-limited, hierarchical procedures which attempt to compute a decision based on correct knowledge, shallow search and inference.

Although the metagame framework does a very good job of removing bias, it is limited in its ability to search efficiently in a reasonable amount of time, and so to truly demonstrate the use of AI in game-play, because:

1. Manually assigned “weights” to Advisors - In his design of the metagame framework, Pell assigned weights to advisors manually, via deduction or analysis of the type of advisor. Even though these advisors are developed for a whole class of games, there is some potential for bias or human influence creeping in, since the weight assigned to each advisor is effectively the importance the observer places on a certain strategy. For example, in chess, “capturing a piece” may be rated higher than “pushing a pawn towards promotion”.
2. Static Advisors – Both the metagame framework and HOYLE employ pre-determined, “static” advisors, developed with careful human analysis. Such advisors will offer meaningful advice only for games already known, or those that have been used to test the model.

If anyone ever manages to find a working copy of Pell's code, please contact me. This is something I'd be very keen on; however, my search for the code hasn't had much success!

References:
  1. HOYLE
  2. A Strategic Metagame Player for General Chess-Like Games
  3. Entertainment Software Association – Sales & Research Data
  4. Game theory
  5. The metagame project
  6. SAL
  7. A comparison of human and computer game playing
  8. Barney Pell papers FTP site
  9. Nici Schraudolph's go networks
  10. Towards an ideal trainer