Sunday, December 30, 2012

Guaranteed Integrity of Messages

The ability to guarantee the integrity of a document and to authenticate its sender has been highly desirable since the beginning of human civilization. Even today, we are constantly challenged for authentication in the form of picture identification, handwritten signatures and fingerprints. Organizations need to authenticate individuals and other corporations before they conduct business transactions with them.

When human contact is not possible, the challenge of authentication, and consequently authorization, increases. Encryption technologies, especially public-key cryptography, provide a reliable way to digitally sign documents. In today's digital economies and global networks, digital signatures play a vital role in information security.
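
To make the idea concrete, here is a minimal sketch (not part of the original post) of signing and verifying a document with Java's standard java.security API; the RSA key size, the SHA256withRSA algorithm and the sample message are illustrative choices only.

    import java.nio.charset.StandardCharsets;
    import java.security.KeyPair;
    import java.security.KeyPairGenerator;
    import java.security.Signature;

    public class SignDemo {
        public static void main(String[] args) throws Exception {
            // Generate a 2048-bit RSA key pair (in practice the private key is kept secret).
            KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
            gen.initialize(2048);
            KeyPair pair = gen.generateKeyPair();

            byte[] document = "Wire $10,000 to account 12345".getBytes(StandardCharsets.UTF_8);

            // Sender signs the document with the private key.
            Signature signer = Signature.getInstance("SHA256withRSA");
            signer.initSign(pair.getPrivate());
            signer.update(document);
            byte[] signature = signer.sign();

            // Receiver verifies integrity and authenticity with the sender's public key.
            Signature verifier = Signature.getInstance("SHA256withRSA");
            verifier.initVerify(pair.getPublic());
            verifier.update(document);
            System.out.println("Signature valid? " + verifier.verify(signature));
        }
    }

If even one byte of the document is altered in transit, the verify call returns false, which is exactly the integrity guarantee discussed above.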

Sunday, December 23, 2012

Security–the most important Quality Attribute

While digital signatures and encryption are old technologies, their importance is renewed with the rapid growth of the Internet. Online business transactions have been growing at a rapid pace. More and more money transactions occur electronically and over the Internet. Non-repudiation is important when personal contact is not possible, and digital signatures serve that purpose. Encryption ensures that information sent to the intended party can be read only by that party and arrives unaltered. Several technologies support encryption.

The enterprise security model consists of domains that are protected from resources that are not permitted to access them or execute their functions. There is a clear distinction between authorizing a resource and authenticating a resource. When a person shows a driver's license at the bar before he gets a drink, the bartender will look at it and compare his photograph with the actual person presenting it. This is authentication. When he checks the date of birth for legal drinking age, he has authorized the requester for the drink.
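
A tiny, purely illustrative sketch of the bartender example as code; the class, the method names and the drinking age of 21 are assumptions, not from the post.

    import java.time.LocalDate;
    import java.time.Period;

    public class BarDoor {
        // Authentication: does the photo on the license match the person presenting it?
        static boolean authenticate(boolean photoMatchesPerson) {
            return photoMatchesPerson;
        }

        // Authorization: is the (already authenticated) person old enough to be served?
        static boolean authorizeDrink(LocalDate dateOfBirth) {
            return Period.between(dateOfBirth, LocalDate.now()).getYears() >= 21;
        }

        public static void main(String[] args) {
            LocalDate dob = LocalDate.of(1990, 6, 15);
            System.out.println("Authenticated: " + authenticate(true));
            System.out.println("Authorized to drink: " + authorizeDrink(dob));
        }
    }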

In the corporate environment, it is exceedingly important that the same form of authentication and authorization take place digitally. With new business channels open on the Internet, web applications deployed on the intranet for employees, and business-to-business (B2B) commerce channels created on the extranet, millions of dollars' worth of transactions occur.

Business-critical information is passed on the wire between computers, which, if exposed to the general public or placed in the wrong hands, could be disastrous to the company in question. For every business that exists there is a threat to the business. For e-business initiatives, the anonymity of the network, especially the Internet, brings new threats to information exchange. It is important that information is exchanged secretly and confidentially.

DSV and Custody Chaining

Dynamic signature verification (DSV) is the process by which an individual's signature is authenticated against a known signature pattern. The dynamics of creating a signature are initially enrolled into the authenticating system and then used to compare future signature patterns. Several factors, including speed, pressure, acceleration, velocity and size ratios, are taken into account. These measurements are digitized and stored for later comparison.
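
As a rough illustration only, and not any vendor's or researcher's actual algorithm, the enroll-then-compare idea can be sketched as a simple feature-vector distance check; the feature set, values and threshold below are hypothetical.

    class SignatureTemplate {
        // Hypothetical features: average speed, average pressure, peak acceleration, width/height ratio.
        final double[] features;
        SignatureTemplate(double... features) { this.features = features; }
    }

    public class DsvMatcher {
        // Euclidean distance between feature vectors; a real system would use far richer models.
        static double distance(SignatureTemplate enrolled, SignatureTemplate candidate) {
            double sum = 0;
            for (int i = 0; i < enrolled.features.length; i++) {
                double d = enrolled.features[i] - candidate.features[i];
                sum += d * d;
            }
            return Math.sqrt(sum);
        }

        static boolean matches(SignatureTemplate enrolled, SignatureTemplate candidate, double threshold) {
            return distance(enrolled, candidate) <= threshold;
        }

        public static void main(String[] args) {
            SignatureTemplate enrolled  = new SignatureTemplate(1.20, 0.75, 3.10, 4.00);
            SignatureTemplate candidate = new SignatureTemplate(1.18, 0.73, 3.30, 3.90);
            System.out.println("Accepted: " + matches(enrolled, candidate, 0.5));
        }
    }
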
Signatures have long been used to authenticate documents in the real world; before the technology wave, signatures, seals and tamper-proof envelopes were used for secure and valid message exchange. With the onset of technology and digital document interchange, a growing need for authenticating digital documents has emerged.
Digital signatures emerged in the 1970s as a means of producing a fixed-length cipher from an input of theoretically unlimited length. The signature is expected to be collision-free and computationally infeasible to reverse into the original document. Both handwritten signatures and digital signatures have to comply with the basic requirements of authenticity, integrity, and non-repudiation (Elliott, Sickler, Kukula & Modi, n.d.).
In the information technology departments of corporations, documents are regularly exchanged between teams, companies, outsourced contract workers, internal consultants and executive management. These documents are often confidential and contain company secrets. However, due to resource constraints, such documents are often shared with consultants and contract workers.

It is therefore a viable solution to provide digital signatures on those documents using proper authentication protocols. One way this could be achieved would be through dynamic signature verification. An interface that can create unique digital signatures from the physical dynamic signature and apply it to the electronic document would be ideal.
A verifiable, trusted signature-creation technique is required for enterprise-wide document collaboration, and DSV is ideally suited for this purpose. Sensitive documents can be signed using a DSV module that electronically signs the e-document. The document can then be shared with confidence that it has not been altered in transit, and the recipient will be able to trust it.

Sunday, December 16, 2012

Fingerprinting and Biometrics at Airports

I was unpleasantly surprised to see longer than usual lines at the international port of entry at O'Hare this February. My flight connected me to O'Hare International in Chicago from Schiphol Airport in Amsterdam, Netherlands. It had been a long flight, and the reason for the delay in processing passengers wasn't apparent to me. A huge line of people with hand luggage zigzagged what appeared to be a large hall, the end of the line fading in the distance. I was tired and wanted to get to my apartment, and I did not believe I would ever get there at this rate.

In a 2004 article published in New Scientist, Will Knight reports that the Department of Homeland Security (DHS) initiated the installation of a fingerprinting system. A total of 115 airports have the biometric security equipment installed (Knight, 2004). A DHS officer commented to Knight that "each finger scan takes just three seconds and pilot schemes produced just one error in every thousand checks" (Knight, 2004).


The long early-morning lines brought back memories of the traditional waits outside the U.S. Consulate General in India where visas are issued. It is said that neither heat, rain nor storms get in the way of seekers of tickets to paradise itself – the United States of America. Visa applicants are happy to divulge their fingerprints for an entry permit into the USA.

Knight (2004) cites Bruce Schneier, founder of the US security consultancy firm Counterpane, who believes that gathering more information through this method only collects more data, while the problem with security lies in a lack of intelligence, not the amount of data. Schneier believes that there is enough data already available but not enough intelligence to process it. He goes on to explain that the terrorists who crashed airplanes into buildings on September 11, 2001 had valid passports and were not on previous terrorist watch lists.

The U.S. immigration officer asked me to wet my left and right index fingers and place them on the fingerprint sensor, just like the visa officer had asked me to do in India. The visa had been issued at the end of the day – a very long day. There was a camera placed along with the fingerprint sensor. No pictures were taken in either place. I placed my finger, and the immigration officer instructed me to wait. The computer system looked up my fingerprint and compared it with their databases in what seemed like an eternity. Finally, the immigration officer smiled back at me and let me proceed. I still had to go to baggage collection and customs; I feared more divulgence of impressions from body parts. Thankfully there were none. After ninety minutes of baby-steps through the immigration lines and multi-finger scans at the Chicago O'Hare airport, I was free to step into the "Land of the free, home of the brave".

Sunday, December 9, 2012

SOA 2004–a blast from the past or what I thought about it back then

I wrote up some views on Service-Oriented Architecture in 2004. This was a time when XML was a buzzword and people were wondering and writing about SOA. I was implementing a leading-edge solution for a policy administration system using an ACORD XML interface and hosting Internet B2B services for independent agencies: a soup-to-nuts solution that included XML, SOAP, WSDL, Java EE, EJB and RDBMS + COTS.

I also wrote this unpublished paper:

Introduction

This is the most important decade for distributed computing. Reuse and interoperability are back in a big way for distributed applications. Over the years, several types of reuse methodologies have been proposed and implemented with little success: procedure reuse, object reuse, component reuse, design reuse, etc. None of these methodologies tackled interoperable reuse. Enter web-services. Web-services are big, and everyone in the industry is taking them seriously. Web services are reusable services based on industry-wide standards. This is significant because it could very well be the silver bullet for software reuse. Software can now be reused via web services, and applications can be built leveraging Service-Oriented Architectures. This paper describes Service-Oriented Architecture and highlights its significance and its relationship to web-services.

Distributed Software Applications

Software applications deployed across several servers and connected via a network are called distributed applications. Web-services promise to connect such applications even when they are deployed across disparate platforms in a heterogeneous application landscape. Cross-platform capabilities are one of web-services' key attractions because interoperability has been a dream of the distributed-computing community for years (Vaughan-Nichols, 2002). In the past, distributed computing was complex and clunky. Previous approaches like CORBA (Common Object Request Broker Architecture), RMI (Remote Method Invocation), XML-RPC (Extensible Markup Language – Remote Procedure Calls), and IIOP (Internet Inter-ORB Protocol) were used for distributed applications and information interchange, but these were not backed by strict, industry-wide standards.

Sun Microsystems' RMI (Remote Method Invocation) over JRMP (Java Remote Method Protocol) was the next revolution in distributed computing. JRMP required both client and server to have a JRE (Java Runtime Environment) installed. It provided DGC (Distributed Garbage Collection) and advanced connection management. With the release of its J2EE specification, Sun introduced EJBs (Enterprise JavaBeans). EJBs promised to support both RMI over JRMP and CORBA IDL (Interface Definition Language) over IIOP (Internet Inter-ORB Protocol). Distribution of these beans (read: objects) and transaction management across topologies seemed to be a blue-sky dream that never materialized. In addition, the J2EE standard was not envisioned to be a truly enterprise standard, in the sense that integration with other object-oriented platforms was not "graceful". Microsoft introduced .NET and C#, which directly compete with J2EE and Java. The continued disengagement between these two major platforms has reached its threshold. It has become imperative that there be a common cross-platform, cross-vendor standard for interoperability of business services. Web-services seem to have bridged a gap in the distributed computing space that no other technology has in the past: they standardize the interoperability space.

Dublin Core Metadata Glossary defines interoperability as:

The ability of different types of computers, networks, operating systems, and applications to work together effectively, without prior communication, in order to exchange information in a useful and meaningful manner. There are three aspects of interoperability: semantic, structural and syntactical.

Vaughan-Nichols (2002) states that web-services enables interoperability via a set of open standards, which distinguishes it from previous network services such as CORBA’s Internet Inter-ORB Protocol (IIOP).

Web Services

The word "service" conjures up different connotations for different audiences. We need to understand what a service is not. One damaging assumption is that "service" is just another term for "component" (Perrey & Lycett, 2004). Component-orientation, object-orientation and integration-based architectures are in the same space and are often a source of confusion.

Service-Architecture defines a service: "A service is a function that is well-defined, self-contained, and does not depend on the context or state of other services." Perrey and Lycett attempt to define "service" by unifying its usage context across business, technical, provider and consumer views. They describe and contrast multiple perspectives on "service" in detail. "The concept of perspective is the key to reconciling the different understandings of service. Business participants view a service (read business service) as a unit of transaction, described in a contract, and fulfilled by the business infrastructure." They contrast this with the technical participant's perception of a service as a "unit of functionality with the semantics of service described as a form of interface". The authors go on to define a service: "Service is functionality encapsulated and abstracted from context". They argue that the contrasting perceptions of services are really not an issue as long as there is commonality in the underlying perception. The commonality seems to lie in the reuse of services.

“Web services can be characterized as self-contained, modular applications that can be described, published, located and invoked over a common Web-based infrastructure which is defined by open standards.” (Zimmermann, Milinski, Craes, & Oellermann, 2004)
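
As a concrete illustration that was not part of the original 2004 paper, and using the slightly later JAX-WS API, a "self-contained, modular application" published over a Web-based infrastructure can be sketched as follows; the class name, operation, URL and premium calculation are hypothetical.

    import javax.jws.WebMethod;
    import javax.jws.WebService;
    import javax.xml.ws.Endpoint;

    @WebService
    public class QuoteService {
        @WebMethod
        public double premiumQuote(String policyType, int coverageAmount) {
            // Hypothetical business logic.
            return coverageAmount * 0.015;
        }

        public static void main(String[] args) {
            // Publishing exposes the operation over HTTP/SOAP and makes a WSDL available at ?wsdl.
            Endpoint.publish("http://localhost:8080/quote", new QuoteService());
        }
    }

The point of the sketch is that the service is described (WSDL), published (HTTP endpoint), located and invoked entirely through open standards rather than a platform-specific object broker.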

The Web Service Architecture

We are on the cusp of building "plug-compatible" software components that will reduce the cost of software systems while at the same time increasing their capabilities (Barry, 2003). Applications can be built on architectures which leverage these services. The goal is for service-oriented architectures to be decoupled from the very services they invoke.

Service-oriented architecture leverages the interoperability of web-services to make distributed software reusable.

Web-services make the process more abstract than object request brokers by delivering an entire external service without users having to worry about moving between internal code blocks (Vaughan-Nichols, 2002). A recent Yankee Group survey showed that three out of four enterprise buyers plan to invest in SOA (Service-Oriented Architecture) technology within one year (Systinet, 2004).

Interoperability is driven by standards, specifications and their adoption. A service operates under a contract or agreement which sets expectations, and a particular ontological standpoint that influences its semantics (Perrey & Lycett, 2003). Applications that expose business processes as web-services are simpler for other applications to invoke and reuse because of the pre-defined contracts that the services publish. Web-services are interoperable and service-oriented architecture enables reuse; as a result, SOA and web-services have formed a natural alliance (Systinet, 2004).

The collection of web-service specifications enables a consortium of vendors, each with their own underlying implementations of these standards, to compete viably in the reuse and interoperability market. This is good because the competition is limited to the implementation level as opposed to the standards level. Vendors will enable a compliance-based marketplace for distributed applications which expose web-services. This would enable SOA-based applications to consistently discover and leverage standards-compliant services in a business domain via well-known public, private or protected registries.

Practitioners have used web-services for interoperability successfully in large systems:

"To achieve true interoperability between Microsoft (MS) Office™/.NET™ and Java™, and to implement more than 500 Web service providers in a short time frame were two of the most important issues that had to be solved. The current, second release of this solution complies with the Web Services Interoperability (WS-I) Basic Profile 1.0. Leveraging the Basic Profile reduced the development and testing efforts significantly" (Zimmermann et al., 2004).

The Communication Protocol

While web-services are primarily meant to communicate over HTTP (Hypertext Transfer Protocol), they can communicate over other protocols as well. SOAP (no longer an acronym, though popularly misread as an object-access protocol) is the primary message exchange paradigm for web-services. SOAP is fundamentally a stateless, one-way message exchange paradigm (W3C, 2004).
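
To make the message exchange concrete, here is a minimal sketch, not taken from the W3C text, that builds a one-way SOAP message with the standard SAAJ API; the namespace, element names and policy number are invented for illustration.

    import javax.xml.namespace.QName;
    import javax.xml.soap.MessageFactory;
    import javax.xml.soap.SOAPBody;
    import javax.xml.soap.SOAPMessage;

    public class SoapMessageDemo {
        public static void main(String[] args) throws Exception {
            // Create an empty SOAP envelope and add one body element (the "message").
            SOAPMessage message = MessageFactory.newInstance().createMessage();
            SOAPBody body = message.getSOAPBody();
            body.addBodyElement(new QName("http://example.com/policy", "GetPolicyStatus"))
                .addChildElement("policyNumber")
                .addTextNode("AB-1234567");
            message.saveChanges();

            // Serialize the envelope, e.g. for an HTTP POST to the service endpoint.
            message.writeTo(System.out);
        }
    }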

Interoperability is driven by standards, specifications and their adoption. True interoperability between platforms is achieved via SOAP (Zimmermann et al., 2004). Web services are interoperable, and service-oriented architecture builds on that interoperability. Interoperable protocol binding specifications for exchanging SOAP messages are inherent to web-services (W3C, 2004).

The collection of specifications enables a pool of vendors, each with their own implementations of these standards. This is good because the competition is limited to the implementation level as opposed to the standards level. WS-standards-compliant vendors will enable a compliance-based marketplace for distributed applications, which would greatly support service-oriented architectures. This would enable SOAs to consistently discover and leverage standards-compliant services in a domain.

The Description Language and Registry

While WSDL (Web Services Description Language) describes a service, a registry is a place where the locations of WSDLs can be searched. There are two primary models for a web-services registry (Sun Microsystems, 2003): UDDI and ebXML, each targeting a specific information space. While UDDI focuses more on the technical aspects of listing a service, ebXML focuses more on the business aspects. In a nutshell, SOAP, WSDL and UDDI fall short in their abilities to automate ad-hoc B2B relationships and associated transactions. None are qualified to address the standardization of business processes, such as the procurement process (Sun Microsystems, 2003).

The initial intent of UDDI was to create a set of public service directories that would enable and fuel the growth of B2B electronic commerce. Since the initial release of the UDDI specification, only a few public UDDI registries have been created. These registries are primarily used as test beds for web service developers.
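
Tying WSDL, the registry entry and the consumer together, a rough Java sketch of a client that binds to a service purely from its published WSDL might look like the following; the WSDL URL, namespace and service name are assumptions, and in practice the service endpoint interface would be generated from the WSDL rather than written by hand.

    import java.net.URL;
    import javax.jws.WebService;
    import javax.xml.namespace.QName;
    import javax.xml.ws.Service;

    public class QuoteClient {

        // Hand-written stand-in for the service endpoint interface a tool would generate from the WSDL.
        @WebService
        public interface Quotes {
            double premiumQuote(String policyType, int coverageAmount);
        }

        public static void main(String[] args) throws Exception {
            // Everything the client needs is discovered from the WSDL the registry points to.
            URL wsdl = new URL("http://localhost:8080/quote?wsdl");
            QName serviceName = new QName("http://demo.example.com/", "QuoteServiceService");
            Quotes proxy = Service.create(wsdl, serviceName).getPort(Quotes.class);
            System.out.println(proxy.premiumQuote("auto", 250000));
        }
    }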

Conclusion

Web-services in combination with service-oriented architecture have bridged the interoperability gap in the distributed computing space unlike any other technology in the past. Service-oriented architecture and web-services are a paradigm shift in the interoperability space because they are based on industry accepted standards and are simpler to implement across disparate software deployments. This technology is certainly here to stay.

Sunday, December 2, 2012

Speech Recognition in Automobiles

I wrote this in 2004 when I purchased a car with Voice Activated controls. It was amazing back then.

Speech Recognition in Automobiles

I am alone in my car cruising from Carmel, Indiana to Purdue University in West Lafayette, Indiana for a weekend class. It’s early in the morning and I wonder if I will make it to class on time. After about ten minutes on interstate 65, I ask impatiently “How long to the destination?” Honda’s advanced navigation system gears into action; it promptly queries the Global Positioning System (GPS) satellites and local GPS repeaters for the vehicle’s current co-ordinates. It then averages out the expected speed based on current averages on the interstate, state roads and inner streets and responds back in a pleasant natural female voice “It is about forty two minutes to the destination”. I am definitely going to be late for class.

Speech recognition technology, once a domain of fantastic science fiction, is a reality today. This technology has begun to touch our lives on a daily basis in our automobiles. A recent article (Rosencrance, 2004) reports on the speech recognition technology in Honda automobiles. The system has the ability to take drivers’ voice commands for directions and then respond with “voice-guided turn-by-turn instructions, so they don't have to take their hands off the wheel” (Rosencrance, 2004), said Alistair Rennie, vice president of sales for IBM's pervasive computing division. Rennie added that this “goes significantly beyond what was done before in terms of being able to deliver an integrated speech experience in a car” (Rosencrance, 2004).

Using IBM's Embedded ViaVoice software, the system can recognize spoken street and city names across the continental United States (Rosencrance, 2004). The system recognizes almost every task a driver may want to accomplish while on the road: commands can operate the radio, compact disc (CD) player, climate control and defrost systems. It can recognize more than 700 commands and 1.7 million street and city names. All this is possible without the driver looking away from the road.


(Figure 1)

“Display on” I prod along. The in-dash LCD screen lights up (see Figure 1). I glance at it for a second – there is a map of the state of Indiana and a symbol inching up north towards the destination - a red bull’s eye on the electronic map. I will get there soon. I say “XM Radio Channel twenty”. The integrated satellite radio starts up and plays high quality music.

Automobiles that leverage speech recognition technology are not only more attractive to car buyers but also make the roads safer by allowing the driver to keep their eyes on the road. Research conducted by the National Highway Traffic Safety Administration (NHTSA) found that automatic speech recognition (ASR) systems distracted drivers less than graphical user interfaces performing the same function in vehicles (Lee, Caven, Haake & Brown, n.d.).

Before long, the speech system fades down the music volume and then articulates in the same pleasant voice, "Exit approaching in two miles – stay to the right". The 'exit mile countdown' goes on every half mile until the car actually takes the exit. In about ten minutes I pull into the parking lot. I am running late by ten minutes – the class has probably begun and the exam papers have probably been handed out to the cohort. Before I turn off the engine, I finally ask, "Will I make a good grade?" There is no response from the system this time.

Sunday, November 25, 2012

Iris Recognition–Identity & Authentication through Biometrics

I did some research at the technology lab at the Purdue University West Lafayette Campus in 2004 and wrote this paper:

Poets have romanticized the eyes over centuries. The beauty of the eyes has been voiced through civilizations. Many songs have been written about them, and all aspects of the eyes have been glamorized. The Egyptians ceremoniously decorated the eyes, the Hindus apply kajal to highlight the eye contours, and Western civilizations have called the eyes the windows to the soul.

Researchers today look at the eyes from a different perspective. One such researcher is Richard P. Wildes. He looks at human eyes to identify people. The biometric-based technology is called iris recognition. It is suggested that the iris is as distinct as a fingerprint or the pattern of retinal blood vessels.

In a 1997 article, Wildes investigates the iris with respect to uniqueness and identification. He explains the structure of the eye and what makes the iris unique and identifiable with repeatability. He states:

The iris is composed of several layers. Its posterior surface consists of heavily pigmented epithelial cells that make it light tight (i.e. impenetrable by light). Anterior to this layer are two cooperative muscles for controlling the pupil. Next is the stromal layer, consisting of collagenous connective tissue in arch-like processes. Coursing through this layer are radially arranged corkscrew-like blood vessels. The most anterior layer is the anterior border layer, differing from the stroma in being more densely packed, especially with individual pigment cells called chromataphores.

The multi-layer structure of the iris gives it its unique appearance. Wildes (1997) cites literature which states that the appearance of the iris does not change with the age of the individual. This is important because repeatability is guaranteed across longitudinal studies and research. Also, there is a guarantee that the iris will remain relatively unchanged for recognition purposes.

Wildes (1997) explains the process of image acquisition, iris localization, pattern matching and recapitulation. He states that the "major challenges of automated iris recognition is to capture a high-quality image of the iris while remaining noninvasive to the human" (Wildes, 1997, p. 1351). He explains the problems associated with iris illumination at the time the image is acquired, and compares the illumination approach adopted by Daugman, which is LED-based, with his own, which uses a diffuse source and polarization in conjunction with a low-light camera (Wildes, 1997, p. 1353).

Wildes (1997) also explains the issues with iris localization. He states:

It is necessary to localize that portion of the image derived from inside the limbus (the border between the sclera and the iris) and outside the pupil. Further, if the eyelids are occluding part of the iris, then only that portion of the image below the upper eyelid and above the lower eyelid should be included. Typically, the limbic boundary is imaged with high contrast, owing to the sharp change in eye pigmentation that it marks. The upper and lower portions of this boundary, however, can be occluded by the eyelids. The pupillary boundary can be far less well defined. The image contrast between a heavily pigmented iris and its pupil can be quite small.

Wildes (1997) further explains that while the pupil is typically darker than the iris, the reverse relationship can hold in cases of cataract. He treats eyelid contrast as a variable that depends on the relative pigmentation of the skin and the iris. The irregularities of the eyelid boundary due to eyelashes can cause difficulty in localizing the iris. It is obvious that localization of the iris is no simple matter; there is a lot of variation in the process of capturing the iris.

The iris recognition system proposed by Wildes (1997) uses four goodness-of-match measurements that are calculated by the previous stage of processing. It becomes possible to conduct both 1:1 comparison and 1:N matching. It is possible to enroll an individual non-invasively; just a video camera can capture enough detail to recognize the individual under scrutiny or observation.
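
A purely illustrative sketch, not Wildes' method, of the difference between 1:1 verification and 1:N identification over goodness-of-match scores; the threshold, the higher-is-better score convention and the sample data are assumptions.

    import java.util.Map;

    public class IrisMatcher {
        static final double THRESHOLD = 0.8;   // hypothetical acceptance threshold

        // 1:1 verification: compare the probe against a single claimed identity.
        static boolean verify(double score) {
            return score >= THRESHOLD;
        }

        // 1:N identification: scan every enrolled template for the best acceptable match.
        static String identify(Map<String, Double> scores) {
            String best = null;
            double bestScore = THRESHOLD;
            for (Map.Entry<String, Double> e : scores.entrySet()) {
                if (e.getValue() >= bestScore) {
                    best = e.getKey();
                    bestScore = e.getValue();
                }
            }
            return best; // null when no enrolled iris matches well enough
        }

        public static void main(String[] args) {
            System.out.println("1:1 accepted: " + verify(0.91));
            System.out.println("1:N best match: " + identify(Map.of("alice", 0.91, "bob", 0.42)));
        }
    }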

It is quite amazing that biometrics of the eye have enabled automated systems that realize the potential of a century-old suspicion that the iris can be used to recognize human beings. Wildes (1997) has contributed significantly to the field of iris recognition and moved the field of biometrics one step further.

Saturday, November 10, 2012

COTS or FOSS for Emerging Economies

Introduction

I do not agree with any position that suggests open source software is an especially attractive option for emerging economies. Although emerging economies may choose to adopt open source software, the primary driver for that adoption is not free software. The cost differential of open source alternatives to available commercial alternatives is not significant enough to affect national economies or drive the decisions of corporations in those economies.
In this paper I explore the definition of open source software, its primary drivers for adoption in industry and the open source business model. I hypothesize that open source software is a collaborative software development model that owes its success to quality, security, openness and extensibility, not low price alone. I also augment the hypothesis: open source is not targeted at emerging economies or markets alone; rather, it is targeted at the whole world and any adopter.


Open Source and the Internet
This is the beginning of a historical era in collaborative development. The minds of top software developers are converging on global digital networks to produce high quality software for free.
The Internet has created synergies of geographically dispersed minds to collaborate and develop open source software (Jesiek 2003). And they distribute the software and its source code for free. The Internet has made open source collaboration and distribution fast and easy. In a September 2003 article, Jesiek expands “proliferation of computing technologies and the concomitant growth of global information and communication networks are very significant historical movements”.
Open source is guided by the motivations, creativity, and desire of the software contributor. It is a product of community culture. It is a movement that is technical, political and sociological. The movement is not confined to a limited group of products or people but is rich in breadth and depth; it is a treasure chest teeming with technologies and best-practice methods (Gustafson, Koff n.d.).
The Open Source Community
“The basic idea behind open source is very simple: When programmers can read, redistribute, and modify the source code for a piece of software, the software evolves. People improve it, people adapt it, (and) people fix bugs. And this can happen at a speed that, if one is used to the slow pace of conventional software development, seems astonishing.” (opensource.org)
A survey by Boston Consulting Group in 2002 of developers using SourceForge found that respondents were, on average, 30 years old and had 11 years of programming experience. These were experienced professionals contributing to quality software products for free. What motivated them to do this?
Community credibility is an underlying motivator for joining an open source project. The lure of open source includes solving technical challenges; the draw of making a contribution the rest of the community can use; the enhanced skills and reputation (marketability) that come from being an active member of the community; and the potential for providing fee-based services for open source software. Developers are motivated by the opportunity to branch out and work with products they don't normally work with in their day jobs – say, video programming – and they are also motivated by pure fun (Gustafson and Koff).
Open Source versus Commercial Software

Open source is being adopted by developed nations and corporations at a greater pace than developing economies. Organizations of all kinds are consciously adopting open source software for critical business needs: Deutsche Börse Group, Deutsche Bank, the Danish government, BlueScope Steel, NASA, the Associated Press, J.P. Morgan Chase and Google.
There have been many government initiatives around open source software, as governments in Brazil, China, India, Korea, Japan, Europe, Australia and the United States, as well as the United Nations, consider open source policy and options. And large information technology vendors such as IBM, Intel, Hewlett-Packard, Oracle, SAP, Sun Microsystems and Dell are supporting open source (Gustafson, Koff n.d.).

What is the catch? Like all software, open source too has its costs. Maintenance and support costs are left to the adopter to absorb. Koch (2003) elaborates: just because you download open-source applications for free doesn't mean you won't have a whole host of associated costs such as maintenance, integration and support.
Since open source software can be traded in markets just like any other kind of artifact, one cannot definitively tag open source software as having zero price, explains Scacchi (2003). Programmers often explain this seeming incongruity with a simple shorthand: when you hear the term "free" software, think "free speech" not "free beer"; or 'software libre' not 'software gratis'.

Adopters must be able to bear the hidden costs associated with open source software. The success of open source software is, surprisingly, not attributed to its zero monetary cost of purchase. Schadler (2004) attributes the success of open source to high availability, self-training opportunity, and support. He contrasts this with commercial software and underlines its lack of free availability and self-training opportunities.

Not only emerging economies, but all types of economies and corporations may adopt open source software. Just as free speech is not intended primarily for those living under oppressive dictatorships, open source is not intended only for poor or developing nations and economies. Although open source is free, it is not free of obligations, and it lacks guaranteed support. This makes it less attractive for emerging economies.
The fact that open source software is free can be confusing to skeptics and adopters. Scacchi (2003) explains the meaning of "free" in open source software. He elucidates that "Proprietary source code is the touchstone of the conventional intellectual property regime for computer software. Proprietary source code is supposed to be the fundamental reason why Microsoft can sell Windows for around $100 or why Oracle can sell its System 8 data management software for many thousands of dollars". The open source software process "inverts this logic" (Scacchi 2003). It differs from commercial software in one fundamental aspect: source code is distributed with the runtime binaries of open source products. All documentation, source code and the runtime binaries are provided by the development community for free.
Licensing can be tricky for smaller companies, who are vulnerable to lawsuits through the lack of indemnity in open source products. The "as-is" aspect of open source software is risky. There is a possibility that part of an open source product copied code from some other licensed product. It is very difficult for companies to compare open source with licensed software products to identify such theft. This exposes the company using open source software to lawsuits from companies claiming that the open source software violates their intellectual property rights. New markets and emerging economies should take note of this risk.



Open source success

Open source software will disrupt commercial software markets with low-cost, good-enough components (Schadler 2004). But the low cost of open source is not the primary driver; the combined value proposition of zero cost and good-quality software is what makes open source a global movement of such proportion. Software developers strive for heroism, and a few have attained cult status.
"Heroism in these communities means proposing an interesting improvement and getting everyone to acknowledge it," says John Sarsgard, vice president of Linux sales programs at IBM in Armonk, N.Y. "That's the way these guys get their strokes—everyone recognizes that their way of doing it is the best way." When security bugs are revealed in Linux or Apache, for example, the community begins posting fixes on the Internet within hours.
At first glance, open source and security appear to be an oxymoron, but in fact they are highly compatible: the very openness of the software ensures rigorous review and testing, bolstering security. "The open source community is in a better position to provide secure code than proprietary vendors because there are so many people reviewing the code," says Jason Arnold, program manager of CSC's H.E.A.T. security product.
King (2004) reports that the weather.com site serves more than 50 million pages on stormy days, and it runs almost entirely on open-source software and commodity hardware. The Atlanta-based Web site's adoption of a new architecture and open source products "has slashed IT costs by one-third and increased Web site processing capacity by 30%" (King 2004). However, cost slashing was not the primary goal of switching to an open source product.
The quality of the open source product was its main "selling" point. King (2004) describes Weather.com's transition from IBM's server software to the open source Apache Tomcat to run its website. The team had encountered several problems with IBM WebSphere; performance and scalability issues were cited as the main reasons for switching to Apache's product.
The team switched from IBM's commercial offering to Apache's open source implementation primarily for its quality. Apache's open source web server today accounts for 68% of the web servers in the world, according to an August 2004 analysis by Netcraft (Gustafson, Koff).
There is a general trend of major corporations switching to open source. In a November 2002 CIO survey of 375 information executives, 54 percent said that within five years open source would be their dominant server platform (Koch 2003). Not cost, but openness, security and quality seem to be the primary drivers of adoption.
Adoption of Open source

In a survey of over 500 development managers conducted in December 2003, 61% expected a savings of less than 10% if they switched to open source Linux (Evans Data Corporation 2004). Adoption of open source technologies differs from the adoption process of commercial software. Open source application components and commercial application components differ dramatically when it comes to benefits, decision process, and challenges (see Figure 1).

Figure 1
Adoption of open source technologies is a critical function of an IT shop – these decisions need to be made by highly trained and qualified individuals, who are not always readily available in emerging economies.
The operating systems market has seen a rise in adoption of open source operating systems – namely Linux. Experts claim that Linux is a threat to the dominant Windows operating system even though Windows is much more user friendly than is Linux. Microsoft’s own history with Apple demonstrates that “good enough” cheap technology with broad market support can win over a superior technology. And in this case, good enough and cheap is Linux – not Windows. (Schadler 2003).
Users of the Linux operating system are generally highly skilled individuals. They are usually expensive to hire. Such skilled labor is hard to find and hire in emerging economies or small companies. However, for developed economies Linux is a good choice because funds for training are generally available and it is easier to find and hire such talent. Most computer-related jobs can be filled with lower-paid "Windows operators" rather than skilled labor or specialists (Woollet 2004).

Innovation is a key reason for corporations to look outside their own walls. Galli explains in a 2004 article that the primary driver for Sun to release Solaris (based on SCO's Unix kernel) as open source is to leverage the large community of developers to further innovate its own product. Sun's goal is to use the open-sourcing of Solaris to drive a turnaround of the company's software business, which has lost mind share, if not market share, in the Linux and Windows crossfire.
The combined intellectual power of the open source community is unparalleled by any corporation. Sun recognizes this and wants to foster a better internal software development process, work more closely with the community and then be able to drive innovation outside its own walls, increasing Solaris' penetration and pushing it into new markets, executives said (Galli 2004).
Other software vendors watch nervously as open source components like JBoss and MySQL move into their core markets as well. These same vendors also use open source as a weapon for disrupting markets that they don't control. For example, SAP supports MySQL precisely because it wants to commoditize the database tier, and IBM supports Eclipse precisely because it doesn't dominate the developer tools market (Schadler 2004). Hewlett-Packard's agreement with MySQL and JBoss to certify, support and jointly sell their open-source software solutions will broaden interest in open source (Natis, Weiss, Strange, Feinberg. June 2003). Most major software players have found vested interests in supporting open source software.

The Microsoft Approach

Still feeling the pressure from Linux and other open-source software competitors, Microsoft Corp. is reaching out further to the open-source community with offers of joint development and testing. But it's not yet clear if anyone is ready to listen. Josh Ledgard, a program manager on Microsoft's Visual Studio community team, wrote on his blog that he is working to enable more open-source-style collaboration between the developer community and Microsoft (Galli, Aug 2004).
Microsoft Corp. says it is looking to turn over more of its programs to open-source software developers, playing a greater role in a process that the Redmond-based company has criticized strongly at times in the past.
Money-makers like the Windows operating system and Office productivity suite aren't on the table. But the company has so far released two software-development tools to the open-source community, and it wants to continue the practice, a Microsoft platform manager told an industry group this week (Bishop 2004). While Microsoft is testing the waters it is not clear whether the open source community is ready to embrace the monopoly.
Conclusion

The motivation for developing open source software is not to provide free software to poor nations or to be embraced by new markets – it is a techno-sociological movement that has no specific intended audience or adopter.
There is nothing that stops emerging economies from adopting open source; however, open source comes with standard software maintenance costs, demands steep learning curves, requires bright (and highly paid) knowledge workers, and comes with no warranty or indemnity. This makes open source adoption by emerging economies not as attractive as it seems at first glance.

Thursday, November 8, 2012

Windows 8 – For Programmers

I installed the Windows 8 32-bit operating system on my 5 year old T61 with 4GB RAM.

I upgraded it from Windows XP, which I had run for a couple of years because I could not use Ubuntu anymore since Netflix and iTunes would not work on it.

The Live Tiles may be OK for a tablet with a touch screen – but they are not good for programmers, who are usually "keyboarders".

You can of course disable Metro:

HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer : RPEnabled=0

will disable it.

However, I wanted to see if I could continue to use Metro and still have the speed and resource utilization of XP.

So the first thing I did was “optimize for performance” – this can be done from My Computer>Properties>Advanced .. > Performance

Next, I made the Windows key my friend. Hitting the Windows key gets you in and out of the tiles much faster than a "swipe" of the mouse.

 

Windows 8 Keyboard Shortcuts

http://howto.cnet.com/8301-11310_39-57390299-285/23-new-keyboard-shortcuts-for-windows-8/

A great blog: http://blogs.windows.com/windows/b/windowsexperience/archive/2012/03/08/getting-around-in-windows-8.aspx

http://sdrv.ms/xjdwY7 (Shortcut to SkyDrive link for PDF of shortcuts)


On another note – the whole Horizontal Scrolling of Windows Apps will be a nightmare for usability folks who have built applications to flow north-south. I like change – let’s see how others feel after a year or so.

Sunday, November 4, 2012

Learning to let Employees Lead

I was given this book to read by my manager several years ago, and since then I have read a lot of books on team leadership. This book was one of the simpler reads.

To summarize this book, the authors take an approach akin to live-and-learn, learn-from-your-mistakes, and generalize-from-personal-experience. I wrote in 2006: Belasco and Stayer have written an oddly titled best-selling book in the first person based on these principles of leadership. Flight of the Buffalo (FOTB) is a joint venture that dives head-first into experiences of running companies, heuristics of leadership, visual analogies, gut feel, earthy common sense and best practices of "leadership". The oddity of the title is explained early in the book.

The book begins with the authors' journeys into leadership and various related concepts and ideas. Among other ideas, intellectual capitalism, leadership vision, focus, direction, removing obstacles, developing ownership in employees, self-directed action and learning to be the leader (the lead goose) are discussed. Every chapter is littered with short stories and a moral; there is an Aesop's Fables-like feel to the book. Real-life examples are touching and real; however, if you have read Northouse's LTP previously, you can draw parallels to Belasco and Stayer's experiences. Specific leadership theories presented in LTP map easily onto the wise words the authors present. The book is enjoyable, and almost actionable. Some of the advice is basically common-sense best practice in action. The writing style follows the pattern of "try, try again until you succeed, or decide to do things differently".

The authors introduce interesting words and concepts. I liked the word the authors invented to mean the inverse of leadership - "status-quo-ship". Another favorite is the concept of the "lead goose" in the "intellectual capitalism era". Good advice is provided on every page of the book, and obvious common sense is prescribed often. For example, "Leaders proact, not react" is treated as a chapter; the basic premise is that leaders should prevent problems rather than solve problems. Basic management tenets are also provided for the uninitiated: Deliverable (What will be delivered?), Measurement (How will we know it is done?), Date (When will it be done?), Person Responsible (Who will do it?). The authors recommend that every employee do a process analysis by asking "What can I stop doing?" - remove obstacles. Expectation setting on staff, customers and oneself is discussed. Henry Kissinger is cited as asking "Is this your best work?". The authors push for excellence through action.

I found the authors doing a good job in the area of potential and reaching it. What's the difference between those who reach their potential and those who don't? Those who do bring a discipline with them to every task they face. They are continuously willing to challenge themselves. They keep learning how to get better because they do not accept falling short of their potential.

Please support this blog by viewing and purchasing the book from Amazon:

This book can become suddenly interesting and intensely revealing if you decide to read Northouse's LTP first. It is the perfect antidote to analysis by knowledge (knowing too much, but acting too little).

Sunday, October 28, 2012

SSH Module Installation on Strawberry Perl

Strawberry Perl doesn't come with SSH modules pre-installed. Here is how to install one; the full CPAN session follows.
C:\>perl -MCPAN -e shell
cpan shell -- CPAN exploration and modules installation (v1.9304)
ReadLine support enabled
cpan> install Net::SSH
Fetching with LWP:
http://cpan.strawberryperl.com/authors/01mailrc.txt.gz
LWP failed with code[500] message[Can't connect to cpan.strawberryperl.com:80 (connect: timeout)]
As a last ressort we now switch to the external ftp command 'C:\WINDOWS\system32\ftp.EXE'
to get 'C:\strawberry\cpan\sources\authors\01mailrc.txt.gz.tmp3104'.
Doing so often leads to problems that are hard to diagnose.
If you're victim of such problems, please consider unsetting the ftp
config variable with
o conf ftp ""
o conf commit
I would like to connect to one of the following sites to get 'authors/01mailrc.txt.gz':
http://www.perl.org/CPAN/
ftp://ftp.perl.org/pub/CPAN/
Is it OK to try to connect to the Internet? [yes] yes
Fetching with LWP:
http://www.perl.org/CPAN/authors/01mailrc.txt.gz
LWP failed with code[500] message[Can't connect to www.perl.org:80 (connect: timeout)]
Fetching with LWP:
ftp://ftp.perl.org/pub/CPAN/authors/01mailrc.txt.gz
Fetching with LWP:
http://cpan.strawberryperl.com/modules/02packages.details.txt.gz
LWP failed with code[500] message[Can't connect to cpan.strawberryperl.com:80 (connect: timeout)]
Fetching with LWP:
http://www.cpan.org/modules/02packages.details.txt.gz
LWP failed with code[500] message[Can't connect to www.cpan.org:80 (connect: timeout)]
Fetching with LWP:
http://www.perl.org/CPAN/modules/02packages.details.txt.gz
LWP failed with code[500] message[Can't connect to www.perl.org:80 (connect: timeout)]
Fetching with LWP:
ftp://ftp.perl.org/pub/CPAN/modules/02packages.details.txt.gz
Fetching with LWP:
http://cpan.strawberryperl.com/modules/03modlist.data.gz
LWP failed with code[500] message[Can't connect to cpan.strawberryperl.com:80 (connect: timeout)]
Fetching with LWP:
http://www.cpan.org/modules/03modlist.data.gz
LWP failed with code[500] message[Can't connect to www.cpan.org:80 (connect: timeout)]
Fetching with LWP:
http://www.perl.org/CPAN/modules/03modlist.data.gz
LWP failed with code[500] message[Can't connect to www.perl.org:80 (connect: timeout)]
Fetching with LWP:
ftp://ftp.perl.org/pub/CPAN/modules/03modlist.data.gz
Creating database file ...
Gathering information from index files ...
Populating database tables ...
Done!
Running install for module 'Net::SSH'
Running make for I/IV/IVAN/Net-SSH-0.09.tar.gz
Fetching with LWP:
http://cpan.strawberryperl.com/authors/id/I/IV/IVAN/Net-SSH-0.09.tar.gz
LWP failed with code[500] message[Can't connect to cpan.strawberryperl.com:80 (connect: timeout)]
Fetching with LWP:
http://www.cpan.org/authors/id/I/IV/IVAN/Net-SSH-0.09.tar.gz
LWP failed with code[500] message[Can't connect to www.cpan.org:80 (connect: timeout)]
Fetching with LWP:
http://www.perl.org/CPAN/authors/id/I/IV/IVAN/Net-SSH-0.09.tar.gz
LWP failed with code[500] message[Can't connect to www.perl.org:80 (connect: timeout)]
Fetching with LWP:
ftp://ftp.perl.org/pub/CPAN/authors/id/I/IV/IVAN/Net-SSH-0.09.tar.gz
Fetching with LWP:
ftp://ftp.perl.org/pub/CPAN/authors/id/I/IV/IVAN/CHECKSUMS
Checksum for C:\strawberry\cpan\sources\authors\id\I\IV\IVAN\Net-SSH-0.09.tar.gz ok
Scanning cache C:\strawberry\cpan\build for sizes
DONE
CPAN.pm: Going to build I/IV/IVAN/Net-SSH-0.09.tar.gz
Checking if your kit is complete...
Looks good
Writing Makefile for Net::SSH
cp SSH.pm blib\lib\Net\SSH.pm
IVAN/Net-SSH-0.09.tar.gz
C:\strawberry\c\bin\dmake.EXE -- OK
Running make test
C:\strawberry\perl\bin\perl.exe "-Iblib\lib" "-Iblib\arch" test.pl
1..1
ok 1
IVAN/Net-SSH-0.09.tar.gz
C:\strawberry\c\bin\dmake.EXE test -- OK
Running make install
Prepending C:\strawberry\cpan\build\Net-SSH-0.09-cQfbZo/blib/arch C:\strawberry\cpan\build\Net-SSH-0
.09-cQfbZo/blib/lib to PERL5LIB for 'install'
Installing C:\strawberry\perl\site\lib\Net\SSH.pm
Appending installation info to C:\strawberry\perl\lib/perllocal.pod
IVAN/Net-SSH-0.09.tar.gz
C:\strawberry\c\bin\dmake.EXE install UNINST=1 -- OK
cpan>

Monday, October 22, 2012

Identify a “Big Ball of Mud” in Software

“A Big Ball of Mud is a haphazardly structured, sprawling, sloppy, duct-tape-and-baling-wire, spaghetti-code jungle. These systems show unmistakable signs of unregulated growth, and repeated, expedient repair. Information is shared promiscuously among distant elements of the system, often to the point where nearly all the important information becomes global or duplicated. The overall structure of the system may never have been well defined. If it was, it may have eroded beyond recognition. Programmers with a shred of architectural sensibility shun these quagmires. Only those who are unconcerned about architecture, and, perhaps, are comfortable with the inertia of the day-to-day chore of patching the holes in these failing dikes, are content to work on such systems.”
—Brian Foote and Joseph Yoder, Big Ball of Mud. Fourth Conference on Patterns Languages of Programs (PLoP '97/EuroPLoP '97) Monticello, Illinois, September 1997
In a development team, people are focused day to day on completing enhancements and/or new tickets. Generally, the initial work of setting up the architecture is long over. The original understanding of the structures that were set up fades, and the shortcuts taken thereafter, due to exceptional requirements and the need to deliver 'yesterday', usually end up as residue and new dependencies in the code. After a while it is easy to identify codebases that are deemed unfit for consumption.

How do you know if the codebase you're dealing with is spaghetti?
If you're reviewing new code, the test is simple: does reading the code make you proud of the team or person that contributed it? If it doesn't, you are probably looking at the beginnings of spaghetti.

There are a lot of other measures: cyclomatic complexity, N-path metrics, dependency mapping, unused-code detection, etc. It is mathematically impossible to determine all unused code via static analysis; however, static analysis can still provide useful results. Comprehensive runtime usage metrics are also difficult to collect, and depending on the complexity and data needs it can be nearly impossible to detect unused code. At work, I am putting together a strategy to take a million-plus lines of code and begin a "cleanup" project. Lining up spaghetti in a big bowl of mud will be challenging. I will post the strategies I employ, the tactics I use, the tools that prove useful, and the technologies that help in the coming months.
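
As a rough illustration only, and no substitute for real static-analysis tools, a crude cyclomatic-complexity proxy can be computed by counting decision points in a source file; the keyword list and the "decisions + 1" approximation below are simplifications I am assuming for the sketch.

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class ComplexitySniff {
        // Branching constructs and boolean operators that add execution paths.
        private static final Pattern DECISIONS =
                Pattern.compile("\\b(if|for|while|case|catch)\\b|&&|\\|\\|");

        public static void main(String[] args) throws Exception {
            String source = Files.readString(Path.of(args[0]));
            Matcher m = DECISIONS.matcher(source);
            int decisions = 0;
            while (m.find()) {
                decisions++;
            }
            // Cyclomatic complexity of a single-entry body is roughly decisions + 1.
            System.out.println(args[0] + " ~ complexity proxy: " + (decisions + 1));
        }
    }

Files with an unusually high count relative to their size are good first candidates for the cleanup backlog.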




Monday, October 15, 2012

COTS versus FOSS

COTS, FOSS or FOSS+Support. Which one should you choose?

The answer: it depends. (Surprise)

Just because various software vendors don't invest in cross-platform software development doesn't mean you can't migrate to a new platform. COTS doesn't necessarily mean vendor lock-in, and FOSS doesn't necessarily mean vendor independence and open standards.

This is the nature of competition between Free Open Source Software (FOSS) initiatives and established Commercial Off-The-Shelf (COTS) software manufacturers. Executives are faced with immeasurable intangibles and difficult decisions for IT investment. There are many ways to crack the puzzle. Here are four important things to ask yourself:

Ease of integration: open standards - do you need the solution to be flexible and easy to integrate?
Flexibility and extension - do you predict a need to extend internal components or extend the core product?
Supportability - do you have internal IT operations that need to support the solution? Do you have the skills in-house to support and diagnose it?
Cost - does it make sense to buy a product versus absorbing the support costs of FOSS?

These are a few factors that need to be evaluated. Solution architecture evaluations require a deep dive into specifics. ATAM™ (the Architecture Tradeoff Analysis Method) is a framework for architecture evaluations that I have used; I plan to get certified and use it officially in engagements in the future. It is a valuable guide for generating a utility tree and evaluating quality attributes. More to come ...

Sunday, October 14, 2012

Alignment, Motivation, Change & Commitment

If you have a large new strategic initiative, how do you get buy-in, commitment, alignment, motivation and change management communicated? According to the research led by Jim Collins in his book "Good to Great", you don't.

In a large company like Kroger, the Level 5 CEO did not spend much time aligning 50,000 employees to the new strategy.
Level 5 leaders simply don't worry about that upfront; rather, they depend on turning what Collins calls the "flywheel": let the flywheel do the 'talking'. Executing and then repeating the success of a strategy, and communicating it, allows people to extrapolate – people want to be part of a winning team.
Alignment, motivation, change and commitment take care of themselves. In my professional life, I have seen that happen – your strategy becomes everyone's strategy! Everyone takes ownership and enjoys shared success. It is possible; I have been part of it and recognize the 'chemistry'.

Saturday, October 13, 2012

Intellectual Property: Current Trends and Issues in I.T.

Introduction

Open source software, out-sourcing software development and contract programmers pose intellectual property theft exposure for companies today.

More brick-and-mortar corporations are investing heavily in I.T. In-house software development teams come with additional responsibility and risk for the leaders. As more and more software products use component-based technologies there is an increased chance of using open-source products without understanding their licenses.

Consultants and contract workers are hired for software development projects in addition to permanent employees to reduce time to market. Software development work is outsourced to other countries to cut I.T. spending. All these strategies have one common negative aspect – potential violation of intellectual property rights and subsequent legal action.

In this paper, I briefly explore these three strategies and assess the risk and exposure relative to intellectual property violations.

Intellectual Property Issues in the I.T. Department

According to independent research conducted by Forrester, CIOs of $1 billion-plus companies cite "Intellectual Property Theft" as the type of IT security incident that poses "the most threat" to their company's business (see Figure 1). Four out of ten CIOs don't think they spend enough on the most important security threat. Although malicious code and intellectual property theft account for 60% of all perceived risk, and 70% of CIOs approve IT budgets, 40% still think not enough is spent on security.

Most often the core differentiators of a company are its business processes, strategic information systems, and technology. Outsourcing forces the company to reveal its internal business processes to vendors. Certain countries do not have strict intellectual property laws. Forrester's Stephanie Moore warns: "North American and European companies should not consider China a viable location for software development and maintenance support. The market is too immature, and the problems associated with this immaturity - a lack of English language skills, the legal and regulatory environment and lack of intellectual property laws - make China too risky today."

Open source software is often used by IT teams to build software products, and several software frameworks are available to download for free. What many companies, architects, developers and programmers fail to comprehend is that open source is not the same as "free". Open source software is licensed. However, most open source license types, like the Berkeley Software Distribution (BSD) license or the Free Software Foundation's General Public License (GPL), lack indemnification.

The “as-is” aspect of open source software is risky. There is a possibility that parts of an open source project copied code from some other licensed product. It is very difficult for companies to compare open source with licensed software products to identify such theft. This exposes a company using open source software to lawsuits from companies claiming that the open source software violates their intellectual property rights.

(Figure 1)

Contract workers are often hired for short stints to work on software development and testing. This type of work requires full developer access to the source code. The obvious risk is that the code is stolen or exposed to others.

Three Regimes that protect IP

Trade secret classification, copyright and patents serve to protect intellectual property under law. In addition, compliance requirements of law such as Sarbanes-Oxley, Gramm-Leach-Bliley and HIPAA are driving software development shops to protect intellectual property, ensure privacy, and aim for correctness in development products and practices.

With time, trade secret laws are being tightened. Trade secret plaintiffs sometimes would couch their claims under other, alternative titles, such as "common law misappropriation," "unfair competition," or "breach of confidence." The tactic was often a deliberate ploy to avoid complying with state Uniform Trade Secrets Act [UTSA] statutes. California is the first state to pre-empt such attempts. As more states follow suit, trade secret laws will become more and more effective.

The 1998 Digital Millennium Copyright Act amended the copyright statute: defeating any technological control that restricts access to a computer program, even in order to make a legitimate backup copy, is infringement. Computer games almost always have copy protection built in, and defeating those controls would be infringement. DVDs are encrypted, another type of technological control.

Challenges to Intellectual Property by the Internet and Technology

Technology is an enabler for both innovation and crime. Companies spend millions on research, design and development. All this information is stored digitally in software files. These portable electronic files make theft easy. Software files can be copied to floppy disks, CD-RW discs, memory sticks, or other writable digital media and sneaked out of facilities.

Files can be uploaded to websites or sent by e-mail from a secure machine to the Internet. Worse, it is possible to install spyware that regularly scans machines and uploads files automatically.

Websites can screen-scrape or use portal technologies to "grab" published web pages from other websites and present them as their own. Website mirrors can be created which give access to the content of otherwise protected websites.

Hardware theft can have the same effect. A knowledge worker's laptop containing critical engineering designs can be invaluable to the knowledge thief. CEOs have the greatest fear of losing their PDAs or laptops.

Conclusion

Although protection of intellectual property is a key issue in the United States, a challenge in the future will be to ensure the same standards across nations. The Patent Cooperation Treaty is a first step in that direction; while it is gaining support in developing nations such as Oman, it remains to be seen whether it is an effective measure against software piracy and intellectual property theft.

While laws and precautions protect intellectual property, the threat of exposure will continue to increase with technological advances. The proper use of technology is closely tied to the ethical and social constitution of nations. At the core of the problem are people and their honesty and integrity. As long as money governs societal well-being, human greed will bulldoze over anything that comes in its way – including intellectual property rights.

References:

  1. Moore, Stephanie. Planning Assumption: IT Trends 2004: Offshore Outsourcing. Forrester Research Report (December 2003).
  2. Koetzle, Laura, with Charles Rutstein, Angela Tseng and Robert Whiteley. How Much Security Is Enough? Forrester Research Report (August 2003).
  3. Drakos, Nikos, and Alexa Bona. Questions and Answers on Open-Source Licensing. Gartner Research (October 2002).
  4. Vijayan, Jaikumar. Security Expectations, Response Rise in India. Computerworld, Vol. 38, No. 5 (August 30, 2004).
  5. Graves, Tait. A Trade Secret by Any Other Name Is Still a Trade Secret. The Intellectual Property Strategist, Vol. 10, No. 7, p. 3 (April 7, 2004).
  6. National Commission on New Technological Uses of Copyrighted Works. Making Backup Copies Violates Law. Information Outlook, Vol. 8, No. 7, p. 32 (July 2004).
  7. Business News Publishing. Patent Cooperation Treaty: Oman Joins the Treaty (2001).

Tuesday, October 9, 2012

Error Handling on your Web Presence is Important

InoxMovies.com showed me the following error message:

Server Error in '/' Application.

Server was unable to process request. --> Object reference not set to an instance of an object.

Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
Exception Details: System.Web.Services.Protocols.SoapException: Server was unable to process request. --> Object reference not set to an instance of an object.
Source Error:

An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.

Stack Trace:

[SoapException: Server was unable to process request. --> Object reference not set to an instance of an object.]
System.Web.Services.Protocols.SoapHttpClientProtocol.ReadResponse(SoapClientMessage message, WebResponse response, Stream responseStream, Boolean asyncCall) +431766
System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters) +204
WebReference.SeatBook.ShowSeats(Int64 TheatreId, Int64 BookingId, String ShowClass, Int64 NoOfTickets, String PartnerId, String PartnerPwd) +195
seatlayout.Seat_Layout() +743
seatlayout.Page_Load(Object sender, EventArgs e) +3161
System.Web.Util.CalliHelper.EventArgFunctionCaller(IntPtr fp, Object o, Object t, EventArgs e) +14
System.Web.Util.CalliEventHandlerDelegateProxy.Callback(Object sender, EventArgs e) +35
System.Web.UI.Control.OnLoad(EventArgs e) +99
System.Web.UI.Control.LoadRecursive() +50
System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +627





Version Information: Microsoft .NET Framework Version:2.0.50727.3603; ASP.NET Version:2.0.50727.3082


This is an example of what failing to handle exceptions does to the user experience. Besides looking broken, the raw stack trace leaks implementation details – framework versions, class names and method names – to anyone who triggers the error.
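The fix is boring but effective: catch unhandled exceptions at one choke point, log the details server-side, and show the user a generic page. The broken site above is ASP.NET, where a customErrors page would do the job; below is a minimal sketch of the same idea as a Java servlet filter, with the class name and markup invented purely for illustration.

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;

public class FriendlyErrorFilter implements Filter {

    public void init(FilterConfig config) { }

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        try {
            chain.doFilter(req, res);
        } catch (Exception e) {
            // Keep the stack trace for the operations team...
            e.printStackTrace(); // a real application would use a logging framework here
            // ...but never show it to the user.
            HttpServletResponse response = (HttpServletResponse) res;
            if (!response.isCommitted()) {
                response.reset();
                response.setStatus(HttpServletResponse.SC_INTERNAL_SERVER_ERROR);
                response.setContentType("text/html");
                response.getWriter().println(
                        "<h1>Sorry, something went wrong.</h1>"
                        + "<p>Please try again in a few minutes.</p>");
            }
        }
    }

    public void destroy() { }
}

Map the filter to /* in web.xml, and every unhandled exception turns into a polite apology instead of a stack trace.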

Monday, October 8, 2012

Google’s Big Data Stats

YouTube: 60 hours of video uploaded every 60 seconds.
Google Search Index Size: 100,000,000 GB (and growing)
GMail Active Users: 350,000,000 (and growing)
Search Response Time: 0.25 seconds

These numbers are astonishing. Reliability, availability, scalability and performance are Google's primary quality attributes.
Data is a core business asset with only a few low-hanging fruit: it grows faster than the ability to understand it, it is generated faster than it can be captured, and traditional BI tools can't scale to capture it.
Google innovated MapReduce, GFS and Bigtable to solve for these requirements; the open-source equivalents – Hadoop MapReduce, HDFS and HBase – have made the same approach broadly available.
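For context on what MapReduce actually looks like to a developer, here is the canonical word-count job written against Hadoop's Java MapReduce API – a sketch of the open-source flavor, not Google's internal implementation. Input and output paths are supplied on the command line.

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    // Map phase: emit (word, 1) for every word in the input split.
    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private final static IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        protected void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, ONE);
            }
        }
    }

    // Reduce phase: sum the counts for each word.
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();

        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class); // combine locally to cut shuffle traffic
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

The interesting part is not the code but the execution model: the same mapper and reducer run unchanged whether the input is a single file or petabytes spread across a cluster.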

Sunday, October 7, 2012

7 Habits of Ineffective People–an inverse corollary

One of my favorite non-fiction books is The Seven Habits of Highly Effective People by Stephen Covey. Sometimes it's helpful to apply an inverse angle to see if it sticks. And it's fun.

So here’s my inverse corollary:
#1. Procrastinate until it’s urgent & important.
#2. Plan as you go
#3. Let tasks automatically prioritize themselves
#4. Try to win at all costs
#5. Ensure others understand what you’re saying first
#6. Work alone and be a hero
#7. Learn just-in-time and on the spur
Of course this is not what's in the book – it's pretty much the opposite of it. Often we read books and articles that list what one must do; to make this post a bit more interesting and fun, it tells you what not to do. It may also be more instructive to learn from others' experiences and try to avoid pitfalls and bad habits.
 


Sunday, September 30, 2012

My response to an expertise request

Solution Key Components:
* Ability to ingest up to 3,000 messages per second from an external domain
* Messages can be in XML, EDI, or other formats; ~20 KB each.
* Transform into canonical format
* Perform authorization & authentication
* Break message into multiple messages (achieve parallelism)
* Route based on load, content to message queues (JMS, etc)
Function 1:
* Message queue clusters bucket and shoot to Rules engine
* Rules engine devises appropriate action, forwards to event handlers
* Event handler integrates with e-mail service to send an e-mail
Function 2:
* Message hits Data Access layer for storage
* Looking to store Transaction-type data (1bn records per month)
* Need quick retrieval techniques. Ensure Data consistency/quality
* CRUD operation time should be under 0.1 seconds
Web Component:
* Request comes in from an external site for an iFrame
* Request needs to be authenticated/authorized/load-balanced
* User information should be cached right at log-in, so the cache holds the data we expect the user to view and we don't have to fetch it when the user starts navigating
* Planning to have an Active/Active multi-site design
* Don't want to do sticky sessions
* Should we have a distributed cache with regions replicated across sites to avoid sticky sessions?
* Web layer needs to handle 500 concurrent requests minimum
* Overall solution primarily designed on-premise (Virtualized environment) with DR-site on public cloud

Solution Architecture
After thinking about the limited problem statement, it seems that security, scalability, data transformation, messaging, declarative rules, HA and persistence are key. A good architecture style for the solution design would be SEDA (staged event-driven architecture). See this for reference: http://www.eecs.harvard.edu/~mdw/proj/seda/
The solution should be broken down into a set of stages where a component or a set of components interacts with the data and performs logic. Each stage is connected by queues. An orchestration layer can govern the path depending on the routing logic.
Layer 1: Rules Engine. Fronted with incoming and outgoing queues.
Layer 2: Routing Engine: Choreographs the processes.
Layer 3: Executing Engine: Contains logic and data access.
Layer 4: Presentation: Web Server/App server with logic.
A 0.1-second budget for data retrieval is enough latency to avoid the complexity of managing a distributed cache. Content caching and an Akamai edge cache can be useful instead.
Technologies: use IBM's MQ Series to define the queues, clusters, etc. based on logical names. Install Mule ESB to implement SEDA and the routes. Bring in iLog or Aion for the declarative rules engine. Host on an internal cloud or EC2.
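To make the staged idea concrete, here is a minimal in-process sketch of SEDA in Java: each stage owns a bounded queue and a thread pool, and hands its output to the next stage's queue, so a slow stage applies back-pressure instead of falling over. The stage names, queue sizes and thread counts are illustrative assumptions; in the real design the queues would be MQ Series/JMS destinations and the routing would live in the ESB.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class SedaSketch {

    static class Stage {
        private final String name;
        private final BlockingQueue<String> inbox = new ArrayBlockingQueue<String>(1000);
        private final Stage next; // null for the terminal stage

        Stage(String name, int threads, Stage next) {
            this.name = name;
            this.next = next;
            ExecutorService workers = Executors.newFixedThreadPool(threads);
            for (int i = 0; i < threads; i++) {
                workers.submit(new Runnable() {
                    public void run() { processLoop(); }
                });
            }
        }

        void submit(String message) throws InterruptedException {
            inbox.put(message); // blocks when this stage is saturated: back-pressure
        }

        private void processLoop() {
            try {
                while (true) {
                    String msg = inbox.take();
                    String result = name + "(" + msg + ")"; // stand-in for transform/rules/persist logic
                    if (next != null) {
                        next.submit(result);
                    } else {
                        System.out.println("completed: " + result);
                    }
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }

    public static void main(String[] args) throws Exception {
        // Wire the pipeline back to front: transform -> rules -> persist.
        Stage persist   = new Stage("persist", 4, null);
        Stage rules     = new Stage("rules", 8, persist);
        Stage transform = new Stage("transform", 8, rules);
        for (int i = 0; i < 5; i++) {
            transform.submit("msg-" + i);
        }
        Thread.sleep(1000); // demo only: give the pipeline time to drain
        System.exit(0);     // demo only: the worker threads are non-daemon
    }
}

An orchestration layer (the Mule routes in the technology list above) decides which stages a given message visits; scaling a hot stage is then just a matter of raising its thread count or adding consumers on its queue.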
 

Wednesday, September 26, 2012

How would you solve for these requirements?

Sometimes I get expertise requests, and here is a challenge.

Solution Key Components:
* Ability to ingest up to 3,000 messages per second from an external domain
* Messages can be in XML, EDI, or other formats; ~20 KB each.
* Transform into canonical format
* Perform authorization & authentication
* Break message into multiple messages (achieve parallelism)
* Route based on load, content to message queues (JMS, etc)
Function 1:
* Message queue clusters bucket and shoot to Rules engine
* Rules engine devises appropriate action, forwards to event handlers
* Event handler integrates with e-mail service to send an e-mail
Function 2:
* Message hits Data Access layer for storage
* Looking to store Transaction-type data (1bn records per month)
* Need quick retrieval techniques. Ensure Data consistency/quality
* CRUD operation time should be under 0.1 seconds
Web Component:
* Request comes in from an external site for an iFrame
* Request needs to be authenticated/authorized/load-balanced
* User information should be cached right at log-in, so the cache holds the data we expect the user to view and we don't have to fetch it when the user starts navigating
* Planning to have an Active/Active multi-site design
* Don't want to do sticky sessions
* Should we have a distributed cache with regions replicated across sites to avoid sticky sessions?
* Web layer needs to handle 500 concurrent requests minimum
* Overall solution primarily designed on-premise (Virtualized environment) with DR-site on public cloud

I will post my take based on these needs. Also, it will be fun to look back at it after a while as new technologies & solution options evolve.

Sunday, September 23, 2012

A Culture of Discipline

Leaders who spend their energy trying to manage change, motivate employees and work to create alignment are negatively correlated with companies that move from good to great. To get a company into top gear, a culture of discipline is critical.

Think about it – a CULTURE of DISCIPLINE. This means that everyone does what they are supposed to do without stepping on toes, does what they do best, and is diligent about the task at hand. No one has to remind them, and they produce excellence for the team. And then you get a team of teams, a culture that pervades the organization. This is what is required for an organization that transforms itself.

I am re-reading the book "Good to Great" by Jim Collins. It's a classic. I really like books that dwell on evidence and then synthesize it into evidence-based recommendations. I have taken a few of those recommendations and internalized them as traits; I present a few as "traffic signals" here.

So here are a few traits of Great companies -

(image: traffic-signal chart of traits)

Red: Negative Correlation (with Good to Great)

Green: Positive Correlation

Black: No difference.

The culture is an enabler to excellence. It is the HOW.

The 3 circles need to intersect to understand the WHAT.

The hedgehog concept revolves around, in my mind, three questions:

Do you love what you’re doing?

Do you excel at what you’re doing?

Does it pay?

If the answer to all three is yes, you're excelling; and if it's true for the organization, it is poised to be great in the future.

A good pre-read is The 7 Habits… because it links the habits of successful individuals to the culture required for greatness in an organization.

Sunday, May 6, 2012

Trade offs

In the 70s, scientists noted that people who had hookworms did not have allergies and asthma. Yuck – who wants hookworms? Well, do you want asthma instead? Nature is all about trade-offs. The obvious symbiosis is the result of intricate trade-offs at every level. This logic applies in various domains, including software architecture. An analysis of the architecturally significant trade-offs is essential to objectively understanding any complex system relative to its risk themes.

Comprehensively analyzing software architectures can be simplified by working along significant attributes. ATAM (Architecture Tradeoff Analysis Method) was developed by the SEI at Carnegie Mellon University. Between 2006 and 2009 I visited Carnegie Mellon, took several courses, and got myself certified as a software architect. The primary benefit was gaining an appreciation of trade-off analysis based on specific attributes across the facets of an architecture.

It is another tool in the strategic and objective thinking toolbox that every enterprise architect needs. For more information: http://www.sei.cmu.edu/architecture/tools/evaluate/atam.cfm



Monday, March 5, 2012

Building a Stronger Team that YOU work in

Anyone who has worked in a team knows that there is no simple answer for team success. Each employee or team member offers a unique perspective and typically comes from a unique background. This is even true in homogeneous teams, or centers of excellence. Building and sustaining team morale can be difficult, especially in homogeneous teams, because direct comparisons can be drawn very quickly.

I recently read an article, http://www.inc.com/jeff-haden/the-5-qualities-of-remarkable-bosses.html , which outlines a list of things bosses should do to build and sustain remarkable teams. I couldn't agree more.

 

1. Develop every employee.

2. Deal with problems immediately.

3. Rescue your worst employee.

4. Serve others, not yourself.

5. Always remember where you came from.

The list above outlines how to be a good boss. However, if you are a team member, these rules apply as well.


To help continue building a stronger team, there may be a few more items to add to the list.

6. Build trust: this is the nucleus of the team. Without a core sense of trust in one another, the team can break down into clusters and tuples.

7. Enable & even enforce team transparency: In order to build trust, establishing a level playing field of opportunities within the team. “The right person on the right seat in the bus” – Collins.

8. Ensure consistent and equal visibility: in an idea economy it is very important to ensure that the right contributors are cited and given the opportunity to present.

9. Enable and enforce core values from the ground up: developing a core value set – team values, principles and operational rules – is critical.

10. Be a follower: over time every team member must learn and demonstrate the behaviors of a good follower.

On a leadership team, each member must sometimes play a dual role – leader and follower. There is so much spotlight on being a good leader today that people no longer build the skills of being a good follower. Oftentimes a good leader is also a good follower.
