Sunday, January 27, 2013

Protocols for the Security Stickler

Data communications channels are often insecure, subjecting messages transmitted over them to passive and active threats (Barkley, 1994). Internet protocols connect disparate networks, and data packets are transmitted over them. An entire protocol stack exists over which computers exchange messages. For example, Web browsers send Hypertext Markup Language (HTML) documents over the Hypertext Transfer Protocol (HTTP), which sits on top of the TCP/IP stack. Additional protocols are now in place that create secure channels for such communication: Secure Sockets Layer (SSL) sits between HTTP and TCP/IP, so for secure Web-page transfers HTTP is transmitted over port 443, the standard port assigned to SSL, rather than the unsecured port 80 assigned to HTTP. Together this results in HTTPS (HTTP over SSL) communication. SSL and TLS are security protocols primarily used to secure the network transport of messages.

Secure Sockets Layer

The Secure Sockets Layer (SSL) protocol is a security protocol that provides communication privacy over the Internet by allowing client-server applications to communicate in a way that is designed to prevent eavesdropping, tampering, or message forgery (Freier, Karlton & Kocher, 1996). SSL is composed of a handshake protocol and a record protocol, and it typically sits on top of a reliable transport protocol such as TCP. SSL evolved to its latest version, 3.0, which in turn became the basis for the Transport Layer Security protocol.

Transport Layer Security

The primary goal of the Transport Layer Security (TLS) protocol is to provide privacy and data integrity between two communicating applications; it is used to encapsulate various higher-level protocols (Dierks & Allen, 1999, p. 3). TLS is actually a combination of two layers: the TLS Record Protocol and the TLS Handshake Protocol. The TLS Record Protocol has two basic properties: connection privacy and reliability. The TLS Handshake Protocol has three basic properties: peer identity authentication, shared secret negotiation, and negotiation reliability.

One advantage of TLS is that it is independent of the application protocol (Dierks & Allen, 1999, p. 4). Higher-level protocols can be layered on top of it transparently. This leaves decisions about when to initiate a TLS handshake and how to exchange authentication certificates to the judgment of the designers of those higher-level protocols. The primary goals of the TLS protocol, thus, are to provide cryptographic security, interoperability, and extensibility. These are fundamental requirements of enterprise security.
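As a concrete illustration of layering a higher-level protocol over TLS, here is a minimal sketch using Python's standard-library `ssl` module. The host name and the HTTP request are placeholders of my choosing, and a production client would add error handling; the point is only to show TLS being wrapped around an ordinary TCP socket.

```python
import socket
import ssl

# create_default_context() enables certificate verification and hostname
# checking by default, which is what the handshake's peer authentication needs
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocol versions

def fetch_page(host: str, port: int = 443) -> bytes:
    """Send a bare HTTP request over a TLS-wrapped TCP connection."""
    with socket.create_connection((host, port)) as raw_sock:
        # wrap_socket performs the TLS handshake (peer authentication and
        # shared-secret negotiation) before any application data flows
        with context.wrap_socket(raw_sock, server_hostname=host) as tls_sock:
            request = b"GET / HTTP/1.0\r\nHost: " + host.encode() + b"\r\n\r\n"
            tls_sock.sendall(request)
            return tls_sock.recv(4096)
```

Note how HTTP itself is unchanged; only the transport underneath it differs, which is exactly the application-protocol independence described above.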

 

Sunday, January 20, 2013

Message Digests and Keys

A message digest is analogous to a handwritten signature in the real world. Digests are a convenient and useful way of authenticating messages.

Webopedia defines a message digest as:

The representation of text in the form of a single string of digits, created using a formula called a one-way hash function. Encrypting a message digest with a private key creates a digital signature, which is an electronic means of authentication. (p. 1)

A message in its entirety is taken as input and a small fingerprint is created; the message is then sent along with its unique fingerprint. When the recipient is able to verify the fingerprint of the document, it ensures that the message did not change during transmission. A message may be sent in plain text along with a message digest in the same transmission. The idea is that the recipient can verify that the plain text arrived unaltered by examining the digital signature. The most popular message digest algorithm is MD5 (IrnisNet.com, n.d.). Created at the Massachusetts Institute of Technology, it was published to the public domain as Internet RFC 1321.

MD5

MD5, developed by Dr. Ronald L. Rivest, is an algorithm that takes as input a message of arbitrary length and produces as output a 128-bit "fingerprint" or "message digest" of the input (Abzug, 1991). While not mathematically proven, it is conjectured that it is not feasible to recreate a message from its digest. In other words, it is computationally infeasible to "produce any message having a given pre-specified target message digest" (Abzug, 1991).

MD5 is described in Request for Comments (RFC) 1321. Rivest (1992) summarized MD5 as follows:

The MD5 algorithm is an extension of the MD4 message-digest algorithm. MD5 is slightly slower than MD4, but is more "conservative" in design. MD5 was designed because it was felt that MD4 was perhaps being adopted for use more quickly than justified by the existing critical review; because MD4 was designed to be exceptionally fast, it is "at the edge" in terms of risking successful cryptanalytic attack. MD5 backs off a bit, giving up a little in speed for a much greater likelihood of ultimate security. (p.3)

Message Digest 5 is an enhancement over MD4; Rivest (1992) describes this version as more conservative than its predecessor and easier to codify compactly. The algorithm provides a fingerprint of a message of any length. Coming up with two messages (plain texts) that resolve to the exact same fingerprint takes on the order of 2^64 operations. Reverse-engineering a plain-text message that matches a given fingerprint requires on the order of 2^128 operations. Such large numbers make these attacks computationally infeasible.
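As a quick sketch, an MD5 fingerprint can be computed with Python's standard library; "abc" is one of the sample inputs from the test suite in RFC 1321. (Practical MD5 collisions have since been demonstrated, so MD5 is no longer recommended where collision resistance matters.)

```python
import hashlib

# MD5 always produces a 128-bit (32 hex character) fingerprint,
# regardless of the length of the input message
digest = hashlib.md5(b"abc").hexdigest()
print(digest)  # 900150983cd24fb0d6963f7d28e17f72 (RFC 1321 test vector)

# even a megabyte of input yields a digest of the same fixed length
long_digest = hashlib.md5(b"x" * 1_000_000).hexdigest()
```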

SHA-1

The Secure Hash Algorithm 1 (SHA-1) is an advanced algorithm adopted by the United States of America as a Federal Information Processing Standard. SHA-1, as explained in RFC 3174, is employed for computing a condensed representation of a message or a data file (Jones, 2001). The algorithm can accept a message of any length (theoretically, less than 2^64 bits); the output is a 160-bit message digest that is computationally unique to the given input. This signature can be validated against a previously computed signature.

Demonstration. For example, if a user registers with the password "purdue1234", applying the SHA-1 algorithm results in the 160-bit digest "8ad4d7e66116219c5407db13280de7b4c2121e23". This digest can be saved in the database instead of the plain-text password the user registers with. The next time the user signs on with the same plain-text password, it is converted to the same signature, which can then be compared to authenticate the user. If the user enters a different password, say "rohit1234", SHA-1 digests it as "fb0f57cb70fbd8926f2912585854cbe4bcf83942". This triggers a mismatch and authentication fails. The algorithm is guaranteed to generate the same 160-bit signature for a given plain text, and it is computationally infeasible to reverse the digest into the plain text. Therefore, even if the database is "hacked", the passwords will not be usable. This is one of the most common techniques employed in the industry for saving sensitive data that only needs to be verified, not reused.

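The demonstration above can be reproduced with Python's `hashlib`. The exact hex digests depend on the precise input bytes, so only the general properties are asserted here; note also that modern practice adds a per-user salt and a deliberately slow hash on top of this basic mechanism.

```python
import hashlib

def sha1_hex(password: str) -> str:
    # SHA-1 of the password's UTF-8 bytes, as a 40-character hex string
    return hashlib.sha1(password.encode()).hexdigest()

stored = sha1_hex("purdue1234")          # saved at registration time
attempt = sha1_hex("purdue1234")         # recomputed at sign-on
assert stored == attempt                 # same input always yields the same digest
assert sha1_hex("rohit1234") != stored   # a different password fails to match
assert len(stored) == 40                 # 160 bits = 40 hex characters
```
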
DSA

Digital Signature Algorithm (DSA) is an algorithm inherited from the National Security Agency (NSA) and published by the National Institute of Standards and Technology (NIST) in the Digital Signature Standard (DSS) as part of the United States government's Capstone project (RSA Laboratories, n.d.). In order to gain a better understanding of DSA, the discrete logarithm problem needs to be explained. RSA Laboratories' documentation explains that for a group element g, multiplying g by itself n times is written g^n; the discrete logarithm problem is as follows: given two elements g and h of a finite group G, find an integer x such that g^x = h. The discrete logarithm problem is a complex one; it is considered a harder one-way function than those underlying algorithms based on the factoring problem.
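The one-way nature of the discrete logarithm can be sketched with deliberately tiny numbers. The values of p, g, and x below are toy choices of mine; real systems use moduli thousands of bits long, where the brute-force search shown here is hopeless.

```python
p = 2579          # a small prime modulus (toy-sized)
g = 2             # the base
x = 765           # the secret exponent
h = pow(g, x, p)  # easy direction: computing h = g^x mod p is fast

# hard direction: recovering x from (g, h, p) requires search; brute force
# succeeds here only because p is tiny
recovered = next(e for e in range(p) if pow(g, e, p) == h)
```

The gap between the cheap forward computation and the exhaustive search for `recovered` is exactly the asymmetry that DSA's security rests on.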

Implementations of the algorithm that have emerged are quick, with a big-O of O(n). The big-O notation is a theoretical measure of the execution of an algorithm, usually the time or memory needed for a problem of size n, where n is usually the number of items (NIST, 1976). With DSA, signature generation is faster than signature verification, whereas with the RSA algorithm verification is much faster than the generation of the signature itself (RSA Laboratories, n.d.). Initial criticism of the algorithm centered on its lack of flexibility compared with the RSA cryptosystem, its verification performance, adoption issues cited by hardware and software vendors that had standardized on RSA, and the discretionary selection of the algorithm by the NSA (RSA Laboratories, n.d.). DSA has since been incorporated into several specifications and implementations, and it can now be considered a good choice for adoption by the enterprise.

Secret Keys

Two general types of cryptosystems have evolved over the decades: secret-key cryptography and public-key cryptography. In secret-key cryptography, as the name suggests, a key is kept secret from the public; only the recipient and the sender have knowledge of the key. This is also known as symmetric-key cryptography. In a public-key cryptosystem, two keys play a role in ensuring security. The public key is well published or can be requested; the private key is kept secret by the individual parties. This scheme requires a certificate authority so that tampering with public keys is prevented. The primary advantage of this scheme over the other is that no secure courier is needed to transfer a secret key. The main disadvantage is that broadcasting of encrypted messages is not possible.

Symmetric Keys

This scheme is characterized by the use of one single key that can both encrypt and decrypt the plain-text message. Since the encryption and decryption algorithms now exist in the public domain, the only secrecy in this scheme comes from knowledge of the key. If the key is known only to the parties in a secured communication, secrecy can be provided (Barkley, 1994). When symmetric-key cryptography is used for communications and the messages are intercepted by a hacker, it is computationally infeasible to derive the key or decrypt the cipher text even if the encryption algorithm is known. The cipher text can only be decrypted if the secret key is known. Because the secret key is known only to the message sender and the message receiver, the secrecy of the transmission can be guaranteed.
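The single-key property can be illustrated with a toy XOR stream cipher. This construction is for teaching only and is not secure in practice; real deployments use vetted ciphers such as AES. The key and message below are illustrative values.

```python
import os

def xor_cipher(key: bytes, message: bytes) -> bytes:
    # XOR is its own inverse, so the same function (and the same key)
    # both encrypts and decrypts -- the defining property of symmetric schemes
    return bytes(m ^ key[i % len(key)] for i, m in enumerate(message))

key = os.urandom(16)                      # the shared secret known to both parties
cipher = xor_cipher(key, b"attack at dawn")
plain = xor_cipher(key, cipher)           # applying the same key recovers the message
```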

MAC. While secrecy can be guaranteed, the integrity of the message cannot. In order to ensure that the message has integrity, a cryptographic checksum called a Message Authentication Code (MAC) is appended to the message. A MAC is a type of message digest: it is smaller than the original message, it cannot be reverse-engineered, and colliding messages are hard to find. The MAC is computed as a function of the message being transmitted and the secret key (Barkley, 1994). This is done by the message originator, the sender.
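Python's standard-library `hmac` module implements one widely used MAC construction (HMAC). The sketch below shows the append-and-verify flow described above, assuming the two parties already share `secret_key` (the key and messages are placeholder values).

```python
import hashlib
import hmac

secret_key = b"shared-secret-key"
message = b"transfer 100 to account 42"

# sender computes the MAC as a function of the message and the secret key,
# then appends it to the transmission
tag = hmac.new(secret_key, message, hashlib.sha256).digest()

# receiver recomputes the MAC over the received message and compares
expected = hmac.new(secret_key, message, hashlib.sha256).digest()
assert hmac.compare_digest(tag, expected)   # integrity verified

# a tampered message produces a different MAC, so verification fails
tampered = b"transfer 999 to account 42"
bad = hmac.new(secret_key, tampered, hashlib.sha256).digest()
assert not hmac.compare_digest(tag, bad)
```

`compare_digest` is used instead of `==` to compare in constant time, which avoids leaking information through timing differences.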

Asymmetric Keys

Asymmetric-key cryptography differs in that each party holds a pair of keys: one that is well known to everyone and one that is kept private. This scheme is also known as public-key cryptography. Each key defines a function that transforms text (Barkley, 1994). The private key is secret and known only to the party that owns the key pair; the public keys are meant to be distributed. The two keys form a pair, and either one can be deemed public with the other private. Each key generates a transformation function; because the public key is known, its transformation can be derived and made known as well. In addition, the functions have an inverse relationship: if one function encrypts a message, the other can be used to decrypt it (Barkley, 1994). The transformation functions are used as follows: the sender requests the public key of the destination and uses it to transform the data to be transmitted. The sender then transmits the encrypted data to the desired recipient. The transmitted data can only be decrypted with the other key of the pair; that is, the private key of the receiver can decrypt the message. After receiving the encrypted message, the receiver uses the private key to decrypt it, after which the message can be consumed.
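The inverse-transformation property can be sketched with an RSA-style toy key pair. The primes below are tiny values chosen purely for demonstration (real keys use primes hundreds of digits long), and the "message" is simply a number smaller than the modulus.

```python
# toy RSA-style key pair
p, q = 61, 53
n = p * q                      # 3233, the public modulus
phi = (p - 1) * (q - 1)        # 3120
e = 17                         # public exponent (coprime with phi)
d = pow(e, -1, phi)            # private exponent: modular inverse of e (2753)

public_key, private_key = (e, n), (d, n)

def transform(key, value):
    # the same modular-exponentiation function serves both directions;
    # which key you hold determines which direction you can compute
    k, modulus = key
    return pow(value, k, modulus)

message = 65                              # a message encoded as a number < n
cipher = transform(public_key, message)   # sender uses the receiver's public key
plain = transform(private_key, cipher)    # receiver decrypts with the private key
```

Encrypting with one key of the pair and decrypting with the other is exactly the inverse relationship described above; running the keys in the opposite order yields the signing operation.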

The advantage of such a scheme is that two users can communicate without having to share a common key; with symmetric-key cryptography, by contrast, a common secret key must be stored by both parties, and a secret key is not something that should be shared in the first place. Distribution of secret keys also adds a layer of complexity to the security of the system. Public-key cryptography resolves this issue easily. Because it is computationally infeasible to derive the private key from the public key, it is likewise infeasible for anyone without the private key to decrypt a message encrypted with the public key. With this convenience comes inefficiency: encrypting plain text can take a long time, and the cipher text can be longer than the plain-text message itself. Also, distribution of messages to many recipients is not possible because the private key is held by only one principal, so this scheme cannot be used for encrypted broadcasts. Applications of public-key cryptography are often seen in the enterprise: authentication, integrity, and non-repudiation.

Sunday, January 13, 2013

Cryptography–It’s the Key

Julius Caesar encrypted messages so that the messenger could not understand the cipher (Faqs.org, 2003). A "shift by 3" function was used, i.e., A was substituted by D, Z by C, and so on. Only the recipient, who knew the key (three, in this case), could decipher the message. A cipher system is a way of disguising messages such that only recipients with knowledge of the 'key' can decipher them. Cryptography is the art of using cipher/crypto systems. Cryptanalysis is the art of deciphering encrypted messages without prior knowledge of the key, by means other than those intended.
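Caesar's "shift by 3" scheme is easy to sketch in code; the key is simply the shift amount, and the example plaintext is my own.

```python
import string

def caesar(text: str, shift: int) -> str:
    alphabet = string.ascii_uppercase
    # each letter moves `shift` places forward, wrapping from Z back to A;
    # characters outside A-Z (like spaces) pass through unchanged
    table = str.maketrans(alphabet, alphabet[shift:] + alphabet[:shift])
    return text.translate(table)

cipher = caesar("ATTACK AT DAWN", 3)   # -> "DWWDFN DW GDZQ"
plain = caesar(cipher, -3)             # shifting back by 3 recovers the message
```

With only 25 possible keys, a cryptanalyst can simply try them all, which is why a strong cryptosystem needs the large key space discussed below.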


A strong cryptosystem has a large key space, produces cipher text that appears random to all standard statistical tests, and resists all previously known attacks (Faqs.org, 2003). Several types of cryptography and standards exist today. Public Key Cryptography Standards (PKCS) is an important security standard; it defines a binary format that can be used for storing certificates. Both public-key and shared-key cryptography can also use message digests, which are one-way hash functions.

Sunday, January 6, 2013

Authentication–Who Are You?

Authentication

Authentication is the process of determining whether someone or something is, in fact, who or what it is declared to be (SearchSecurity.com, n.d.). Verifying an identity claim is more complex than it first appears. There are several authentication methodologies, security protocols, encryption schemes, and hashing algorithms. There is no single "best" security solution; for every implementation, it is important to evaluate the best options available.

Authentication Types

Authentication has existed since early in the history of human civilization. In the enterprise environment, it is increasingly important that the authentication architecture be well defined. It is common for the user to enter authentication credentials, and several types of authentication methods exist today: entities can be authenticated based on secret knowledge (like a username and password combination), biometrics (like fingerprint scans), or digital certificates. In private and public computer networks (including the Internet), authentication is commonly done through the use of logon passwords (Faqs.org, 2003).

Knowledge-based authentication

The most common authentication type is knowledge-based. An identification key along with a secret pass code is required to access systems protected by such an authentication scheme. A user-id and password challenge screen is commonly seen in Web-based email systems. Knowledge of the password is considered secret and is considered sufficient to let the user in.

Both ends of a session must have the secret password (and/or key) in order for authentication to take place. The password also needs to be transmitted from the principal's location to the authenticator's location (Jaworski, Perrone & Chaganti, 2001). This leads to an obvious exposure: the link between the locations needs to be secured such that snooping attempts are not possible or deciphering the data is infeasible. One way of securing such systems is by the use of Kerberos, a password-based authentication system in which a secret symmetric key is used to encipher and decipher credentials.
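On the storage side of knowledge-based authentication, one common mitigation is to keep only a salted, deliberately slow hash of the password rather than the password itself. A sketch using PBKDF2 from Python's standard library; the iteration count is an illustrative choice, and "purdue1234" echoes the earlier example.

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    # a random per-user salt ensures identical passwords yield different records
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, stored):
    # recompute with the stored salt and compare digests in constant time
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, stored)

salt, stored = hash_password("purdue1234")
```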

Biometrics-based authentication

Authentication based on biometrics is still in its infancy. Unique attributes extracted from individuals are used for authentication. Fingerprints, hand geometry, facial recognition, iris recognition, and dynamic signature verification are some of the more prominent biometric technologies. Biometrics by itself is not a fool-proof technology; there are several potential ways to hack into such systems, and this risk presents additional concerns relative to privacy protection. While research on revocable biometric tokens is in progress, this technology has not yet been commercially implemented on a mass scale.

Certificate-based authentication

This technique has grown in popularity in recent years. What is a certificate? A certificate is simply data that identifies a principal. The important information contained in a certificate is the public key of the principal, the validity dates of the certificate, and the digital signature issued by the certificate issuer (Jaworski, Perrone & Chaganti, 2001). The signer uses its private key to generate a cipher text, called a signature, from a block of plain text. This cipher can only be decrypted using the signer's public key, which ensures that the signature was actually created by the signer, because the private key, as the name suggests, is secret.

This technology has significant advantages when encrypted text needs to be sent across otherwise unsecured connections. In a client-server architecture with both server-side and client-side certificates, each party can send encrypted information that the other side knows came from it, because only the public key can decrypt what the corresponding private key encrypted.

A well-known certificate authority (CA), often called the root, signs the server's public key with its private key. This way no hacker can create a false public key and pretend to communicate with signed data under assumed keys. The distribution of the root public keys is done in a secure fashion; browsers come pre-configured with these keys.

Several certificate implementations have evolved, most significantly the X.509 v3 standard. This standard allows several different algorithms to be used for creating digital signatures. An X.509 certificate contains information about the version of the certificate, the serial number, the signature algorithm and its parameters, the CA's identity and signature, the dates of validity, and the principal's identity and public key.

Sunday, December 30, 2012

Guaranteed Integrity of Messages

The ability to guarantee the integrity of a document and the authentication of the sender has been highly desirable since the beginning of human civilization. Even today, we are constantly challenged for authentication in the form of picture identification, personal hand signatures, and fingerprints. Organizations need to ensure the authentication of individuals and other corporations before conducting business transactions with them.


When human contact is not possible, the challenge of authentication and consequently authorization increases. Encryption technologies, especially public-key cryptography provide a reliable way to digitally sign documents. In today’s digital economies and global networks digital signatures play a vital role in information security.

Sunday, December 23, 2012

Security–the most important Quality Attribute

While digital signatures and encryption are old technologies, their importance has been renewed by the rapid growth of the Internet. Online business transactions have been growing at a rapid pace, and more and more monetary transactions occur electronically over the Internet. Non-repudiation is important when personal contact is not possible, and digital signatures serve that purpose. Encryption ensures that information sent to the intended party can be read only by that party, unaltered. Several technologies support encryption.

The enterprise security model consists of domains that are protected from resources not permitted to access them or execute their functions. There is a clear distinction between authorizing a resource and authenticating a resource. When a person shows a driver's license at a bar before getting a drink, the bartender compares the photograph with the person presenting it; this is authentication. When the bartender checks the date of birth for legal drinking age, the requester has been authorized for the drink.

In the corporate environment, it is exceedingly important that the same form of authentication and authorization take place digitally. With new business channels open on the Internet, web applications deployed on the intranet for employees, and business-to-business (B2B) commerce channels created on the extranet, millions of dollars worth of transactions occur.

Business-critical information is passed on the wire between computers, which, if exposed to the general public or placed in the wrong hands, could be disastrous for the company in question. For every business that exists there is a threat to the business. For e-business initiatives, the anonymity of the network, especially the Internet, brings new threats to information exchange. It is important that information is exchanged securely and confidentially.

DSV and Custody Chaining

Dynamic signature verification (DSV) is the process by which an individual's signature is authenticated against a known signature pattern. The dynamics of the process of creating a signature are initially enrolled into the authenticating system and then used to compare future signature patterns. Several factors, including speed, pressure, acceleration, velocity, and size ratios, are taken into account. These measurements are then digitized and stored for later comparison.

Signatures have long been used to authenticate documents in the real world; before the technology wave, signatures, seals, and tamper-proof envelopes were used for secure and valid message exchange. With the onset of technology and digital document interchange, a growing need for authenticating digital documents has emerged.

Digital signatures emerged in the 1970s as a means of deriving a fixed-length code from an input of theoretically unlimited length. The signature is expected to be collision-free and computationally infeasible to reverse into the original document. Both handwritten signatures and digital signatures have to comply with the basic requirements of authenticity, integrity, and non-repudiation (Elliott, Sickler, Kukula & Modi, n.d.).

In the information technology departments of corporations, documents are regularly exchanged between teams, companies, outsourced contract workers, internal consultants, and executive management. These documents are often confidential and contain company secrets. However, due to resource constraints, such documents are often shared with consultants and contract workers.

It is therefore a viable solution to apply digital signatures to those documents using proper authentication protocols. One way this could be achieved is through dynamic signature verification. An interface that can create unique digital signatures from the physical dynamic signature and apply them to the electronic document would be ideal.

A verifiable, trusted signature-creation technique is required for enterprise-wide document collaboration, and DSV is a technology ideally suited to this purpose. Sensitive documents can be signed using a DSV module, which can electronically sign the e-document. The document can then be shared with confidence that it has not been altered in transit, and the recipient will be able to trust it.

Sunday, December 16, 2012

Fingerprinting and Biometrics at Airports

I was unpleasantly surprised to see longer than usual lines at the international port of entry at O’Hare this February. My flight connected me to O’Hare International at Chicago from Schiphol Airport at Amsterdam, Netherlands. It was a long flight and it wasn’t apparent to me the reason for the delay in processing passengers. A huge line of people with hand luggage zigzagged what appeared to be a large hall, the end of the line fading in the distance. I was tired and wanted to get to my apartment and I did not believe I would ever get there at this rate.

In a 2004 article published in New Scientist, Will Knight reports that the Department of Homeland Security (DHS) initiated the installation of a fingerprinting system, with a total of 115 airports having the biometric security equipment installed (Knight, 2004). A DHS spokesperson told Knight that each finger scan takes just three seconds and that pilot schemes produced just one error in every thousand checks (Knight, 2004).


The early morning long lines brought back memories of the traditional waits outside the U.S. Consulate General in India, where visas are issued. It is said that neither heat, rain, nor storms get in the way of ticket seekers to paradise itself, the United States of America. Visa applicants are happy to divulge their fingerprints for an entry permit into the USA.

Knight (2004) cites Bruce Schneier, founder of the US security consultancy firm Counterpane, who believes that gathering more information through this method only collects more data, while the problem with security lies in a lack of intelligence, not the amount of data. Schneier believes that there is enough data already available but not enough intelligence to process it. He goes on to explain that the terrorists who crashed airplanes into buildings on September 11, 2001 had valid passports and were not on previous terrorist watch-lists.

The U.S. immigration officer asked me to wet my left and right index fingers and place them on the fingerprint sensor, just as the visa officer had asked me to do in India. That visa had been issued at the end of the day – a very long day. There was a camera placed beside the fingerprint sensor, but no pictures were taken in either place. I placed my finger, and the immigration officer instructed me to wait. The computer system looked up my fingerprint and compared it with their databases in what seemed like an eternity. Finally, the immigration officer smiled back at me and let me proceed. I still had to go to baggage collection and customs; I feared more divulgence of impressions from body parts. Thankfully there were none. After ninety minutes of baby-steps through the immigration lines and multi-finger scans at the Chicago O'Hare airport, I was free to step into the "Land of the free, home of the brave".

Sunday, December 9, 2012

SOA 2004–a blast from the past or what I thought about it back then

I wrote up some views on Service Oriented Architecture in 2004. This was a time when XML was a buzzword and people were wondering and writing about SOA. I was implementing a leading-edge solution for a policy administration system using an ACORD XML interface and hosting Internet B2B services for independent agencies: a soup-to-nuts solution that included XML, SOAP, WSDL, Java EE, EJB, and RDBMS + COTS.

I also wrote this unpublished paper:

Introduction

This is the most important decade for distributed computing. Reuse and interoperability are back in a big way for distributed applications. Over the years, several types of reuse methodologies have been proposed and implemented with little success: procedure reuse, object reuse, component reuse, design reuse, and so on. None of these methodologies tackled interoperable reuse. Enter web-services. Web-services are big, and everyone in the industry is taking this seriously. Web-services are reusable services based on industry-wide standards. This is significant because it could very well be the silver bullet for software reuse. Software can now be reused via web-services, and applications can be built leveraging Service Oriented Architectures. This paper discusses Service Oriented Architecture and highlights its significance and relationship to web-services.

Distributed Software Applications

Software applications deployed across several servers and connected via a network are called distributed applications. Web-services promise to connect such applications even when they are deployed across disparate platforms in a heterogeneous application landscape. Cross-platform capability is one of web-services' key attractions, because interoperability has been a dream of the distributed-computing community for years (Vaughan-Nichols, 2002). In the past, distributed computing was complex and clunky. Previous technologies like CORBA (Common Object Request Broker Architecture), RMI (Remote Method Invocation), XML-RPC (Extensible Markup Language Remote Procedure Call), and IIOP (Internet Inter-ORB Protocol) were used for distributed applications and information interchange, but these were not based on strict standards.

Sun Microsystems' RMI (Remote Method Invocation) over JRMP (Java Remote Method Protocol) was the next revolution in distributed computing. JRMP required both client and server to have a JRE (Java Runtime Environment) installed. It provided DGC (Distributed Garbage Collection) and advanced connection management. With the release of its J2EE specification, Sun introduced EJBs (Enterprise JavaBeans). EJBs promised to support both RMI over JRMP and CORBA IDL (Interface Definition Language) over IIOP (Internet Inter-ORB Protocol). Distribution of these beans (read: objects) and transaction management across topologies seemed to be a blue-sky dream that never materialized. In addition, the J2EE standard was not envisioned to be a truly enterprise standard, in the sense that integration with other object-oriented platforms was not "graceful". Microsoft introduced .NET and C#, which directly compete with J2EE and Java. The continued disengagement between these two major platforms has reached its threshold; it has become imperative that there be a common cross-platform, cross-vendor standard for interoperability of business services. Web-services seem to have bridged the gap in the distributed computing space in a way no other technology has in the past: by standardizing the interoperability space.

Dublin Core Metadata Glossary defines interoperability as:

The ability of different types of computers, networks, operating systems, and applications to work together effectively, without prior communication, in order to exchange information in a useful and meaningful manner. There are three aspects of interoperability: semantic, structural and syntactical.

Vaughan-Nichols (2002) states that web-services enables interoperability via a set of open standards, which distinguishes it from previous network services such as CORBA’s Internet Inter-ORB Protocol (IIOP).

Web Services

The word "service" carries different connotations for different audiences, so it helps to first understand what a service is not. One damaging assumption is that "service" is simply another term for "component" (Perrey & Lycett, 2004). Component-orientation, object-orientation and integration-based architectures occupy the same space and are often a source of confusion.

Service-Architecture defines a service as follows: "A service is a function that is well-defined, self-contained, and does not depend on the context or state of other services." Perrey and Lycett attempt to define "service" by unifying its usage context across business, technical, provider and consumer views. They describe and contrast multiple perspectives on "service" in detail: "The concept of perspective is the key to reconciling the different understandings of service. Business participants view a service (read business service) as a unit of transaction, described in a contract, and fulfilled by the business infrastructure." They contrast this with the technical participant's perception of a service as a "unit of functionality with the semantics of service described as a form of interface". The authors go on to define a service: "Service is functionality encapsulated and abstracted from context". They argue that the contrasting perceptions of services are not really an issue as long as there is commonality in the underlying perception. The commonality seems to lie in the reuse of services.

“Web services can be characterized as self-contained, modular applications that can be described, published, located and invoked over a common Web-based infrastructure which is defined by open standards.” (Zimmermann, Milinski, Craes, & Oellermann, 2004)
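The "well-defined, self-contained" property can be illustrated in miniature: a service exposes a contract (its operations and message shapes) and keeps no conversational state between calls, so every invocation depends only on its input message. The sketch below models such a contract in Python; all names (CurrencyService, ConversionRequest, convert) are hypothetical illustrations, not part of any cited standard.

```python
# A minimal illustration of a stateless service contract.
# All names here are hypothetical.
from dataclasses import dataclass


@dataclass(frozen=True)
class ConversionRequest:
    """The input message: the full context the service needs."""
    amount: float
    rate: float


class CurrencyService:
    """Stateless: every call depends only on its input message,
    never on prior calls or on the state of other services."""

    def convert(self, request: ConversionRequest) -> float:
        return round(request.amount * request.rate, 2)


svc = CurrencyService()
result = svc.convert(ConversionRequest(amount=100.0, rate=1.25))  # → 125.0
```

Because the contract is explicit and the service holds no state, any compliant implementation can be swapped in behind the same interface, which is the reuse property the authors emphasize.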

The Web Service Architecture

We are on the cusp of building "plug-compatible" software components that will reduce the cost of software systems while at the same time increasing their capabilities (Barry, 2003). Applications can be built on architectures which leverage these services. The goal is for a service-oriented architecture to be decoupled from the very services it invokes.

Service-oriented architecture leverages the interoperability of web-services to make distributed software reusable.

Web-services make the process more abstract than object request brokers by delivering an entire external service without users having to worry about moving between internal code blocks (Vaughan-Nichols, 2002). A recent Yankee Group survey showed that three out of four enterprise buyers plan on investing in SOA (Service-Oriented Architecture) technology within one year (Systinet, 2004).

Interoperability is driven by standards, specifications and their adoption. A service operates under a contract or agreement which sets expectations, and a particular ontological standpoint that influences its semantics (Perrey & Lycett, 2003). Applications that expose business processes as web-services are simpler to invoke and reuse from other applications because of the pre-defined contracts that each service publishes. Web-services are interoperable and service-oriented architecture enables reuse; as a result, SOA and web-services have formed a natural alliance (Systinet, 2004).

The collection of web-service specifications enables a consortium of vendors, each with its own underlying implementation of these standards, to compete viably in the reuse and interoperability market. This is good because the competition is limited to the implementation level as opposed to the standards level. Vendors will enable a compliance-based marketplace for distributed applications which expose web-services. This would enable SOA-based web-services to consistently search for and leverage services in a business domain, via well-known public, private or protected registries, that are compliant with these standards.
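The registry-lookup pattern described above can be sketched as a lookup table mapping a published contract name to the endpoints of competing, standards-compliant implementations; the consumer binds to the contract, not to any one vendor. This is a toy in-memory sketch, and every vendor and contract name in it is hypothetical.

```python
# A toy in-memory service registry: consumers discover endpoints by the
# published contract name rather than hard-coding one implementation.
# All vendor and contract names are hypothetical.

registry: dict[str, list[str]] = {}


def publish(contract: str, endpoint: str) -> None:
    """A provider advertises an endpoint under a contract name."""
    registry.setdefault(contract, []).append(endpoint)


def discover(contract: str) -> list[str]:
    """A consumer asks the registry for all compliant endpoints."""
    return registry.get(contract, [])


# Two competing vendors publish implementations of the same contract.
publish("QuoteService", "https://vendor-a.example.com/quote")
publish("QuoteService", "https://vendor-b.example.com/quote")

endpoints = discover("QuoteService")  # consumer may pick either one
```

Real registries such as UDDI add rich metadata and taxonomies, but the essential decoupling, competition at the implementation level behind a shared contract, is the same.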

Practitioners have used web-services for interoperability successfully in large systems:

"To achieve true interoperability between Microsoft (MS) Office™/.NET™ and Java™, and to implement more than 500 Web service providers in a short time frame were two of the most important issues that had to be solved. The current, second release of this solution complies with the Web Services Interoperability (WS-I) Basic Profile 1.0. Leveraging the Basic Profile reduced the development and testing efforts significantly" (Zimmermann et al., 2004).

The Communication Protocol

While web-services primarily communicate over HTTP (Hyper-Text Transfer Protocol), they can communicate over other protocols as well. SOAP, originally an acronym for Simple Object Access Protocol but no longer treated as one, is popularly misrepresented as an object-access protocol; it is in fact the primary message exchange paradigm for web-services. SOAP is fundamentally a stateless, one-way message exchange paradigm (W3C, 2004).
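Concretely, a SOAP message is just an XML envelope whose body carries the operation-specific payload. The sketch below builds a minimal SOAP 1.1 envelope with Python's standard library; the service namespace and the GetQuote operation are hypothetical, while the envelope namespace is the standard SOAP 1.1 one.

```python
# Build a minimal SOAP 1.1 request envelope using only the standard library.
# The target namespace and GetQuote operation are hypothetical examples.
import xml.etree.ElementTree as ET

SOAP_ENV = "http://schemas.xmlsoap.org/soap/envelope/"  # SOAP 1.1 envelope ns
SVC_NS = "http://example.com/stockquote"                # hypothetical service ns


def build_envelope(symbol: str) -> bytes:
    envelope = ET.Element(f"{{{SOAP_ENV}}}Envelope")
    body = ET.SubElement(envelope, f"{{{SOAP_ENV}}}Body")
    request = ET.SubElement(body, f"{{{SVC_NS}}}GetQuote")
    ET.SubElement(request, f"{{{SVC_NS}}}symbol").text = symbol
    return ET.tostring(envelope, encoding="utf-8", xml_declaration=True)


message = build_envelope("ACME")
```

In practice this byte string would be POSTed to the service endpoint over HTTP (or carried by any other binding), which is exactly the one-way message exchange the specification describes; request/response interactions are built by correlating two such one-way messages.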

Interoperability is driven by standards, specifications and their adoption, and true interoperability between platforms is achieved via SOAP (Zimmermann et al., 2004). Interoperable protocol binding specifications for exchanging SOAP messages are inherent to web-services (W3C, 2004).

The collection of specifications enables a pool of vendors, each with its own implementation of these standards. This is good because competition is limited to the implementation level as opposed to the standards level. WS-standards-compliant vendors will enable a compliance-based marketplace for distributed applications, which would greatly support service-oriented architectures. This would enable SOAs to consistently search for and leverage standards-compliant services in a domain.

The Description Language and Registry

While WSDL (Web Service Description Language) describes a service, a registry is a place where the locations of WSDLs can be searched. There are two primary models for web-services registries (Sun Microsystems, 2003): UDDI and ebXML, each targeting a specific information space. When listing a service, UDDI focuses more on technical aspects while ebXML focuses more on business aspects. In a nutshell, SOAP, WSDL and UDDI fall short in their abilities to automate ad-hoc B2B relationships and associated transactions. None is qualified to address the standardization of business processes, such as the procurement process (Sun Microsystems, 2003).
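To make the description side concrete, the sketch below parses a tiny, heavily trimmed WSDL fragment and lists its port-type operations, which is roughly what a consumer does after a registry lookup returns a WSDL location. The QuoteService document and its operation names are hypothetical; only the WSDL namespace is the real one.

```python
# Extract operation names from a (hypothetical, trimmed) WSDL document.
import xml.etree.ElementTree as ET

WSDL_NS = "http://schemas.xmlsoap.org/wsdl/"  # standard WSDL 1.1 namespace

# A hypothetical WSDL fragment for illustration only; real documents also
# declare messages, bindings and service/port endpoints.
WSDL_DOC = """\
<definitions xmlns="http://schemas.xmlsoap.org/wsdl/" name="QuoteService">
  <portType name="QuotePortType">
    <operation name="GetQuote"/>
    <operation name="GetLastTradeTime"/>
  </portType>
</definitions>
"""


def list_operations(wsdl_text: str) -> list[str]:
    """Return the names of all operations declared in the WSDL."""
    root = ET.fromstring(wsdl_text)
    return [op.get("name") for op in root.iter(f"{{{WSDL_NS}}}operation")]


operations = list_operations(WSDL_DOC)  # → ['GetQuote', 'GetLastTradeTime']
```

A full client toolkit would go further, reading the binding and service sections to learn the transport and endpoint address, but the port type alone already gives the abstract contract the consumer programs against.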

The initial intent of UDDI was to create a set of public service directories that would enable and fuel the growth of B2B electronic commerce. Since the initial release of the UDDI specification, only a few public UDDI registries have been created. These registries are primarily used as test beds for web service developers.

Conclusion

Web-services in combination with service-oriented architecture have bridged the interoperability gap in the distributed computing space unlike any other technology in the past. Service-oriented architecture and web-services are a paradigm shift in the interoperability space because they are based on industry accepted standards and are simpler to implement across disparate software deployments. This technology is certainly here to stay.

Sunday, December 2, 2012

Speech Recognition in Automobiles

I wrote this in 2004 when I purchased a car with Voice Activated controls. It was amazing back then.

Speech Recognition in Automobiles

I am alone in my car cruising from Carmel, Indiana to Purdue University in West Lafayette, Indiana for a weekend class. It's early in the morning and I wonder if I will make it to class on time. After about ten minutes on Interstate 65, I ask impatiently, "How long to the destination?" Honda's advanced navigation system gears into action; it promptly queries the Global Positioning System (GPS) satellites and local GPS repeaters for the vehicle's current coordinates. It then averages out the expected speed based on current averages on the interstate, state roads and inner streets and responds back in a pleasant, natural female voice: "It is about forty-two minutes to the destination". I am definitely going to be late for class.

Speech recognition technology, once a domain of fantastic science fiction, is a reality today. This technology has begun to touch our lives on a daily basis in our automobiles. A recent article (Rosencrance, 2004) reports on the speech recognition technology in Honda automobiles. The system has the ability to take drivers’ voice commands for directions and then respond with “voice-guided turn-by-turn instructions, so they don't have to take their hands off the wheel” (Rosencrance, 2004), said Alistair Rennie, vice president of sales for IBM's pervasive computing division. Rennie added that this “goes significantly beyond what was done before in terms of being able to deliver an integrated speech experience in a car” (Rosencrance, 2004).

Using IBM's Embedded ViaVoice software, the system can recognize spoken street and city names across the continental United States (Rosencrance, 2004). The system recognizes almost every task a driver may want to accomplish while on the road: voice commands can operate the radio, compact disc (CD) player, climate control and defrost systems. It can recognize more than 700 commands and 1.7 million street and city names. All this is possible without the driver looking away from the road.

(Figure 1: the in-dash navigation display)

“Display on” I prod along. The in-dash LCD screen lights up (see Figure 1). I glance at it for a second – there is a map of the state of Indiana and a symbol inching up north towards the destination - a red bull’s eye on the electronic map. I will get there soon. I say “XM Radio Channel twenty”. The integrated satellite radio starts up and plays high quality music.

Automobiles that leverage speech recognition technology not only make vehicles more attractive to car buyers but also make the roads safer by allowing drivers to keep their eyes on the road. Research conducted by the National Highway Traffic Safety Administration (NHTSA) found that automatic speech recognition (ASR) systems distracted drivers less than in-vehicle graphical user interfaces performing the same function (Lee, Caven, Haake & Brown, n.d.).

Before long, the speech system fades down the music volume and articulates in the same pleasant voice, "Exit approaching in two miles – stay to the right". The 'exit mile countdown' repeats every half mile until the car actually takes the exit. In about ten minutes I pull into the parking lot. I am running late by ten minutes; the class has probably begun and the exam papers have probably been handed out to the cohort. Before I turn off the engine, I finally ask, "Will I make a good grade?" There is no response from the system this time.
