Sunday, February 10, 2013

Non-repudiation–not a non-issue

To understand electronic non-repudiation, we must first understand traditional non-repudiation from a legal perspective. A manual signature can be legally repudiated only if the signature is a forgery, or if an authentic signature was obtained through unconscionable conduct by a party to the transaction, fraud instigated by a third party, or undue influence exerted by a third party (McCullagh & Caelli, 2000).

From a technical perspective, non-repudiation (NR) is essentially proof that a particular principal sent or received the message in question; every message exchange can be tied to a principal with a guarantee. When a principal sends a message, an NR token is generated and verified, so the principal cannot later deny sending that message. Likewise, an NR token is created for each message received by a principal, so receipt of the message cannot be denied either.

The technical meaning of non-repudiation shifts the onus of proof from the recipient to the alleged signatory or entirely denies the signatory the right to repudiate a digital signature (McCullagh & Caelli, 2000). The use of a trusted system can solve the authentication, authorization and consequently non-repudiation issues by leveraging digital signatures.

Web-services. With more and more e-commerce and business-to-business transactions being conducted on the Web, non-repudiation and digital signatures have gained importance. In the future, digital signatures will commonly be used in this area to provide non-repudiation services to the enterprise.

Sunday, February 3, 2013

Authorization–Legal Drinking Age ?

Authorization is the process by which valuable resources are protected by granting only limited access to authenticated principals. Principals are entities that request access to resources; they can be people or other servers. It is important to note that authorization can take place only after authentication of the principal has occurred. This makes sense because principals who are unable to prove their identity should not be given permission to access sensitive information.

http://whatisscotch.com/wp-content/uploads/2011/10/scotch-Whisky-Glass.jpg

Authorization in the Enterprise

In the enterprise environment, access control comes in many flavors, including discretionary, role-based, mandatory, and firewall types of access control. Discretionary authorization is the process by which two principals are given mutually exclusive access to the same resource. For example, principal A can be given read-only access to resource C while principal B is given full access to the same resource. Usually such access control mechanisms are hierarchical in nature.

Access Control List

Discretionary access-control mechanisms typically maintain a list of principals and their associated permissions in an access control list (ACL). ACLs can be stored separately and consulted during the authentication or authorization process. Principals can also be members of groups and have group access permissions applied. Role-based access control is applied when a usage role has to be applied across several principals. If there are multiple system users, a user group is created and a common ACL applied. Once the ACL is applied to the group, all principals that belong to the group automatically inherit its permissions. In most cases it is still possible to override, overload, or otherwise specialize the permissions applied to individual principals. Applying access controls to security groups and principals works well in most cases.
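A group-aware ACL lookup like the one described above can be sketched in a few lines of Python; all the resource, principal, and group names here are hypothetical:

```python
from dataclasses import dataclass, field


@dataclass
class Acl:
    # permissions maps a principal or group name to a set of rights
    permissions: dict[str, set[str]] = field(default_factory=dict)
    # group membership: principal -> set of group names
    groups: dict[str, set[str]] = field(default_factory=dict)

    def grant(self, who: str, right: str) -> None:
        self.permissions.setdefault(who, set()).add(right)

    def effective(self, principal: str) -> set[str]:
        """Direct rights plus rights inherited from every group membership."""
        rights = set(self.permissions.get(principal, set()))
        for group in self.groups.get(principal, set()):
            rights |= self.permissions.get(group, set())
        return rights


acl = Acl()
acl.grant("staff", "read")       # group-level right shared by all members
acl.grant("alice", "write")      # direct right for one principal
acl.groups["alice"] = {"staff"}  # alice inherits the group's permissions

print(sorted(acl.effective("alice")))  # ['read', 'write']
```

The inheritance here is the key point from the text: granting once at the group level reaches every principal in the group, while per-principal grants layer on top.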

Classifications. Classification levels may be used to specify authorization levels. In this scheme the resource, the principal, and groups are each assigned a pre-defined authorization level, and comparing those levels determines the actual access rights. For example, if resource C is tagged as classified, resource D as unclassified, principal A as classified, and principal B as unclassified, then principal A can access both C and D while principal B can access only D. Such parallel hierarchies make the access logic easy to determine. In general, if a principal’s classification level is equal to or higher than that of the resource, the principal is given access to the resource.
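The comparison rule can be made concrete with a toy model; the level ordering and labels below are assumptions chosen to mirror the example in the text:

```python
# A principal may read a resource when its clearance level is at least
# the resource's level. Only two toy levels are modeled here.
LEVELS = {"unclassified": 0, "classified": 1}


def can_access(principal_level: str, resource_level: str) -> bool:
    """Access is granted when the principal's level dominates the resource's."""
    return LEVELS[principal_level] >= LEVELS[resource_level]


# Mirrors the example: A is classified, B unclassified,
# C a classified resource, D an unclassified one.
print(can_access("classified", "classified"))    # True:  A reads C
print(can_access("classified", "unclassified"))  # True:  A reads D
print(can_access("unclassified", "classified"))  # False: B cannot read C
```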

Firewalls. Inter-network communication in the enterprise is often protected by a firewall. A firewall is a mechanism by which access to particular Transmission Control Protocol/Internet Protocol (TCP/IP) ports on a network of computers is restricted based on the location of the incoming connection request. Firewalls often act as gateways that connect two or more networks. Rules can be applied to firewalls to block certain ports, protocols, and Internet protocol (IP) addresses from accessing the network. Proxy servers are sometimes installed inside corporate networks and can be used to bypass the firewall.

Trust domains. Domains can be defined and used to protect sensitive resources. This is accomplished by grouping all servers and processes that have the same access control policy into a domain. A trust domain can interact at the micro level with a level of trust defined by the ACL. IP addresses with specific ports and communications can also be included in the domain. Security policy domains are sometimes called realms.

Java technology. Java enforces stringent security standards in the Java Virtual Machine (JVM); however, when security domains are pre-defined, code can be executed from uniform resource locators (URLs) within the trust domains. Multiple domains can also be defined with trust established at a certain level, so that code executing in one domain can trust, and make useful calls to, code running in another domain; such a domain is called a trusted domain. Sub-domains can be created, and each sub-domain can have one or more parent domains. Partitioning domains into sub-domains makes it possible to assign more restrictive permissions at the sub-domain level, but never broader ones. Domains can also be federated; the federation of domains allows permissions to be assigned across domains and sub-domains.

Auditing. Authorization requests can be logged by servers, gateways, and firewalls. Audit logs help reconstruct the sequence of events in a particular thread of activity, and such investigations can uncover suspect authorization attempts against protected resources. A great deal of information can be written to the security log files. Typical information logged during a security audit includes the audit event type, the timestamp of the event, the identity of the principal requesting access, the identity of the target resource being requested, the permission being requested on the target resource, the location from which the target resource is being requested, and any protocol-specific information (Jaworski, Perrone & Chaganti, 2001). Due to the sensitive nature of security logs, access to them should be restricted to authorized principals.

Sunday, January 27, 2013

Protocols for the Security Stickler

Data communications channels are often insecure, subjecting messages transmitted over them to passive and active threats (Barkley, 1994). Internet protocols connect various networks, and data packets are transmitted over them. An entire protocol stack exists over which computers exchange messages. For example, Web browsers send Hyper-Text Markup Language (HTML) documents over the Hyper-Text Transfer Protocol (HTTP), which sits on top of the TCP/IP stack. Additional protocols are now in place that create secure channels for such communication: Secure Sockets Layer (SSL) sits between HTTP and TCP/IP, so secure Web-page transfers carry HTTP over SSL on the standard port 443 rather than the unsecured port 80 assigned to plain HTTP. Together this results in HTTPS (HTTP over SSL) communication. Secure Sockets Layer and Transport Layer Security (TLS) are security protocols primarily used for network transport of messages.
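The layering and port arrangement can be sketched with Python's standard `ssl` and `socket` modules; the `open_https` helper is a hypothetical illustration, not production code:

```python
import socket
import ssl

# Default ports referenced in the text: 80 for plain HTTP, 443 for HTTPS.
HTTP_PORT, HTTPS_PORT = 80, 443

# A client-side TLS context; certificate verification and hostname
# checking are enabled by default.
context = ssl.create_default_context()


def open_https(host: str, port: int = HTTPS_PORT,
               timeout: float = 5.0) -> ssl.SSLSocket:
    """Wrap a plain TCP connection to `host` in TLS, as HTTPS does:
    the application still speaks HTTP, but over an encrypted channel."""
    raw = socket.create_connection((host, port), timeout=timeout)
    return context.wrap_socket(raw, server_hostname=host)
```

The point of the sketch is the layering: the TCP socket is created first, then wrapped in TLS, and only then does HTTP flow over it.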

Secure Sockets Layer

The Secure Sockets Layer (SSL) protocol is a security protocol that provides communication privacy over the Internet by allowing client-server applications to communicate in a way that is designed to prevent eavesdropping, tampering, or message forgery (Freier, Karlton & Kocher, 1996). SSL is composed of a handshake protocol and a record protocol, which typically sit on top of a reliable transport protocol such as TCP. SSL evolved into its final version, 3.0, which in turn became the basis for the Transport Layer Security protocol.

Transport Layer Security

The primary goal of the Transport Layer Security (TLS) protocol is to provide privacy and data integrity between two communicating applications; it is used for encapsulation of various higher-level protocols (Dierks & Allen, 1999, p. 3). TLS is actually a combination of two layers: the TLS Record Protocol and the TLS Handshake Protocol. The TLS Record Protocol has two basic properties: connection privacy and reliability. The TLS Handshake Protocol has three basic properties: peer identity authentication, shared secret negotiation, and negotiation reliability.

One advantage of TLS is that it is independent of the application protocol (Dierks & Allen, 1999, p. 4). Higher-level protocols can be layered on top of it, which leaves decisions about TLS handshake initiation and authentication-certificate exchange to the judgment of the higher-level protocol designers. The primary goals of the TLS protocol, thus, are to provide cryptographic security, interoperability, and extensibility. These are fundamental requirements of enterprise security.

 

Sunday, January 20, 2013

Message Digests and Keys

A message digest is analogous to a handwritten signature in the real world. Digests are a convenient and useful way of authenticating messages.

Webopedia defines a message digest as:

The representation of text in the form of a single string of digits, created using a formula called a one-way hash function. Encrypting a message digest with a private key creates a digital signature, which is an electronic means of authentication. (p. 1)

A message in its entirety is taken as input and a small fingerprint is created; the message is then sent along with its unique fingerprint. When the recipient is able to verify the fingerprint of the document, it ensures that the message did not change during transmission. A message may be sent in plain text along with its message digest in the same transmission; the idea is that the recipient can verify, by examining the digest, that the plain text arrived unaltered. The most popular algorithm for message digests is MD5 (IrnisNet.com, n.d.). Created at the Massachusetts Institute of Technology, it was published to the public domain as Internet RFC 1321.

MD5

MD5, developed by Ronald L. Rivest, is an algorithm that takes as input a message of arbitrary length and produces as output a 128-bit "fingerprint" or "message digest" of the input (Abzug, 1991). While not mathematically proven, it is conjectured that it is not feasible to recreate a message from its digest. In other words, it is computationally infeasible to “produce any message having a given pre-specified target message digest” (Abzug, 1991).

MD5 is described in Request for Comments (RFC) 1321. Rivest (1992) summarized MD5 as:

The MD5 algorithm is an extension of the MD4 message-digest algorithm. MD5 is slightly slower than MD4, but is more "conservative" in design. MD5 was designed because it was felt that MD4 was perhaps being adopted for use more quickly than justified by the existing critical review; because MD4 was designed to be exceptionally fast, it is "at the edge" in terms of risking successful cryptanalytic attack. MD5 backs off a bit, giving up a little in speed for a much greater likelihood of ultimate security. (p.3)

Message Digest 5 is an enhancement over MD4; Rivest (1992) describes this version as more conservative than its predecessor and easier to codify compactly. The algorithm provides a fingerprint of a message of any length. Finding two plain-text messages that resolve to the exact same fingerprint is on the order of 2^64 operations, and reverse-engineering a plain-text message that matches a given fingerprint is on the order of 2^128 operations. Numbers this large make the attacks computationally infeasible on current hardware.
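Python's standard `hashlib` module exposes MD5 directly; the sketch below shows the fixed 128-bit output and checks the "abc" test vector from RFC 1321. (Note that practical collisions for MD5 have since been demonstrated, so it should not be chosen for new designs.)

```python
import hashlib

# MD5 maps input of arbitrary length to a fixed 128-bit fingerprint.
digest = hashlib.md5(b"abc").hexdigest()

print(digest)           # 900150983cd24fb0d6963f7d28e17f72 (RFC 1321 test vector)
print(len(digest) * 4)  # 128 -- each hex digit encodes 4 bits
```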

SHA-1

The Secure Hash Algorithm 1 (SHA-1) is an advanced algorithm adopted by the United States of America as a Federal Information Processing Standard. SHA-1, as explained in RFC 3174, is employed for computing a condensed representation of a message or a data file (Jones, 2001). The algorithm accepts a message of any length (theoretically less than 2^64 bits) and outputs a 160-bit message digest that is computationally unique to the given input. This signature can then be validated against a previously computed signature.

Demonstration. For example, if a user registers with the password “purdue1234”, applying the SHA-1 algorithm results in the 160-bit digest “8ad4d7e66116219c5407db13280de7b4c2121e23”. This digest can be saved in the database instead of the plain-text password. The next time the user signs on with the same plain-text password, it is converted to the same signature, which can then be compared to authenticate the user. If the user enters a different password, say “rohit1234”, SHA-1 digests it as “fb0f57cb70fbd8926f2912585854cbe4bcf83942”; this triggers a mismatch and authentication fails. The algorithm is guaranteed to generate the same 160-bit signature for a given plain text, and it is computationally infeasible to reverse the digest into the plain text. Therefore, even if the database is “hacked”, the passwords are not usable. This is one of the most common techniques employed in the industry for storing sensitive data that only needs to be verified, not reused.
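The registration-and-sign-on flow above can be sketched with Python's `hashlib`. A real deployment would add a per-user salt and a deliberately slow hash; plain SHA-1 is shown here only to mirror the demonstration:

```python
import hashlib


def digest_password(password: str) -> str:
    """Return the SHA-1 hex digest; only this, never the plain text, is stored."""
    return hashlib.sha1(password.encode("utf-8")).hexdigest()


stored = digest_password("purdue1234")  # saved in the database at registration

# Later sign-on: recompute the digest and compare.
print(digest_password("purdue1234") == stored)  # True  -> authenticated
print(digest_password("rohit1234") == stored)   # False -> rejected
```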

DSA

Digital Signature Algorithm (DSA) is an algorithm inherited from the National Security Agency (NSA) and published by the National Institute of Standards and Technology (NIST) in the Digital Signature Standard (DSS) as part of the United States government’s Capstone project (RSA Laboratories, n.d.). To gain a better understanding of DSA, the discrete logarithm problem needs to be explained. RSA Laboratories’ documentation explains that for a group element g, multiplying g by itself n times is written g^n; the discrete logarithm problem is then: given two elements g and h of a finite group G, find an integer x such that g^x = h. The discrete logarithm problem is a hard one; it is considered a harder one-way function than those underlying algorithms based on the factoring problem.
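The one-way asymmetry can be made concrete with Python's built-in modular exponentiation; the prime and exponent below are toy values chosen only for illustration:

```python
# Computing h = g^x mod p is fast (square-and-multiply), but recovering x
# from (g, h, p) is the discrete logarithm problem. Toy parameters only;
# real systems use groups of 2048 bits or more.
p, g = 2579, 2      # small prime modulus and group element
x = 765             # the secret exponent
h = pow(g, x, p)    # the "easy" direction

# The only obvious inverse is brute-force search over every exponent --
# feasible in this tiny group, computationally infeasible at real key sizes.
recovered = next(i for i in range(p) if pow(g, i, p) == h)
print(recovered == x)  # True in this toy group
```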

Implementations of the algorithm have emerged that are quick, with a big-O of O(n). The big-O notation is a theoretical measure of the execution of an algorithm, usually the time or memory needed, given the problem size n, which is usually the number of items (NIST, 1976). With DSA, signature generation is faster than signature verification, whereas with the RSA algorithm verification is much faster than generation of the signature itself (RSA Laboratories, n.d.). Initial criticism of the algorithm centered on its lack of flexibility compared with the RSA cryptosystem, its verification performance, adoption issues cited by hardware and software vendors that had standardized on RSA, and the discretionary selection of the algorithm by the NSA (RSA Laboratories, n.d.). DSA has since been incorporated into several specifications and implementations, and it can now be considered a good choice for adoption by the enterprise.

Secret Keys

Two general types of cryptosystems have evolved over the decades: secret-key cryptography and public-key cryptography. In secret-key cryptography, as the name suggests, the key is kept out of the public domain; only the recipient and the sender have knowledge of it. This is also known as symmetric-key cryptography. In a public-key cryptosystem, two keys play a role in ensuring security: the public key is widely published or can be requested, while the private key is kept secret by each party. This scheme requires a certificate authority so that tampering with public keys is prevented. The primary advantage of this scheme over the other is that no secure courier is needed to transfer a secret key; the main disadvantage is that broadcasting of encrypted messages is not possible.

Symmetric Keys

This scheme is characterized by the use of a single key that can both encrypt and decrypt the plain-text message. Because the encryption and decryption algorithms exist in the public domain, the scheme’s security rests entirely on knowledge of the key. If the key is known only to the parties in a secured communication, secrecy can be provided (Barkley, 1994). When symmetric-key cryptography is used for communications and the messages are intercepted by an attacker, it is computationally infeasible to derive the key or decrypt the cipher text even if the encryption algorithm is known. The cipher text can be decrypted only if the secret key is known, and because the secret key is known only by the message sender and the message receiver, the secrecy of the transmission can be guaranteed.
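A toy illustration of the single shared key, using an XOR keystream derived with `hashlib`. This construction is for teaching only; a real system would use a vetted cipher such as AES-GCM:

```python
import hashlib
from itertools import count


def keystream(key: bytes, length: int) -> bytes:
    """Expand the shared secret key into `length` pseudo-random bytes."""
    out = b""
    for counter in count():
        if len(out) >= length:
            return out[:length]
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()


def xor_cipher(key: bytes, data: bytes) -> bytes:
    """XOR is its own inverse, so the same function encrypts and decrypts."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))


key = b"shared-secret"                        # known only to sender and receiver
cipher = xor_cipher(key, b"attack at dawn")   # sender encrypts
print(xor_cipher(key, cipher))                # receiver decrypts with the same key
```

The symmetry is the point: one key, held by both ends, performs both transformations.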

MAC. While secrecy can be guaranteed, the integrity of the message cannot. To ensure message integrity, a cryptographic checksum called a Message Authentication Code (MAC) is appended to the message. A MAC is a type of message digest: it is smaller than the original message, it cannot be reverse-engineered, and colliding messages are hard to find. The MAC is computed by the message originator as a function of the message being transmitted and the secret key (Barkley, 1994).
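Python's standard `hmac` module implements exactly this keyed checksum; the key and message below are made-up examples:

```python
import hashlib
import hmac

key = b"shared-secret-key"
message = b"transfer 100 USD to account 42"

# The MAC is a function of both the message and the secret key.
tag = hmac.new(key, message, hashlib.sha256).hexdigest()


def verify(key: bytes, message: bytes, tag: str) -> bool:
    """Recompute the MAC and compare in constant time."""
    expected = hmac.new(key, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)


print(verify(key, message, tag))                            # True
print(verify(key, b"transfer 999 USD to account 42", tag))  # False: tampered
```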

Asymmetric Keys

Asymmetric-key cryptography differs in that each party holds a key pair: one key that is well known to everyone and another that is kept private. This scheme is also known as public-key cryptography. Each key defines a function that transforms text (Barkley, 1994). The private key is secret and known only to its owner; the public keys are meant to be distributed. The two keys form a pair, and either one can be deemed public and the other private. Each key generates a transformation function; because the public key is known, its transformation can be derived and made known also. In addition, the functions are inverses of each other: if one function encrypts a message, the other can decrypt it (Barkley, 1994). The transformation functions are used as follows. The sender requests the public key of the destination, uses it to transform the data to be transmitted, and then sends the encrypted data to the intended recipient. The transmission is encrypted and can be decrypted only by the other half of the key pair: the receiver’s private key. Upon receiving the encrypted message, the receiver uses the private key to decrypt it, after which the message can be consumed.

The advantage of such a scheme is that two users can communicate without having to share a common key. With symmetric-key cryptography a common secret key must be stored, which is not something that should be shared in the first place, and distribution of secret keys adds a layer of complexity to the security of the system. Public-key cryptography resolves this issue easily: because it is computationally infeasible to derive the private key from the public key, it is also infeasible to decrypt a message encrypted with the public key. Alongside the convenience, however, there is an efficiency problem: encrypting plain text can take a long time, and the cipher text can be longer than the plain-text message itself. Also, distribution of messages to multiple recipients is not possible because the private key is held by only one principal, so the scheme cannot be used for encrypted broadcasts. Applications of public-key cryptography are often seen in the enterprise: authentication, integrity, and non-repudiation.
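The key-pair mechanics can be made concrete with textbook RSA on tiny primes. The numbers below are the classic toy example; real keys use primes hundreds of digits long, plus padding such as OAEP:

```python
# Textbook RSA with toy primes -- for illustrating the public/private
# transformation pair only, never for real use.
p, q = 61, 53
n = p * q                # 3233, part of both keys
phi = (p - 1) * (q - 1)  # 3120
e = 17                   # public exponent, published freely
d = pow(e, -1, phi)      # private exponent: modular inverse of e (Python 3.8+)

message = 65                  # plaintext, encoded as an integer < n
cipher = pow(message, e, n)   # anyone can encrypt with the public key (e, n)
recovered = pow(cipher, d, n) # only the private-key holder can decrypt
print(recovered == message)   # True
```

Note the inverse relationship the text describes: encryption with one exponent is undone exactly by exponentiation with the other.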

Sunday, January 13, 2013

Cryptography–It’s the Key

Julius Caesar encrypted messages so that the messenger could not understand the cipher (Faqs.org, 2003). A “shift by 3” substitution was used, i.e., A was replaced by D, Z by C, and so on. Only the recipient, who knew the key (three, in this case), could decipher the message. A cipher system is a way of disguising messages such that only recipients with knowledge of the ‘key’ can decipher them. Cryptography is the art of using cipher/crypto systems; cryptanalysis is the art of deciphering an encrypted message without prior knowledge of the key, by means other than those intended.
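Caesar's shift-by-3 takes only a few lines; a Python sketch:

```python
def caesar(text: str, key: int) -> str:
    """Shift each letter `key` positions through the alphabet;
    decryption is the same function with the negated key."""
    shifted = []
    for ch in text:
        if ch.isalpha():
            base = ord("A") if ch.isupper() else ord("a")
            shifted.append(chr((ord(ch) - base + key) % 26 + base))
        else:
            shifted.append(ch)  # leave spaces and punctuation alone
    return "".join(shifted)


cipher = caesar("ATTACK AT DAWN", 3)
print(cipher)              # DWWDFN DW GDZQ
print(caesar(cipher, -3))  # ATTACK AT DAWN
```

With only 25 possible keys, the scheme falls instantly to the brute-force cryptanalysis described next.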

http://users.telenet.be/d.rijmenants/pics/codewheel.gif

A strong cryptosystem has a large key space, produces cipher text that appears random to all standard statistical tests, and resists all previously known attacks (Faqs.org, 2003). Several types of cryptography and standards exist today. Public Key Cryptography Standards (PKCS) is an important set of security standards; it defines binary formats that can be used for storing certificates. Public-key cryptography and shared-key cryptography can also use message digests, which are one-way hash functions.

Sunday, January 6, 2013

Authentication–Who Are You ?

Authentication

Authentication is the process of determining whether someone or something is, in fact, who or what it is declared to be (SearchSecurity.com, n.d.). Verifying an identity claim is more complex than it first appears. There are several authentication methodologies, security protocols, encryption schemes, and hashing algorithms, and there is no single “best” security solution; for every implementation, it is important to establish the best options available.

Authentication Types

Authentication has existed since ancient human civilization. In the enterprise environment, a well-defined authentication architecture is increasingly important. It is common for the user to enter authentication credentials. Several types of authentication methods exist today: entities can be authenticated based on secret knowledge (like a username and password combination), biometrics (like fingerprint scans), or digital certificates. In private and public computer networks (including the Internet), authentication is commonly done through the use of logon passwords (Faqs.org, 2003).

Knowledge-based authentication

The most common authentication type is knowledge-based. An identification key along with a secret pass code is required to access systems protected by such a scheme; the user-id and password challenge screen commonly seen in Web-based email systems is a typical example. Knowledge of the password is considered secret, and sufficient to let the user in.

Both ends of a session must have the secret password (and/or key) in order for authentication to take place. The password also needs to be transmitted from the principal’s location to the authenticator’s location (Jaworski, Perrone & Chaganti, 2001), which leads to an obvious exposure: the link between the locations needs to be secured so that snooping attempts are not possible or data deciphering is infeasible. One way of securing such systems is Kerberos, a password-based authentication system in which a secret symmetric key is used to encipher and decipher credentials.

Biometrics-based authentication

Authentication based on biometrics is still in its infancy. Unique attributes extracted from individuals are used for authentication; fingerprints, hand geometry, facial recognition, iris recognition, and dynamic signature verification are some of the more prominent biometric technologies. Biometrics by themselves are not fool-proof: there are several potential ways to attack such systems, and this risk presents additional concerns for privacy protection. While research on revocable biometric tokens is in progress, that technology has not yet been commercially implemented on a mass scale.

Certificate-based authentication

This technique has grown in popularity in recent years. What is a certificate? A certificate is simply data that identifies a principal. The important information contained in a certificate includes the public key of the principal, the validity dates of the certificate, and the digital signature applied by the certificate issuer (Jaworski, Perrone & Chaganti, 2001). The signer uses its private key to generate a cipher text, called a signature, from a block of plain text. This cipher can be decrypted only with the signer’s public key, which proves the signature was actually created by the signer, because the private key, as the name suggests, is secret.

This technology has significant advantages when encrypted text needs to be sent across otherwise unsecured connections. In a client-server architecture with both server-side and client-side certificates, each party can send encrypted information that the other side knows came from its counterpart, because only the corresponding public key can decrypt information the private key encrypted.

A well-known certificate authority (CA), also called a root, signs the server’s public key with its private key. This way no attacker can create a false public key and pretend to communicate with signed data under assumed keys. The distribution of root public keys is done in a secure fashion; browsers come pre-configured with these keys.

Several certificate implementations have evolved, most significantly the X.509 v3 standard. This standard allows several different algorithms to be used for creating digital signatures. An X.509 certificate contains information about the version of the certificate, the serial number, the signature algorithm and its parameters, the CA’s identity and signature, the dates of validity, and the principal’s identity and public key.

Sunday, December 30, 2012

Guaranteed Integrity of Messages

The ability to guarantee the integrity of a document and the authentication of the sender has been highly desirable since the beginning of human civilization. Even today, we are constantly challenged for authentication in the form of picture identification, personal hand signature and finger prints. Organizations need to ensure authentication of the individual and other corporations before they conduct business transactions with them.

http://www.gfi.com/blog/wp-content/uploads/2009/05/security-integrity-availability-confidentiality.jpg

When human contact is not possible, the challenge of authentication and consequently authorization increases. Encryption technologies, especially public-key cryptography provide a reliable way to digitally sign documents. In today’s digital economies and global networks digital signatures play a vital role in information security.

Sunday, December 23, 2012

Security–the most important Quality Attribute

While digital signatures and encryption are old technologies, their importance is renewed with the rapid growth of the Internet. Online business transactions have been growing at a rapid pace, and more and more money transactions occur electronically and over the Internet. Non-repudiation is important when personal contact is not possible, and digital signatures serve that purpose. Encryption ensures that information sent to the intended party can be read, unaltered, only by that party. Several technologies support encryption.

The enterprise security model consists of domains that protect resources from principals not permitted to access them or execute their functions. There is a clear distinction between authorizing a principal and authenticating a principal. When a person shows a driver’s license at a bar before getting a drink, the bartender looks at it and compares the photograph with the person presenting it; this is authentication. When the bartender checks the date of birth for legal drinking age, the requester has been authorized for the drink.

In the corporate environment, it is exceedingly important that the same form of authentication and authorization take place digitally. With new business channels open on the Internet, web applications deployed on the intranet for employees, and business-to-business (B2B) commerce channels created on the extranet, millions of dollars worth of transactions occur.

Business-critical information is passed on the wire between computers, and if exposed to the general public or the wrong hands it could be disastrous for the company in question. For every business that exists, there is a threat to that business. For e-business initiatives, the anonymity of the network, especially the Internet, brings new threats to information exchange. It is important that information is exchanged secretly and confidentially.

DSV and Custody Chaining

Dynamic signature verification (DSV) is the process by which an individual’s signature is authenticated against a known signature pattern. The dynamics of creating a signature are initially enrolled into the authenticating system and then used to compare future signature patterns. Several factors are taken into account, including speed, pressure, acceleration, velocity, and size ratios; these measurements are digitized and stored for later comparison.

Signatures have long been used to authenticate documents in the real world; before the technology wave, signatures, seals, and tamper-proof envelopes were used for secure and valid message exchange. With the onset of technology and digital document interchange, a growing need to authenticate digital documents has emerged.

Digital signatures emerged in the 1970s as a means of developing a cipher of fixed length from an input of theoretically unlimited length. The signature is expected to be collision-free and computationally infeasible to reverse into the original document. Both handwritten signatures and digital signatures have to comply with the basic requirements of authenticity, integrity, and non-repudiation (Elliott, Sickler, Kukula & Modi, n.d.).

In the information technology departments of corporations, documents are regularly exchanged between teams, companies, outsourced contract workers, internal consultants, and executive management. These documents are often confidential and contain company secrets, yet due to resource constraints they are often shared with consultants and contract workers.

It is therefore a viable solution to apply digital signatures to those documents using proper authentication protocols. One way this could be achieved is through dynamic signature verification: an interface that creates a unique digital signature from the physical, dynamic signature and applies it to the electronic document would be ideal.

A verifiable, trusted signature-creation technique is required for enterprise-wide document collaboration, and DSV is an ideal technology for this purpose. Sensitive documents can be signed using a DSV module that electronically signs the e-document. The document can then be shared with confidence that it has not been altered in transit, and the recipient will be able to trust it.

Sunday, December 16, 2012

Fingerprinting and Biometrics at Airports

I was unpleasantly surprised to see longer-than-usual lines at the international port of entry at O’Hare this February. My flight connected to O’Hare International in Chicago from Schiphol Airport in Amsterdam, Netherlands. It was a long flight, and the reason for the delay in processing passengers wasn’t apparent to me. A huge line of people with hand luggage zigzagged through what appeared to be a large hall, the end of the line fading in the distance. I was tired and wanted to get to my apartment, and I did not believe I would ever get there at this rate.

In a 2004 article published in New Scientist, Will Knight reports that the Department of Homeland Security (DHS) initiated the installation of a fingerprinting system, with a total of 115 airports having the biometric security equipment installed (Knight, 2004). A DHS officer commented to Knight that each finger scan takes just three seconds, and that pilot schemes produced just one error in every thousand checks (Knight, 2004).

http://eyetrackingupdate.com/wp-content/uploads/2011/02/digital-fingerprint-scan-300x300.jpg

The early-morning long lines brought back memories of the traditional waits outside the U.S. Consulate General in India, where visas are issued. It is said that neither heat, rain, nor storms get in the way of ticket seekers to paradise itself – the United States of America. Visa applicants are happy to divulge their fingerprints for an entry permit into the USA.

Knight (2004) cites Bruce Schneier, founder of the US security consultancy firm Counterpane, who believes that gathering more information through this method only collects more data, while the problem with security lies in a lack of intelligence, not the amount of data. Schneier believes there is enough data already available but not enough intelligence to process it. He goes on to explain that the terrorists who crashed airplanes into buildings on September 11, 2001 had valid passports and were not on previous terrorist watch-lists.

The U.S. immigration officer asked me to wet my left and right index fingers and place them on the fingerprint sensor, just as the visa officer had asked me to do in India. The visa had been issued at the end of the day – a very long day. There was a camera placed alongside the fingerprint sensor. No pictures were taken in either place. I placed my finger, and the immigration officer instructed me to wait. The computer system looked up my fingerprint and compared it with their databases in what seemed like an eternity. Finally, the immigration officer smiled back at me and let me proceed. I still had to go to baggage collection and customs; I feared more divulgence of impressions from body parts. Thankfully there were none. After ninety minutes of baby-steps through the immigration lines and multi-finger scans at the Chicago O’Hare airport, I was free to step into the “Land of the free, home of the brave”.
