Cryptography
===========

Introduction:-Cryptography is the science of using mathematics to encrypt and decrypt data. Cryptography enables you to store sensitive information or transmit it across insecure networks (like the Internet) so that it cannot be read by anyone except the intended recipient.

While cryptography is the science of securing data, cryptanalysis is the science of analyzing and breaking secure communication. Classical cryptanalysis involves an interesting combination of analytical reasoning, application of mathematical tools, pattern finding, patience, determination, and luck. Cryptanalysts are also called attackers.

Cryptology embraces both cryptography and cryptanalysis.

Cryptography can be strong or weak. Cryptographic strength is measured in the time and resources it would require to recover the plaintext. The result of strong cryptography is cipher text that is very difficult to decipher without possession of the appropriate decoding tool. How difficult? Given all of today’s computing power and available time—even a billion computers doing a billion checks a second—it is not possible to decipher the result of strong cryptography before the end of the universe.

When Julius Caesar sent messages to his generals, he didn't trust his messengers. So he replaced every A in his messages with a D, every B with an E, and so on through the alphabet. Only someone who knew the "shift by 3" rule could decipher his messages. And so we begin.

Encryption and decryption

Data that can be read and understood without any special measures is called plaintext or clear text. The method of disguising plaintext in such a way as to hide its substance is called encryption. Encrypting plaintext results in unreadable gibberish called ciphertext. You use encryption to ensure that information is hidden from anyone for whom it is not intended, even those who can see the encrypted data. The process of reverting ciphertext to its original plaintext is called decryption.

 

plaintext → encryption → ciphertext → decryption → plaintext

Encryption and decryption

 

How does cryptography work?

A cryptographic algorithm, or cipher, is a mathematical function used in the encryption and decryption process. A cryptographic algorithm works in combination with a key—a word, number, or phrase—to encrypt the plain text. The same plaintext encrypts to different cipher text with different keys. The security of encrypted data is entirely dependent on two things: the strength of the cryptographic algorithm and the secrecy of the key.

A cryptographic algorithm, plus all possible keys and all the protocols that make it work comprise a cryptosystem. 

 

 

Introduction:-In conventional cryptography, also called symmetric-key encryption, one key is used both for encryption and decryption.

plaintext → encryption → ciphertext → decryption → plaintext

 

Conventional encryption

 

Caesar’s Cipher-An extremely simple example of conventional cryptography is a substitution cipher. A substitution cipher substitutes one piece of information for another. This is most frequently done by offsetting letters of the alphabet. Offset the alphabet, and the key is the number of characters by which to offset it.

For example, if we encode the word “SECRET” using Caesar’s key value of 3, we offset the alphabet so that the 3rd letter down (D) begins the alphabet. So starting with

ABCDEFGHIJKLMNOPQRSTUVWXYZ

And sliding everything up by 3, you get

DEFGHIJKLMNOPQRSTUVWXYZABC

Where D=A, E=B, F=C, and so on.

Using this scheme, the plaintext, “SECRET” encrypts as “VHFUHW.” To allow someone else to read the ciphertext, you tell them that the key is 3.

Obviously, this is exceedingly weak cryptography by today’s standards, but it worked for Caesar, and it illustrates how conventional cryptography works. 
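
As a rough illustration, here is a minimal Python sketch of this shift cipher (the function name and parameters are just illustrative choices):

```python
# A minimal sketch of the Caesar shift described above (not secure).
def caesar(text, shift):
    out = []
    for ch in text.upper():
        if ch.isalpha():
            # Shift within the alphabet, wrapping Z around to A.
            out.append(chr((ord(ch) - ord("A") + shift) % 26 + ord("A")))
        else:
            out.append(ch)  # leave spaces and punctuation alone
    return "".join(out)

print(caesar("SECRET", 3))   # VHFUHW
print(caesar("VHFUHW", -3))  # SECRET
```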

Introduction:-Conventional encryption has benefits. It is very fast. It is especially useful for encrypting data that is not going anywhere. However, conventional encryption alone as a means for transmitting secure data can be quite expensive simply due to the difficulty of secure key distribution.

Recall a character from your favorite spy movie: the person with a locked briefcase handcuffed to his or her wrist. What is in the briefcase, anyway? It’s probably not the missile launch code/bio toxin formula/invasion plan itself.

It’s the key that will decrypt the secret data.

For a sender and recipient to communicate securely using conventional encryption, they must agree upon a key and keep it secret between themselves. If they are in different physical locations, they must trust a courier, the Bat Phone, or some other secure communication medium to prevent the disclosure of the secret key during transmission. Anyone who overhears or intercepts the key in transit can later read, modify, and forge all information encrypted or authenticated with that key. The persistent problem with conventional encryption is key distribution: how do you get the key to the recipient without someone intercepting it?

The problems of key distribution are solved by public key cryptography.

Public key cryptography is an asymmetric scheme that uses a pair of keys for encryption: a public key, which encrypts data, and a corresponding private, or secret, key for decryption. You publish your public key to the world while keeping your private key secret. Anyone with a copy of your public key can then encrypt information that only you can read, even people you have never met.

It is computationally infeasible to deduce the private key from the public key. Anyone who has a public key can encrypt information but cannot decrypt it. Only the person who has the corresponding private key can decrypt the information.

 

 

plaintext → encryption with public key → ciphertext → decryption with private key → plaintext

 

 

The primary benefit of public key cryptography is that it allows people who have no preexisting security arrangement to exchange messages securely. The need for sender and receiver to share secret keys via some secure channel is eliminated; all communications involve only public keys, and no private key is ever transmitted or shared. 

Introduction:-A key is a value that works with a cryptographic algorithm to produce a specific cipher text. Keys are basically very big numbers. Key size is measured in bits; the number representing a 1024-bit key is darn huge. In public key cryptography, the bigger the key, the more secure the cipher text.

However, public key size and conventional cryptography's secret key size are totally unrelated. A conventional 80-bit key has the equivalent strength of a 1024-bit public key. A conventional 128-bit key is equivalent to a 3000-bit public key. Again, the bigger the key, the more secure, but the algorithms used for each type of cryptography are very different, and thus comparison is like that of apples to oranges.

While the public and private keys are mathematically related, it’s very difficult to derive the private key given only the public key; however, deriving the private key is always possible given enough time and computing power. This makes it very important to pick keys of the right size; large enough to be secure, but small enough to be applied fairly quickly. Additionally, you need to consider who might be trying to read your files, how determined they are, how much time they have, and what their resources might be.

Larger keys will be cryptographically secure for a longer period of time. If what you want to encrypt needs to be hidden for many years, you might want to use a very large key. Of course, who knows how long it will take to determine your key using tomorrow’s faster, more efficient computers? There was a time when a 56-bit symmetric key was considered extremely safe.

Keys are stored in encrypted form. PGP stores the keys in two files on your hard disk; one for public keys and one for private keys. These files are called keyrings. As you use PGP, you will typically add the public keys of your recipients to your public keyring. Your private keys are stored on your private keyring. If you lose your private keyring, you will be unable to decrypt any information encrypted to keys on that ring. 

Introduction:-PGP is a hybrid cryptosystem. It combines some of the best features of both conventional and public key cryptography. When a user encrypts plaintext with PGP, PGP first compresses the plaintext. Data compression saves modem transmission time and disk space and, more importantly, strengthens cryptographic security. Most cryptanalysis techniques exploit patterns found in the plaintext to crack the cipher. Compression reduces these patterns in the plaintext, thereby greatly enhancing resistance to cryptanalysis. (Files that are too short to compress or which don’t compress well aren’t compressed.)

PGP then creates a session key, which is a one-time-only secret key. This key is a random number generated from the random movements of your mouse and the keystrokes you type. This session key works with a very secure, fast conventional encryption algorithm to encrypt the plaintext; the result is ciphertext. Once the data is encrypted, the session key is then encrypted to the recipient's public key. This public key-encrypted session key is transmitted along with the ciphertext to the recipient.

How does PGP encryption work?

Decryption works in the reverse. The recipient’s copy of PGP uses his or her private key to recover the temporary session key, which PGP then uses to decrypt the conventionally-encrypted cipher text.

How does PGP decryption work?

This combination of the two encryption methods joins the convenience of public key encryption with the speed of conventional encryption. Conventional encryption is about 1,000 times faster than public key encryption. Public key encryption in turn provides a solution to key distribution and data transmission issues. Used together, performance and key distribution are improved without any sacrifice in security.
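
The sketch below mimics this hybrid flow with the third-party Python cryptography package (an assumed dependency; this illustrates the idea, not PGP itself): a one-time Fernet session key encrypts the data, and the recipient's RSA public key wraps that session key.

```python
# A minimal hybrid-encryption sketch, assuming the "cryptography" package.
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

recipient_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Sender: encrypt the data with a one-time session key, then wrap the key.
session_key = Fernet.generate_key()
ciphertext = Fernet(session_key).encrypt(b"attack at dawn")
wrapped_key = recipient_key.public_key().encrypt(session_key, OAEP)

# Recipient: unwrap the session key with the private key, then decrypt.
recovered_key = recipient_key.decrypt(wrapped_key, OAEP)
assert Fernet(recovered_key).decrypt(ciphertext) == b"attack at dawn"
```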

This is perhaps the simplest approach to encryption/decryption using elliptic curves. The first task in this system is to encode the plaintext message m to be sent as an x-y point Pm. It is the point Pm that will be encrypted as a ciphertext and subsequently decrypted. But we cannot simply encode the message as the x or y coordinate of a point.

As with the key exchange system, an encryption/decryption system requires a point G and an elliptic group Eq(a, b) as parameters. Each user A selects a private key nA and generates a public key PA = nA x G.

To encrypt and send a message Pm to B, A chooses a random positive integer k and produces the ciphertext Cm consisting of the pair of points:

Cm = {kG, Pm + kPB}

A has used B's public key PB. To decrypt the ciphertext, B multiplies the first point in the pair by B's secret key and subtracts the result from the second point:

Pm + kPB − nB(kG) = Pm + k(nBG) − nB(kG) = Pm

A has masked the message Pm by adding kPB to it. Nobody but A knows the value of k, so even though PB is a public key, nobody can remove the mask kPB. However, A also includes a "clue," which is enough to remove the mask if one knows the private key nB. For an attacker to recover the message, the attacker would have to compute k given G and kG, which is assumed to be hard.
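
A toy sketch of this scheme in Python, over the tiny curve y^2 = x^3 + x + 1 mod 23; the curve, generator G, private key nB, random k, and the encoded message point are all illustrative assumptions, far too small for real use:

```python
# Toy elliptic-curve ElGamal over y^2 = x^3 + x + 1 mod 23 (insecure toy).
P, A = 23, 1          # field prime and curve coefficient a
INF = None            # the point at infinity

def add(p1, p2):
    # Affine point addition on the curve.
    if p1 is INF: return p2
    if p2 is INF: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return INF                                          # p2 = -p1
    if p1 == p2:
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P) % P    # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P) % P           # chord slope
    x3 = (lam * lam - x1 - x2) % P
    return (x3, (lam * (x1 - x3) - y1) % P)

def mul(k, point):
    # Double-and-add scalar multiplication k * point.
    result = INF
    while k:
        if k & 1:
            result = add(result, point)
        point = add(point, point)
        k >>= 1
    return result

def neg(p):
    return INF if p is INF else (p[0], (-p[1]) % P)

G = (0, 1)            # a point on the curve (1^2 = 0 + 0 + 1)
nB = 7                # B's private key (assumed)
PB = mul(nB, G)       # B's public key

Pm = mul(5, G)        # the message, already encoded as a curve point
k = 11                # A's random k
Cm = (mul(k, G), add(Pm, mul(k, PB)))          # Cm = {kG, Pm + kPB}

# B decrypts: (Pm + kPB) - nB(kG) = Pm
recovered = add(Cm[1], neg(mul(nB, Cm[0])))
assert recovered == Pm
```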


Introduction:-A major benefit of public key cryptography is that it provides a method for employing digital signatures. Digital signatures enable the recipient of information to verify the authenticity of the information’s origin, and also verify that the information is intact. Thus, public key digital signatures provide authentication and data integrity.

 A digital signature also provides non-repudiation, which means that it prevents the sender from claiming that he or she did not actually send the information. These features are every bit as fundamental to cryptography as privacy, if not more.

A digital signature serves the same purpose as a handwritten signature. However, a handwritten signature is easy to counterfeit. A digital signature is superior to a handwritten signature in that it is nearly impossible to counterfeit, plus it attests to the contents of the information as well as to the identity of the signer. Some people tend to use signatures more than they use encryption.

For example, you may not care if anyone knows that you just deposited $1000 in your account, but you do want to be darn sure it was the bank teller you were dealing with.

The basic manner in which digital signatures are created is as follows: instead of encrypting information using someone else's public key, you encrypt it with your private key. If the information can be decrypted with your public key, then it must have originated with you.
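
A minimal sign/verify sketch using the third-party Python cryptography package (an assumed dependency). Note that in practice one signs a hash of the message rather than literally encrypting the whole message with the private key:

```python
# A minimal RSA sign/verify sketch, assuming the "cryptography" package.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
message = b"deposit $1000 to account 42"

# Sign with the private key.
signature = key.sign(message, padding.PKCS1v15(), hashes.SHA256())

# Anyone with the public key can verify origin and integrity.
try:
    key.public_key().verify(signature, message,
                            padding.PKCS1v15(), hashes.SHA256())
    print("signature valid: the private-key holder sent this message")
except InvalidSignature:
    print("signature invalid: forged or altered")
```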

 


Introduction:-One issue with public key cryptosystems is that users must be constantly vigilant to ensure that they are encrypting to the correct person’s key. In an environment where it is safe to freely exchange keys via public servers, man-in-the-middle attacks are a potential threat.

In this type of attack, someone posts a phony key with the name and user ID of the user's intended recipient. Data encrypted to the bogus key, and intercepted by its true owner, is now in the wrong hands. In a public key environment, it is vital that you are assured that the public key to which you are encrypting data is in fact the public key of the intended recipient and not a forgery. You could simply encrypt only to those keys which have been physically handed to you. But suppose you need to exchange information with people you have never met; how can you tell that you have the correct key?

Digital certificates, or certs, simplify the task of establishing whether a public key truly belongs to the purported owner. A certificate is a form of credential. Examples might be your driver's license, your social security card, or your birth certificate. Each of these has some information on it identifying you and some authorization stating that someone else has confirmed your identity. Some certificates, such as your passport, are important enough confirmation of your identity that you would not want to lose them, lest someone use them to impersonate you.

A digital certificate is data that functions much like a physical certificate. A digital certificate is information included with a person’s public key that helps others verify that a key is genuine or valid. Digital certificates are used to thwart attempts to substitute one person’s key for another.

A digital certificate consists of three things:

• A public key.

• Certificate information. (“Identity” information about the user, such as name, user ID, and so on.)

• One or more digital signatures.

The purpose of the digital signature on a certificate is to state that the certificate information has been attested to by some other person or entity. The digital signature does not attest to the authenticity of the certificate as a whole; it vouches only that the signed identity information goes along with, or is bound to, the public key.

Thus, a certificate is basically a public key with one or two forms of ID attached, plus a hearty stamp of approval from some other trusted individual. 

Introduction:-Security architecture for OSI offers a systematic way of defining security requirements and characterizing the approaches to achieve these requirements. It was developed as an international standard.

Need for OSI Security Architecture:

1. To assess the security needs of an organization effectively and to choose various security products and policies.

2. The need for some systematic way of defining the requirements for security and characterizing the approaches to satisfy those requirements.

3. This is difficult enough in a centralized data-processing environment; with the use of local area and wide area networks, the problems are compounded.

The OSI Security Architecture:

Such a systematic approach is defined by ITU-T (the International Telecommunication Union - Telecommunication Standardization Sector).

ITU-T is a United Nations (UN) sponsored agency that develops standards, called Recommendations, relating to telecommunications and to Open Systems Interconnection (OSI). Recommendation X.800, Security Architecture for OSI, defines this systematic approach.

Benefits:

1. The OSI security architecture is useful to managers as a way of organizing the task of providing security.

2. Furthermore, because this architecture was developed as an international standard, computer and communications vendors have developed security features for their products and services that relate to this structured definition of services and mechanisms.

The OSI security architecture focuses on security attacks, mechanisms, and services. These can be defined briefly as follows:

  • Security Attack: Any action that compromises the security of information owned by an organization.
  • Security Mechanism: A process that is designed to detect, prevent, or recover from a security attack; a method used to protect your message from an unauthorized entity.
  • Security Service: A service that implements security policies and is implemented by security mechanisms.

Services:

Confidentiality: Ensures that the information in a computer system and transmitted information are accessible only for reading by authorized parties.

Authentication: Ensures that the origin of a message or electronic document is correctly identified, with an assurance that the identity is not false.

Integrity: Ensures that only authorized parties are able to modify computer system assets and transmitted information.

Non-repudiation: Requires that neither the sender nor the receiver of a message be able to deny the transmission.

Access control: Requires that access to information resources may be controlled by or for the target system.

Availability: Requires that computer system assets be available to authorized parties when needed.

 Introduction:-A security policy defines what people can and can't do with network components and resources.

Need for Network Security

In the past, hackers were highly skilled programmers who understood the details of computer communications and how to exploit vulnerabilities. Today almost anyone can become a hacker by downloading tools from the Internet. These complicated attack tools and generally open networks have generated an increased need for network security and dynamic security policies.

The easiest way to protect a network from an outside attack is to close it off completely from the outside world. A closed network provides connectivity only to trusted known parties and sites; a closed network does not allow a connection to public networks.

Because they have no Internet connectivity, networks designed in this way can be considered safe from Internet attacks. However, internal threats still exist.

Estimates suggest that 60 to 80 percent of network misuse originates inside the enterprise where the misuse takes place.

With the development of large open networks, security threats have increased significantly in the past 20 years. Hackers have discovered more network vulnerabilities, and applications that require little or no hacking knowledge can now be downloaded freely; tools intended for troubleshooting, maintaining, and optimizing networks can, in the wrong hands, be used maliciously and pose severe threats.

An attacker's motivation can range from gathering or stealing information, to creating a DoS, to simply the challenge of it.

Introduction:-A useful means of classifying security attacks is in terms of passive attacks and active attacks. A passive attack attempts to learn or make use of information from the system but does not affect system resources. An active attack attempts to alter system resources or affect their operation.

 


Active attacks

Active attacks involve some modification of the data stream or the creation of a false stream and can be subdivided into four categories: masquerade, replay, modification of messages, and denial of service.

A masquerade takes place when one entity pretends to be a different entity. A masquerade attack usually includes one of the other forms of active attack. For example, authentication sequences can be captured and replayed after a valid authentication sequence has taken place, thus enabling an authorized entity with few privileges to obtain extra privileges by impersonating an entity that has those privileges.

Modification of messages simply means that some portion of a legitimate message is altered, or that messages are delayed or reordered, to produce an unauthorized effect. For example, a message meaning "Allow John Smith to read confidential file accounts" is modified to mean "Allow Fred Brown to read confidential file accounts."

The denial of service prevents or inhibits the normal use or management of communications facilities. This attack may have a specific target; for example, an entity may suppress all messages directed to a particular destination (e.g., the security audit service). Another form of service denial is the disruption of an entire network, either by disabling the network or by overloading it with messages so as to degrade performance. 




Introduction:-

  • The goal of a denial of service attack is to deny legitimate users access to a particular resource.
  • An incident is considered an attack if a malicious user intentionally disrupts service to a computer or network resource.
  • Resource exhaustion (consume all bandwidth, disk space).

Types of attacks

• There are three general categories of attacks.

  1. Against users
  2. Against hosts
  3. Against networks

Network Based Denial of Service Attacks

• UDP bombing

• TCP SYN flooding

• Ping of death

• Smurf attack

 Most involve either resource exhaustion or corruption of the operating system runtime environment.

UDP bombing

• Two UDP services, echo (which echoes back any characters received) and chargen (which generates characters), were used in the past for network testing and are enabled by default on most systems.

• These services can be used to launch a DoS by connecting the chargen port to the echo port on the same or another machine and generating large amounts of network traffic.

TCP SYN Flooding

• Also referred to as the TCP “half-open” attack.

• To establish a legitimate TCP connection:

  1. The client sends a SYN packet to the server
  2. The server sends a SYN-ACK back to the client
  3. The client sends an ACK back to the server to complete the three-way handshake and establish the connection.

• The attack occurs when the attacker initiates a TCP connection to the server with a SYN (using a legitimate or spoofed source address).

• The server replies with a SYN-ACK.

• The client then doesn't send back an ACK, causing the server to allocate memory for the pending connection and wait.

(If the client spoofed the initial source address, it will never receive the SYN-ACK).

TCP SYN Flooding: Results

• The half-open connections buffer on the victim server will eventually fill

• The system will be unable to accept any new incoming connections until the buffer is emptied out.

• There is a timeout associated with a pending connection, so the half-open connections will eventually expire.

• The attacking system can continue requesting new connections faster than the victim system can expire the pending connections.

TCP SYN Flooding: Countermeasures

• Apply vendor’s patches.

 • Install Ingress/Egress router filters to prevent some IP spoofing locally

Ping of Death

• The IP specification allows for a maximum packet size of 65,535 octets.

• The ping of death attack sends oversized ICMP datagrams (encapsulated in IP packets) to the victim.

• Some systems, upon receiving the oversized packet, will crash, freeze, or reboot, resulting in denial of service.

• Countermeasures: Most systems are now immune, but apply vendor patches if needed.



Introduction:-

• Attacker logs into Master and signals slaves to launch an attack on a specific target address (victim).

• Slaves then respond by initiating TCP, UDP, ICMP or Smurf attack on victim.

Distributed Denial of Service Attacks (DDoS)

• Trin00 (WinTrinoo)

• Tribe Flood Network (TFN) (TFN2k)

• Shaft

• Stacheldraht

• Mstream

DDoS: Countermeasures

• A remote scanner that sends out packets and listens for replies; detects Trinoo, TFN, and Stacheldraht.

• The find_ddos tool: runs on the local system; detects Trinoo, TFN, and TFN2k.

• Bindview's Zombie Zapper: tells a DDoS slave to stop flooding traffic.



Introduction:-To protect the network from various security threats, the security mechanism and security services are required. First, let us examine some related terms.

Vulnerability: An aspect of the system that permits attackers to mount a successful attack; sometimes also called a "security hole". A weakness is a potential vulnerability whose risk is not clear; sometimes several weaknesses combine to yield a full-fledged vulnerability.

• Threat: a circumstance or scenario with the potential to exploit a vulnerability and cause harm to a system.

• Attack: A deliberate attempt to breach system security. Attacks are usually classified into two types:-

(1) A passive attack does not result in a change to the system; it attempts to break the system solely based upon observed data.

(2) An active attack, on the other hand, involves modifying, replaying, inserting, deleting, or blocking data.

• Security Mechanism: a mechanism that is designed to detect, prevent, or recover from a security attack.

• Security Service: It makes use of security mechanisms to counter security attacks.

• Authentication: the assurance that the communicating entity is the one that it claims to be. In particular,
-Peer Entity Authentication is used in connection-oriented communication to provide assurance on the identity of the entities connected.
-Data Origin Authentication is used in connectionless communication to provide assurance on the identity of the source of the received data block.

• Access Control: the prevention of unauthorized use of a resource.
• Data confidentiality: the protection of data from unauthorized disclosure. It has four specific services:
-Connection Confidentiality: the protection of all user data on a connection.
-Connectionless Confidentiality: the protection of all user data in a single data block.
-Selective-Field Confidentiality: the protection of selected fields within user data on a connection or in a single data block.
-Traffic-flow Confidentiality: the protection of the traffic flow pattern.

• Data integrity: the assurance that data received are the same as sent by an authorized entity. It has five specific services:

-Connection Integrity with Recovery: provides detection of and recovery from any integrity violation (modification, insertion, deletion, replay) against any user data within an entire data sequence in connection-oriented communication.
-Connection Integrity without Recovery: provides only detection of integrity violations in connection-oriented communication.
-Selective-Field Connection Integrity: provides for the integrity of selected fields within the user data of a data block transferred over a connection, and determines whether the selected fields have been modified, inserted, deleted, or replayed.
-Connectionless Integrity: provides for the integrity of a single data block, and detects data modification. A limited form of replay detection may also be provided.
-Selective-Field Connectionless Integrity: provides for the integrity of selected fields within a single data block, and determines whether the selected fields have been modified.

• Nonrepudiation: provides protection against denial by one of the entities involved in a communication of having participated in all or part of the communication. In particular,
-Nonrepudiation of origin proves that the message was sent by the specified party.
-Nonrepudiation of destination proves that the message was received by the specified party.


Introduction:-A message is to be transferred from one party to another across some sort of internet. The two parties, who are the principals in this transaction, must cooperate for the exchange to take place. A logical information channel is established by defining a route through the internet from source to destination and by the cooperative use of communication protocols (e.g., TCP/IP) by the two principals.

Model for Network Security

Security aspects come into play when it is necessary or desirable to protect the information transmission from an opponent who may present a threat to confidentiality, authenticity, and so on. All the techniques for providing security have two components:

  • A security-related transformation on the information to be sent. Examples include the encryption of the message, which scrambles the message so that it is unreadable by the opponent, and the addition of a code based on the contents of the message, which can be used to verify the identity of the sender.
  • Some secret information shared by the two principals and, it is hoped, unknown to the opponent. An example is an encryption key used in conjunction with the transformation to scramble the message before transmission and unscramble it on reception.

A trusted third party may be needed to achieve secure transmission. For example, a third party may be responsible for distributing the secret information to the two principals while keeping it from any opponent. Or a third party may be needed to arbitrate disputes between the two principals concerning the authenticity of a message transmission.

This general model shows that there are four basic tasks in designing a particular security service:

1.   Design an algorithm for performing the security-related transformation. The algorithm should be such that an opponent cannot defeat its purpose.

2.    Generate the secret information to be used with the algorithm.

3.    Develop methods for the distribution and sharing of the secret information.

4. Specify a protocol to be used by the two principals that makes use of the security algorithm and the secret information to achieve a particular security service.


Introduction:-Cryptography is probably the most important aspect of communications security and is becoming increasingly important as a basic building block for computer security.

The increased use of computer and communications systems by industry has increased the risk of theft of proprietary information. Although these threats may require a variety of countermeasures, encryption is a primary method of protecting valuable electronic information.

By far the most important automated tool for network and communications security is encryption. Two forms of encryption are in common use: conventional, or symmetric, encryption and public-key, or asymmetric, encryption. This part provides a survey of the basic principles of symmetric encryption, looks at widely used algorithms, and discusses applications of symmetric cryptography.

Symmetric Cipher Model

A symmetric encryption scheme has five ingredients:

  • Plaintext: This is the original message or data that is fed into the algorithm as input.
  • Encryption algorithm: The encryption algorithm performs various substitutions and transformations on the plaintext.
  • Secret key: The secret key is also input to the encryption algorithm. The key is a value independent of the plaintext and of the algorithm. The algorithm will produce a different output depending on the specific key being used at the time. The exact substitutions and transformations performed by the algorithm depend on the key.
  • Ciphertext: This is the scrambled message produced as output. It depends on the plaintext and the secret key. For a given message, two different keys will produce two different ciphertexts. The ciphertext is an apparently random stream of data and, as it stands, is unintelligible.

Simplified conventional encryption model

Decryption algorithm: This is essentially the encryption algorithm run in reverse. It takes the cipher text and the secret key and produces the original plaintext.

There are two requirements for secure use of conventional encryption:

  • We need a strong encryption algorithm. At a minimum, we would like the algorithm to be such that an opponent who knows the algorithm and has access to one or more cipher texts would be unable to decipher the ciphertext or figure out the key. This requirement is usually stated in a stronger form: The opponent should be unable to decrypt ciphertext or discover the key even if he or she is in possession of a number of cipher texts together with the plaintext that produced each ciphertext.
  • Sender and receiver must have obtained copies of the secret key in a secure fashion and must keep the key secure. If someone can discover the key and knows the algorithm, all communication using this key is readable.

We assume that it is impractical to decrypt a message on the basis of the ciphertext plus knowledge of the encryption/decryption algorithm. In other words, we do not need to keep the algorithm secret; we need to keep only the key secret. This feature of symmetric encryption is what makes it feasible for widespread use. 
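
To make this point concrete, the sketch below uses the Fernet recipe from the third-party Python cryptography package (an assumed dependency): the algorithm is public, and security rests entirely on keeping the key secret.

```python
# A minimal symmetric-encryption sketch, assuming the "cryptography" package.
from cryptography.fernet import Fernet

key = Fernet.generate_key()     # the shared secret key
f = Fernet(key)

ciphertext = f.encrypt(b"meet me after the toga party")
print(ciphertext)               # unintelligible without the key
print(f.decrypt(ciphertext))    # b'meet me after the toga party'
```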

Introduction:-The two basic building blocks of all encryption techniques are substitution and transposition. A substitution technique is one in which the letters of plaintext are replaced by other letters or by numbers or symbols. If the plaintext is viewed as a sequence of bits, then substitution involves replacing plaintext bit patterns with ciphertext bit patterns.

Caesar Cipher-The earliest known use of a substitution cipher, and the simplest, was by Julius Caesar. The Caesar cipher involves replacing each letter of the alphabet with the letter standing three places further down the alphabet.

For example,

Plain:  meet me after the toga party
Cipher: PHHW PH DIWHU WKH WRJD SDUWB
Note that the alphabet is wrapped around, so that the letter following Z is A.

1. Monoalphabetic Ciphers-With only 25 possible keys, the Caesar cipher is far from secure. A dramatic increase in the key space can be achieved by allowing an arbitrary substitution. Recall the assignment for the Caesar cipher:

Plain:  a b c d e f g h i j k l m n o p q r s t u v w x y z
Cipher: D E F G H I J K L M N O P Q R S T U V W X Y Z A B C

If, instead, the "cipher" line can be any permutation of the 26 alphabetic characters, then there are 26!, or greater than 4 × 10^26, possible keys. Such an approach is referred to as a monoalphabetic substitution cipher, because a single cipher alphabet (mapping from plain alphabet to cipher alphabet) is used per message.
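
A minimal sketch of such a cipher, where the key is a random permutation of the alphabet (one of the 26! possibilities):

```python
# A minimal monoalphabetic substitution sketch.
import random
import string

alphabet = string.ascii_lowercase
perm = list(alphabet)
random.shuffle(perm)                              # the secret key
encrypt = str.maketrans(alphabet, "".join(perm))
decrypt = str.maketrans("".join(perm), alphabet)  # inverse permutation

ciphertext = "meet me after the toga party".translate(encrypt)
print(ciphertext)
print(ciphertext.translate(decrypt))   # meet me after the toga party
```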

2. Playfair Cipher-The best-known multiple-letter encryption cipher is the Playfair, which treats digrams in the plaintext as single units and translates these units into ciphertext digrams.

The Playfair algorithm is based on the use of a 5 x 5 matrix of letters constructed using a keyword.

Example - In this case, the keyword is monarchy, which gives the matrix:

M O N A R
C H Y B D
E F G I/J K
L P Q S T
U V W X Z

The matrix is constructed by filling in the letters of the keyword (minus duplicates) from left to right and from top to bottom, and then filling in the remainder of the matrix with the remaining letters in alphabetic order. The letters I and J count as one letter. Plaintext is encrypted two letters at a time, according to the following rules:-

1. Repeating plaintext letters that are in the same pair are separated with a filler letter, such as x, so that balloon would be treated as ba lx lo on.

2. Two plaintext letters that fall in the same row of the matrix are each replaced by the letter to the right, with the first element of the row circularly following the last. For example, ar is encrypted as RM.

3. Two plaintext letters that fall in the same column are each replaced by the letter beneath, with the top element of the column circularly following the last. For example, mu is encrypted as CM.

4. Otherwise, each plaintext letter in a pair is replaced by the letter that lies in its own row and the column occupied by the other plaintext letter. Thus, hs becomes BP and ea becomes IM (or JM, as the encipherer wishes).
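
A minimal Python sketch of these rules, assuming the monarchy matrix above, I/J merged, and x as the filler letter:

```python
# A minimal Playfair sketch (keyword matrix, digram rules 1-4 above).
def build_matrix(keyword):
    seen = []
    for ch in (keyword + "abcdefghijklmnopqrstuvwxyz").lower():
        ch = "i" if ch == "j" else ch          # I and J share a cell
        if ch.isalpha() and ch not in seen:
            seen.append(ch)
    return [seen[r * 5:(r + 1) * 5] for r in range(5)]

def find(matrix, ch):
    for r, row in enumerate(matrix):
        if ch in row:
            return r, row.index(ch)

def playfair_encrypt(plaintext, keyword):
    m = build_matrix(keyword)
    text = [c for c in plaintext.lower().replace("j", "i") if c.isalpha()]
    digrams, i = [], 0
    while i < len(text):                       # rule 1: split into digrams,
        a = text[i]                            # padding repeats with 'x'
        if i + 1 < len(text) and text[i + 1] != a:
            b, i = text[i + 1], i + 2
        else:
            b, i = "x", i + 1
        digrams.append((a, b))
    out = []
    for a, b in digrams:
        ra, ca = find(m, a)
        rb, cb = find(m, b)
        if ra == rb:                           # rule 2: same row
            out.append(m[ra][(ca + 1) % 5] + m[rb][(cb + 1) % 5])
        elif ca == cb:                         # rule 3: same column
            out.append(m[(ra + 1) % 5][ca] + m[(rb + 1) % 5][cb])
        else:                                  # rule 4: rectangle
            out.append(m[ra][cb] + m[rb][ca])
    return "".join(out).upper()

print(playfair_encrypt("balloon", "monarchy"))  # IBSUPMNA
```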

3. Polyalphabetic Ciphers-Another way to improve on the simple monoalphabetic technique is to use different monoalphabetic substitutions as one proceeds through the plaintext message. The general name for this approach is polyalphabetic substitution cipher. All these techniques have the following features in common:

1.       A set of related monoalphabetic substitution rules is used.

2.       A key determines which particular rule is chosen for a given transformation.

The best known, and one of the simplest, such algorithm is referred to as the Vigenère cipher. In this scheme, the set of related monoalphabetic substitution rules consists of the 26 Caesar ciphers, with shifts of 0 through 25. Each cipher is denoted by a key letter, which is the ciphertext letter that substitutes for the plaintext letter a. Thus, a Caesar cipher with a shift of 3 is denoted by the key value d.

Each of the 26 ciphers is laid out horizontally, with the key letter for each cipher to its left. A normal alphabet for the plaintext runs across the top. The process of encryption is simple: Given a key letter x and a plaintext letter y, the ciphertext letter is at the intersection of the row labeled x and the column labeled y; for key letter d and plaintext letter s, for example, the ciphertext is V.

To encrypt a message, a key is needed that is as long as the message. Usually, the key is a repeating keyword. For example, if the keyword is deceptive, the message "we are discovered save yourself" is encrypted as follows:

Key:             deceptivedeceptivedeceptive

Plaintext:       wearediscoveredsaveyourself

Ciphertext:      ZICVTWQNGRZGVTWAVZHCQYGLMGJ

Decryption is equally simple. The key letter again identifies the row. The position of the ciphertext letter in that row determines the column, and the plaintext letter is at the top of that column. 
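
A minimal sketch reproducing this example; each shift is taken from the repeating key, and decryption simply negates the shifts:

```python
# A minimal Vigenère sketch over letters only.
def vigenere(text, key, decrypt=False):
    out = []
    for i, ch in enumerate(text.lower()):
        shift = ord(key[i % len(key)]) - ord("a")
        if decrypt:
            shift = -shift
        out.append(chr((ord(ch) - ord("a") + shift) % 26 + ord("a")))
    return "".join(out).upper()

ct = vigenere("wearediscoveredsaveyourself", "deceptive")
print(ct)   # ZICVTWQNGRZGVTWAVZHCQYGLMGJ
assert vigenere(ct, "deceptive", decrypt=True) == "WEAREDISCOVEREDSAVEYOURSELF"
```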


Introduction:-A very different kind of mapping is achieved by performing some sort of permutation on the plaintext letters. This technique is referred to as a transposition cipher.

The simplest such cipher is the rail fence technique, in which the plaintext is written down as a sequence of diagonals and then read off as a sequence of rows. For example, to encipher the message "meet me after the toga party" with a rail fence of depth 2, we write the following:

m e m a t r h t g p r y
 e t e f e t e o a a t

The encrypted message is

MEMATRHTGPRYETEFETEOAAT
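
For depth 2, the two rails are simply the even- and odd-indexed letters, as this minimal sketch shows (a general implementation would write a zigzag across more rails):

```python
# A minimal depth-2 rail fence sketch.
def rail_fence2(plaintext):
    text = plaintext.replace(" ", "").upper()
    return text[0::2] + text[1::2]    # top rail, then bottom rail

print(rail_fence2("meet me after the toga party"))
# MEMATRHTGPRYETEFETEOAAT
```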

This sort of thing would be trivial to cryptanalyze. A more complex scheme is to write the message in a rectangle, row by row, and read the message off, column by column, but to permute the order of the columns. The order of the columns then becomes the key to the algorithm. For example, with the key 4312567 and the message "attack postponed until two am" (padded with the filler letters xyz):

Key:        4 3 1 2 5 6 7
Plaintext:  a t t a c k p
            o s t p o n e
            d u n t i l t
            w o a m x y z
Ciphertext: TTNAAPTMTSUOAODWCOIXKNLYPETZ

A pure transposition cipher is easily recognized because it has the same letter frequencies as the original plaintext. For the type of columnar transposition just shown, cryptanalysis is fairly straightforward and involves laying out the ciphertext in a matrix and playing around with column positions. Digram and trigram frequency tables can be useful. The transposition cipher can be made significantly more secure by performing more than one stage of transposition. The result is a more complex permutation that is not easily reconstructed. Thus, if the foregoing message is reencrypted using the same algorithm, the result is NSCYAUOPTTWLTMDNAOIEPAXTTOKZ.

To visualize the result of this double transposition, designate the letters in the original plaintext message by the numbers designating their position. Thus, with 28 letters in the message, the original sequence of letters is

01 02 03 04 05 06 07 08 09 10 11 12 13 14

15 16 17 18 19 20 21 22 23 24 25 26 27 28

After the first transposition we have

03 10 17 24 04 11 18 25 02 09 16 23 01 08

15 22 05 12 19 26 06 13 20 27 07 14 21 28

This has a somewhat regular structure. But after the second transposition, we have

17 09 05 27 24 16 12 07 10 02 22 20 03 25

15 13 04 23 19 14 11 01 26 21 18 08 06 28
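
A minimal sketch of this columnar transposition, using the key 4312567 from the example above; applying the function twice reproduces the double transposition just traced:

```python
# A minimal columnar transposition sketch; digit d in the key means
# that column is read off d-th.
def columnar(text, key):
    cols = len(key)
    rows = -(-len(text) // cols)                 # ceiling division
    grid = [text[r * cols:(r + 1) * cols] for r in range(rows)]
    order = sorted(range(cols), key=lambda c: key[c])
    return "".join(grid[r][c] for c in order for r in range(rows)
                   if c < len(grid[r])).upper()

key = "4312567"
once = columnar("attackpostponeduntiltwoamxyz", key)
print(once)                   # TTNAAPTMTSUOAODWCOIXKNLYPETZ
print(columnar(once, key))    # NSCYAUOPTTWLTMDNAOIEPAXTTOKZ
```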


Introduction:-The rotor machine consists of a set of independently rotating cylinders through which electrical pulses can flow. Each cylinder has 26 input pins and 26 output pins, with internal wiring that connects each input pin to a unique output pin.

If we associate each input and output pin with a letter of the alphabet, then a single cylinder defines a monoalphabetic substitution. For example, if an operator depresses the key for the letter A, an electric signal is applied to the first pin of the first cylinder and flows through the internal connection to the twenty-fifth output pin.

Consider a machine with a single cylinder. After each input key is depressed, the cylinder rotates one position, so that the internal connections are shifted accordingly. Thus, a different monoalphabetic substitution cipher is defined. After 26 letters of plaintext, the cylinder would be back to the initial position. Thus, we have a polyalphabetic substitution algorithm with a period of 26.

A single-cylinder system is trivial and does not present a formidable cryptanalytic task. The power of the rotor machine is in the use of multiple cylinders, in which the output pins of one cylinder are connected to the input pins of the next. For example, the input from the operator to the first pin (plaintext letter a) might be routed through three cylinders to appear at the output of the second pin (ciphertext letter B).

With multiple cylinders, the cylinder closest to the operator input rotates one pin position with each keystroke. For every complete rotation of the inner cylinder, the middle cylinder rotates one pin position; and for every complete rotation of the middle cylinder, the outer cylinder rotates one pin position. The result is that there are 26 × 26 × 26 = 17,576 different substitution alphabets used before the system repeats.


Introduction:-A plaintext message may be hidden in one of two ways. The methods of steganography conceal the existence of the message, whereas the methods of cryptography render the message unintelligible to outsiders by various transformations of the text.

A simple form of steganography, but one that is time-consuming to construct, is one in which an arrangement of words or letters within an apparently innocuous text spells out the real message. For example, the sequence of first letters of each word of the overall message spells out the hidden message.

Another technique uses a subset of the words of the overall message to convey the hidden message.
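
A minimal sketch of the first-letter technique; the cover sentence here is an invented example:

```python
# First-letter steganography: the initial letters of the cover text
# spell out the hidden message.
cover = "Some experts can rarely explain this"

hidden = "".join(word[0] for word in cover.split()).lower()
print(hidden)   # secret
```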

 

Various other techniques have been used historically; some examples are the following:

  • Character marking: Selected letters of printed or typewritten text are overwritten in pencil. The marks are ordinarily not visible unless the paper is held at an angle to bright light.
  • Invisible ink: A number of substances can be used for writing but leave no visible trace until heat or some chemical is applied to the paper.
  • Pin punctures: Small pin punctures on selected letters are ordinarily not visible unless the paper is held up in front of a light.
  • Typewriter correction ribbon: Used between lines typed with a black ribbon, the results of typing with the correction tape are visible only under a strong light.

Steganography has a number of drawbacks when compared to encryption. It requires a lot of overhead to hide relatively few bits of information. To get the best of both, a message can be first encrypted and then hidden using steganography.


Introduction:-Most symmetric block encryption algorithms in current use are based on a structure referred to as a Feistel block cipher. For that reason, it is important to examine the design principles of the Feistel cipher.

A comparison of stream ciphers and block ciphers

A stream cipher is one that encrypts a digital data stream one bit or one byte at a time. Examples of classical stream ciphers are the autokeyed Vigenère cipher and the Vernam cipher.

A block cipher is one in which a block of plaintext is treated as a whole and used to produce a ciphertext block of equal length. Typically, a block size of 64 or 128 bits is used.

Feistel Cipher Structure

A block cipher operates on a plaintext block of n bits to produce a ciphertext block of n bits. There are 2^n possible different plaintext blocks and, for the encryption to be reversible (i.e., for decryption to be possible), each must produce a unique ciphertext block. Such a transformation is called reversible, or nonsingular. The following examples illustrate nonsingular and singular transformations for n = 2.

In the latter case, a ciphertext of 01 could have been produced by one of two plaintext blocks. If we limit ourselves to reversible mappings, the number of different transformations is 2^n!.

Feistel Cipher

Feistel proposed that we can approximate the ideal block cipher by utilizing the concept of a product cipher, which is the execution of two or more simple ciphers in sequence in such a way that the final result or product is cryptographically stronger than any of the component ciphers. The essence of the approach is to develop a block cipher with a key length of k bits and a block length of n bits, allowing a total of 2^k possible transformations, rather than the 2^n! transformations available with the ideal block cipher.

All rounds have the same structure. A substitution is performed on the left half of the data. This is done by applying a round function F to the right half of the data and then taking the exclusive-OR of the output of that function and the left half of the data. The round function has the same general structure for each round but is parameterized by the round sub key Ki. Following this substitution, a permutation is performed that consists of the interchange of the two halves of the data. This structure is a particular form of the substitution-permutation network (SPN) proposed by Shannon.
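
A toy Python sketch of this round structure; the round function F and the subkey values are arbitrary assumptions chosen only to show that decryption is encryption with the subkey order reversed, not a secure design:

```python
# A toy Feistel network (insecure; for structure only).
def F(half, subkey):
    # Arbitrary deterministic round function on 32-bit values.
    return (half * 31 + subkey) % (1 << 32) ^ (half >> 3)

def feistel(left, right, subkeys):
    for k in subkeys:
        # Swap halves; new right = old left XOR F(old right, subkey).
        left, right = right, left ^ F(right, k)
    return left, right

subkeys = [0x1F3A, 0x9C40, 0x77E1, 0x0BAD]   # assumed key schedule
L, R = 0x01234567, 0x89ABCDEF

ct = feistel(L, R, subkeys)
# Decrypt: feed the swapped halves back through with reversed subkeys.
pt = feistel(ct[1], ct[0], list(reversed(subkeys)))
assert (pt[1], pt[0]) == (L, R)
print("round trip ok")
```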

The exact realization of a Feistel network depends on the choice of the following parameters and design features:

  • Block size: Larger block sizes mean greater security (all other things being equal) but reduced encryption/decryption speed for a given algorithm. The greater security is achieved by greater diffusion. Traditionally, a block size of 64 bits was considered a reasonable tradeoff and was nearly universal in block cipher design; however, the newer AES uses a 128-bit block size.
  • Key size: Larger key size means greater security but may decrease encryption/decryption speed. The greater security is achieved by greater resistance to brute-force attacks and greater confusion. Key sizes of 64 bits or less are now widely considered inadequate, and 128 bits has become a common size.
  • Number of rounds: The essence of the Feistel cipher is that a single round offers inadequate security but that multiple rounds offer increasing security. A typical size is 16 rounds.
  • Subkey generation algorithm: Greater complexity in this algorithm should lead to greater difficulty of cryptanalysis.
  • Round function: Again, greater complexity generally means greater resistance to cryptanalysis.

There are two other considerations in the design of a Feistel cipher:

• Fast software encryption/decryption: In many cases, encryption is embedded in applications or utility functions in such a way as to preclude a hardware implementation. Accordingly, the speed of execution of the algorithm becomes a concern.

• Ease of analysis: Although we would like to make our algorithm as difficult as possible to cryptanalyze, there is great benefit in making the algorithm easy to analyze. That is, if the algorithm can be concisely and clearly explained, it is easier to analyze for cryptanalytic vulnerabilities and therefore to develop a higher level of assurance as to its strength. DES, for example, does not have an easily analyzed functionality.


Introduction:-The Data Encryption Standard (DES) is a symmetric-key block cipher published by the National Institute of Standards and Technology (NIST).

In 1973, NIST published a request for proposals for a national symmetric-key cryptosystem. A proposal from IBM, a modification of a project called Lucifer, was accepted as DES. DES was published in the Federal Register in March 1975 as a draft of the Federal Information Processing Standard (FIPS).

Encryption and decryption with DES

DES Structure

The encryption process is made of two permutations (P-boxes), which we call initial and final permutations, and sixteen Feistel rounds.

           

General structure of DES                             Initial and final permutation steps in DES                                Initial and final permutation tables

The initial and final permutations are straight P-boxes that are inverses of each other. They have no cryptographic significance in DES.

Rounds-DES uses 16 rounds. Each round of DES is a Feistel cipher.

DES Function-The heart of DES is the DES function. The DES function applies a 48-bit key to the rightmost 32 bits to produce a 32-bit output.

The differential cryptanalysis attack is complex. The rationale behind differential cryptanalysis is to observe the behavior of pairs of text blocks evolving along each round of the cipher, instead of observing the evolution of a single text block. Here, we provide a brief overview so that you can get the flavor of the attack.

We begin with a change in notation for DES. Consider the original plaintext block m to consist of two halves m0,m1. Each round of DES maps the right-hand input into the left-hand output and sets the right-hand output to be a function of the left-hand input and the sub key for this round. So, at each round, only one new 32-bit block is created.

The intermediate message halves are related as follows: mi+1 = mi-1 ⊕ f(mi, Ki), for i = 1, 2, ..., 16. In differential cryptanalysis, we start with two messages, m and m', with a known XOR difference Δm = m ⊕ m'.

Now, suppose that many pairs of inputs to f with the same difference yield the same output difference if the same subkey is used. To put this more precisely, let us say that X may cause Y with probability p if, for a fraction p of the pairs in which the input XOR is X, the output XOR equals Y. We want to suppose that there are a number of values of X that have a high probability of causing a particular output difference. Therefore, if we know Δmi-1 and Δmi with high probability, then we know Δmi+1 with high probability. Furthermore, if a number of such differences are determined, it is feasible to determine the subkey used in the function f.


The overall strategy of differential cryptanalysis is based on these considerations for a single round. The procedure is to begin with two plaintext messages m and m' with a given difference and trace through a probable pattern of differences after each round to yield a probable difference for the ciphertext. Actually, there are two probable patterns of differences for the two 32-bit halves: (Δm17 ‖ Δm16). Next, we submit m and m' for encryption to determine the actual difference under the unknown key and compare the result to the probable difference. If there is a match,

E(K, m) ⊕ E(K, m') = (Δm17 ‖ Δm16)

then we suspect that all the probable patterns at all the intermediate rounds are correct. With that assumption, we can make some deductions about the key bits. This procedure must be repeated many times to determine all the key bits. 
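The statement that "X may cause Y with probability p" can be tabulated directly for a single S-box. The sketch below is a toy example under the assumption of one 4-bit S-box (the sample values happen to be row 0 of DES S-box S1); it builds a difference distribution table and reports the most probable nonzero differential.

    from collections import Counter

    SBOX = [14, 4, 13, 1, 2, 15, 11, 8, 3, 10, 6, 12, 5, 9, 0, 7]  # sample 4-bit S-box

    # ddt[(dx, dy)] counts input pairs with XOR dx whose outputs differ by dy
    ddt = Counter()
    for x in range(16):
        for dx in range(16):
            ddt[(dx, SBOX[x] ^ SBOX[x ^ dx])] += 1

    # Best nonzero differential: "dx may cause dy with probability count/16"
    (dx, dy), count = max(((k, v) for k, v in ddt.items() if k[0] != 0),
                          key=lambda kv: kv[1])
    print(f"dx={dx:X} -> dy={dy:X} with probability {count}/16")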


Introduction:-Using mixers and swappers, we can create the cipher and reverse cipher, each having 16 rounds.

First Approach-To achieve this goal, one approach is to make the last round (round 16) different from the others: it has only a mixer and no swapper.

Key Generation :-The round-key generator creates sixteen 48-bit keys out of a 56-bit cipher key.

 

Parity-bit drop table

Number of bit shifts per round

Key-compression table

 


Introduction:-DES, as the first important block cipher, has gone through much scrutiny. Among the attempted attacks, three are of interest: brute-force, differential cryptanalysis, and linear cryptanalysis.

  • Brute-Force Attack
  • Differential Cryptanalysis
  • Linear Cryptanalysis

Brute-Force Attack

DES suffers from the weakness of a short cipher key. Combining this weakness with the key complement weakness, it is clear that DES can be broken using 2^55 encryptions.

Differential Cryptanalysis

It has been revealed that the designers of DES already knew about this type of attack and designed S-boxes and chose 16 as the number of rounds to make DES specifically resistant to this type of attack.

Linear Cryptanalysis

Linear cryptanalysis is newer than differential cryptanalysis. DES is more vulnerable to linear cryptanalysis than to differential cryptanalysis, because its S-boxes are not very resistant to linear cryptanalysis. It has been shown that DES can be broken using 2^43 pairs of known plaintexts. However, from the practical point of view, finding so many pairs is very unlikely.


Introduction:-Since its adoption as a federal standard, there have been lingering concerns about the level of security provided by DES. These concerns, by and large, fall into two areas: key size and the nature of the algorithm.

The Use of 56-Bit Keys

With a key length of 56 bits, there are 2^56 possible keys, which is approximately 7.2 × 10^16. Thus, on the face of it, a brute-force attack appears impractical. Assuming that, on average, half the key space has to be searched, a single machine performing one DES encryption per microsecond would take more than a thousand years to break the cipher.

However, the assumption of one encryption per microsecond is overly conservative. As far back as 1977, Diffie and Hellman postulated that the technology existed to build a parallel machine with 1 million encryption devices, each of which could perform one encryption per microsecond. This would bring the average search time down to about 10 hours. The authors estimated that the cost would be about $20 million in 1977 dollars.
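These estimates are easy to reproduce. A quick sketch, using the rates assumed in the text:

    KEYSPACE = 2 ** 56                      # number of DES keys, about 7.2 x 10^16
    SECONDS_PER_YEAR = 3600 * 24 * 365

    # One machine doing one encryption per microsecond (10^6 trials/second)
    years = (KEYSPACE / 2) / 10 ** 6 / SECONDS_PER_YEAR
    print(f"single machine: about {years:,.0f} years")          # roughly 1,100 years

    # Diffie and Hellman's hypothetical million-device parallel machine
    hours = (KEYSPACE / 2) / (10 ** 6 * 10 ** 6) / 3600
    print(f"million parallel devices: about {hours:.0f} hours")  # roughly 10 hours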

It is important to note that there is more to a key-search attack than simply running through all possible keys. Unless known plaintext is provided, the analyst must be able to recognize plaintext as plaintext. If the message is just plain text in English, then the result pops out easily, although the task of recognizing English would have to be automated. If the text message has been compressed before encryption, then recognition is more difficult. And if the message is some more general type of data, such as a numerical file, and this has been compressed, the problem becomes even more difficult to automate. Thus, to supplement the brute-force approach, some degree of knowledge about the expected plaintext is needed, and some means of automatically distinguishing plaintext from garble is also needed. The EFF approach addresses this issue as well and introduces some automated techniques that would be effective in many contexts.

Fortunately, there are a number of alternatives to DES, the most important of which are AES and triple DES.

The Nature of the DES Algorithm

Another concern is the possibility that cryptanalysis is possible by exploiting the characteristics of the DES algorithm. The focus of concern has been on the eight substitution tables, or S-boxes, that are used in each iteration. Because the design criteria for these boxes, and indeed for the entire algorithm, were not made public, there is a suspicion that the boxes were constructed in such a way that cryptanalysis is possible for an opponent who knows the weaknesses in the S-boxes. This assertion is tantalizing, and over the years a number of regularities and unexpected behaviors of the S-boxes have been discovered. Despite this, no one has so far succeeded in discovering the supposed fatal weaknesses in the S-boxes.

Timing Attacks

A timing attack exploits the fact that an encryption or decryption algorithm often takes slightly different amounts of time on different inputs. 

Linear Cryptanalysis

This attack is based on finding linear approximations to describe the transformations performed in DES. This method can find a DES key given 2^43 known plaintexts, as compared to 2^47 chosen plaintexts for differential cryptanalysis. Although this is a minor improvement, because it may be easier to acquire known plaintext rather than chosen plaintext, it still leaves linear cryptanalysis infeasible as an attack on DES. So far, little work has been done by other groups to validate the linear cryptanalytic approach.

We now give a brief summary of the principle on which linear cryptanalysis is based. For a cipher with n-bit plaintext and ciphertext blocks and an m-bit key, let the plaintext block be labeled P[1], ..., P[n], the ciphertext block C[1], ..., C[n], and the key K[1], ..., K[m]. Then define A[i, j, ..., k] = A[i] ⊕ A[j] ⊕ ... ⊕ A[k]. The objective of linear cryptanalysis is to find an effective linear equation of the form P[α1, α2, ..., αa] ⊕ C[β1, β2, ..., βb] = K[γ1, γ2, ..., γc] that holds with probability p ≠ 0.5; the further p is from 0.5, the more effective the equation.

Once a proposed relation is determined, the procedure is to compute the results of the left-hand side of the preceding equation for a large number of plaintext-ciphertext pairs. If the result is 0 more than half the time, assume K[γ1, γ2, ..., γc] = 0. If it is 1 most of the time, assume K[γ1, γ2, ..., γc] = 1. This gives us a linear equation on the key bits. Try to get more such relations so that we can solve for the key bits. Because we are dealing with linear equations, the problem can be approached one round of the cipher at a time, with the results combined.

Differential Cryptanalysis

The most publicized results for this approach have been those that have application to DES. Differential cryptanalysis is the first published attack that is capable of breaking DES in less than 2^55 complexity. The scheme can successfully cryptanalyze DES with an effort on the order of 2^47 encryptions, requiring 2^47 chosen plaintexts. Although 2^47 is certainly significantly less than 2^55, the need for the adversary to find 2^47 chosen plaintexts makes this attack of only theoretical interest.

Although differential cryptanalysis is a powerful tool, it does not do very well against DES. The reason, according to a member of the IBM team that designed DES [COPP94], is that differential cryptanalysis was known to the team as early as 1974. The need to strengthen DES against attacks using differential cryptanalysis played a large part in the design of the S-boxes and the permutation P. Differential cryptanalysis of an eight-round LUCIFER algorithm requires only 256 chosen plaintexts, whereas an attack on an eight-round version of DES requires 2^14 chosen plaintexts.


Introduction:-There are three critical aspects of block cipher design: the number of rounds, the design of the function F, and key scheduling.

DES Design Criteria-The criteria used in the design of DES focused on the design of the S-boxes and on the P function that takes the output of the S-boxes. The criteria for the S-boxes are as follows:

1. No output bit of any S-box should be too close to a linear function of the input bits.

2. Each row of an S-box should include all 16 possible output bit combinations.

3. If two inputs to an S-box differ in exactly one bit, the outputs must differ in at least two bits.

4. If two inputs to an S-box differ in the two middle bits exactly, the outputs must differ in at least two bits.

5. If two inputs to an S-box differ in their first two bits and are identical in their last two bits, the two outputs must not be the same.

6. For any nonzero 6-bit difference between inputs, no more than 8 of the 32 pairs of inputs exhibiting that difference may result in the same output difference.

7. This is a criterion similar to the previous one, but for the case of three S-boxes.

The criteria for the permutation P are as follows:

  1. The four output bits from each S-box at round i are distributed so that two of them affect "middle bits" of round (i + 1) and the other two affect end bits. The two middle bits of input to an S-box are not shared with adjacent S-boxes. The end bits are the two left-hand bits and the two right-hand bits, which are shared with adjacent S-boxes.
  2. The four output bits from each S-box affect six different S-boxes on the next round, and no two affect the same S-box.
  3. For two S-boxes j, k, if an output bit from Sj affects a middle bit of Sk on the next round, then an output bit from Sk cannot affect a middle bit of Sj. This implies that for j = k, an output bit from Sj must not affect a middle bit of Sj.

Number of Rounds-The cryptographic strength of a Feistel cipher derives from three aspects of the design: the number of rounds, the function F, and the key schedule algorithm.

The greater the number of rounds, the more difficult it is to perform cryptanalysis, even for a relatively weak F.

The heart of a Feistel block cipher is the function F. In DES, this function relies on the use of S-boxes.

The more nonlinear F, the more difficult any type of cryptanalysis will be. There are several measures of nonlinearity. In rough terms, the more difficult it is to approximate F by a set of linear equations, the more nonlinear F is.

S-Box Design

For larger S-boxes, such as 8 × 32, the following methods of selecting the S-box entries have been suggested:

  • Random: Use some pseudorandom number generation or some table of random digits to generate the entries in the S-boxes. This may lead to boxes with undesirable characteristics for small sizes (e.g., 6 × 4) but should be acceptable for large S-boxes (e.g., 8 × 32).
  • Random with testing: Choose S-box entries randomly, then test the results against various criteria, and throw away those that do not pass.
  • Human-made: This is a more or less manual approach with only simple mathematics to support it. It is apparently the technique used in the DES design. This approach is difficult to carry through for large S-boxes.
  • Math-made: Generate S-boxes according to mathematical principles. By using mathematical construction, S-boxes can be constructed that offer proven security against linear and differential cryptanalysis, together with good diffusion.

Key Schedule Algorithm

A final area of block cipher design, and one that has received less attention than S-box design, is the key schedule algorithm. With any Feistel block cipher, the key is used to generate one subkey for each round. In general, we would like to select subkeys to maximize the difficulty of deducing individual subkeys and the difficulty of working back to the main key. No general principles for this have yet been promulgated. 


Introduction:-Groups, rings, and fields are the fundamental elements of a branch of mathematics known as abstract algebra, or modern algebra. In abstract algebra, we can combine two elements of the set in several ways, to obtain a third element of the set. These operations are subject to specific rules, which define the nature of the set. By convention, the notation for the two principal classes of operations on set elements is usually the same as the notation for addition and multiplication on ordinary numbers. However, in abstract algebra, we are not limited to ordinary arithmetical operations.

Groups-A group G, sometimes denoted by {G, ·}, is a set of elements with a binary operation, denoted by ·, that associates to each ordered pair (a, b) of elements in G an element (a · b) in G, such that the following axioms are obeyed:

(A1) Closure: If a and b belong to G, then a · b is also in G.

(A2) Associative: a · (b · c) = (a · b) · c for all a, b, c in G.

(A3) Identity element: There is an element e in G such that a · e = e · a = a for all a in G.

(A4) Inverse element: For each a in G, there is an element a' in G such that a · a' = a' · a = e.

(A5) Commutative: a · b = b · a for all a, b in G. A group that satisfies this additional axiom is called abelian.

Rings-A ring R, sometimes denoted by {R, +, ×}, is a set of elements with two binary operations, called addition and multiplication, such that for all a, b, c in R the following axioms are obeyed:

(A1-A5) R is an abelian group with respect to addition; that is, R satisfies axioms A1 through A5. For the case of an additive group, we denote the identity element as 0 and the inverse of a as −a.

(M1) Closure under multiplication:

If a and b belong to R, then ab is also in R.

(M2) Associativity of multiplication:

a(bc) = (ab)c for all a, b, c in R.

(M3) Distributive laws:

a(b + c) = ab + ac for all a, b, c in R.
(a + b)c = ac + bc for all a, b, c in R.


A ring is a set in which we can do addition, subtraction [a − b = a + (−b)], and multiplication without leaving the set. A ring is said to be commutative if it satisfies the following additional condition:

(M4) Commutativity of multiplication:

ab = ba for all a, b in R.

(M5) Multiplicative identity:

                There is an element 1 in R such that a1 = 1a = a for all a in R.

 (M6) No zero divisors:

                If a, b in R and ab = 0, then either a = 0 or b = 0.

Fields-A field F, sometimes denoted by {F, +, ×}, is a set of elements with two binary operations, called addition and multiplication, such that for all a, b, c in F the following axioms are obeyed: (A1-M6) F is an integral domain; that is, F satisfies axioms A1 through A5 and M1 through M6.

 (M7) Multiplicative inverse:

For each a in F, except 0, there is an element a^(-1) in F such that a·a^(-1) = (a^(-1))·a = 1.
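The field axioms are easy to check computationally for a small prime field. A minimal sketch for GF(7), the field of order 7 used as an example below, computing multiplicative inverses via Fermat's little theorem:

    p = 7  # GF(7): the integers 0..6 with arithmetic modulo 7

    add = lambda a, b: (a + b) % p
    mul = lambda a, b: (a * b) % p
    inv = lambda a: pow(a, p - 2, p)   # Fermat: a^(p-2) mod p is the inverse of a

    # Axiom M7: every nonzero element has a multiplicative inverse
    for a in range(1, p):
        assert mul(a, inv(a)) == 1
    print({a: inv(a) for a in range(1, p)})   # e.g. 3 -> 5, since 3*5 = 15 = 1 mod 7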

Group, Ring, and Field 

 



Introduction:-One of the basic techniques of number theory is the Euclidean algorithm, which is a simple procedure for determining the greatest common divisor of two positive integers.

Greatest Common Divisor:-Recall that nonzero b is defined to be a divisor of a if a = mb for some m, where a, b, and m are integers. We will use the notation gcd(a, b) to mean the greatest common divisor of a and b. The positive integer c is said to be the greatest common divisor of a and b if

1. c is a divisor of a and of b;

2. any divisor of a and b is a divisor of c.

An equivalent definition is the following: gcd(a, b) = max[k, such that k|a and k|b]. Because we require that the greatest common divisor be positive, gcd(a, b) = gcd(−a, b) = gcd(a, −b) = gcd(−a, −b). In general, gcd(a, b) = gcd(|a|, |b|). Also, because all nonzero integers divide 0, we have gcd(a, 0) = |a|. We stated that two integers a and b are relatively prime if their only common positive integer factor is 1. This is equivalent to saying that a and b are relatively prime if gcd(a, b) = 1.

Finding the Greatest Common Divisor-The Euclidean algorithm is based on the following theorem: For any nonnegative integer a and any positive integer b, gcd(a, b) = gcd(b, a mod b).

To prove this, let d = gcd(a, b). Then, by the definition of gcd, d|a and d|b. For any positive integer b, a can be expressed in the form

a = kb + r, with k, r integers and r = a mod b.

Therefore, (a mod b) = a − kb for some integer k. But because d|b, it also divides kb. We also have d|a. Therefore, d|(a mod b). This shows that d is a common divisor of b and (a mod b). Conversely, if d is a common divisor of b and (a mod b), then d|kb and thus d|[kb + (a mod b)], which is equivalent to d|a. Thus, the set of common divisors of a and b is equal to the set of common divisors of b and (a mod b). Therefore, the gcd of one pair is the same as the gcd of the other pair, proving the theorem.

To determine the greatest common divisor:-

The Euclidean algorithm makes repeated use of the equation above to determine the greatest common divisor, as follows. The algorithm assumes a > b > 0. It is acceptable to restrict the algorithm to positive integers because gcd(a, b) = gcd(|a|, |b|).

The algorithm has the following progression, where each step replaces the pair by the divisor and the remainder; the last nonzero remainder is the gcd:

a = q1·b + r1,    0 ≤ r1 < b
b = q2·r1 + r2,   0 ≤ r2 < r1
r1 = q3·r2 + r3,  0 ≤ r3 < r2
...
rn−2 = qn·rn−1 + rn,  rn = 0,  so gcd(a, b) = rn−1
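A minimal implementation of this progression in Python; the input values are illustrative:

    def gcd(a, b):
        # Euclidean algorithm: repeatedly replace (a, b) with (b, a mod b)
        while b:
            a, b = b, a % b
        return a

    print(gcd(1970, 1066))   # 2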

Introduction:-A field is a set that obeys all of the axioms listed earlier; we gave some examples of infinite fields. Finite fields play a crucial role in many cryptographic algorithms. The order of a finite field (the number of elements in the field) must be a power of a prime, p^n, where n is a positive integer. A prime number is an integer whose only positive integer factors are itself and 1. That is, the only positive integers that are divisors of p are p and 1.

The finite field of order p^n is generally written GF(p^n); GF stands for Galois field, in honor of the mathematician who first studied finite fields. Two special cases are of interest for our purposes. For n = 1, we have the finite field GF(p); this finite field has a different structure than that for finite fields with n > 1.

Finite Fields of Order p-For a given prime p, the finite field of order p, GF(p), is defined as the set Zp of integers {0, 1, ..., p − 1}, together with the arithmetic operations modulo p. The set Zn of integers {0, 1, ..., n − 1}, together with the arithmetic operations modulo n, is a commutative ring. Any integer in Zn has a multiplicative inverse if and only if that integer is relatively prime to n. If n is prime, then all of the nonzero integers in Zn are relatively prime to n, and therefore there exists a multiplicative inverse for all of the nonzero integers in Zn. Thus, we can add the following properties to Zp:

Because w is relatively prime to p, if we multiply all the elements of Zp by w, the resulting residues are all of the elements of Zp permuted. Thus, exactly one of the residues has the value 1. Therefore, there is some integer in Zp that, when multiplied by w, yields the residue 1. That integer is the multiplicative inverse of w, designated w^(-1). Therefore, Zp is in fact a finite field.

 

 

Multiplying both sides of the equation (a × b) ≡ (a × c) (mod p) by the multiplicative inverse of a, we have b ≡ c (mod p).

The table shows a field of order 7 using modular arithmetic modulo 7. As can be seen, it satisfies all of the properties required of a field.

Finding the Multiplicative Inverse in GF(p)-It is easy to find the multiplicative inverse of an element in GF(p) for small values of p: we can simply construct a multiplication table, and the desired result can be read directly. However, for large values of p, this approach is not practical.
If gcd(m, b) = 1, then b has a multiplicative inverse modulo m. That is, for a positive integer b < m, there exists a b^(-1) < m such that b·b^(-1) = 1 mod m. The extended Euclidean algorithm computes this inverse.

 Throughout the computation, the following relationships hold:

To see that this algorithm correctly returns gcd(m, b), note that if we equate A and B in the Euclidean algorithm with A3 and B3 in the extended Euclidean algorithm, then the treatment of the two variables is identical.
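A sketch of the extended Euclidean algorithm and its use to compute a multiplicative inverse; the recursion maintains the relationship m·x + b·y = gcd(m, b):

    def egcd(m, b):
        # Returns (g, x, y) with g = gcd(m, b) and m*x + b*y = g
        if b == 0:
            return m, 1, 0
        g, x, y = egcd(b, m % b)
        return g, y, x - (m // b) * y

    def modinv(b, m):
        # Multiplicative inverse of b modulo m, when gcd(m, b) = 1
        g, _, y = egcd(m, b)
        if g != 1:
            raise ValueError("no inverse exists")
        return y % m

    print(modinv(550, 1759))   # 355, since 550 * 355 mod 1759 = 1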

 

 

Introduction:-The Rijndael proposal for AES defined a cipher in which the block length and the key length can be independently specified to be 128, 192, or 256 bits. The AES specification uses the same three key size alternatives but limits the block length to 128 bits. A number of AES parameters depend on the key length; here we assume a key length of 128 bits, which is likely to be the one most commonly implemented.

Rijndael was designed to have the following characteristics:

  • Resistance against all known attacks
  • Speed and code compactness on a wide range of platforms
  • Design simplicity

Figure shows the overall structure of AES. The input to the encryption and decryption algorithms is a single 128-bit block. In FIPS PUB 197, this block is depicted as a square matrix of bytes. This block is copied into the State array, which is modified at each stage of encryption or decryption. After the final stage, State is copied to an output matrix. Similarly, the 128-bit key is depicted as a square matrix of bytes. This key is then expanded into an array of key schedule words; each word is four bytes and the total key schedule is 44 words for the 128-bit key. Note that the ordering of bytes within a matrix is by column. So, for example, the first four bytes of a 128-bit plaintext input to the encryption cipher occupy the first column of the in matrix, the second four bytes occupy the second column, and so on. Similarly, the first four bytes of the expanded key, which form a word, occupy the first column of the w matrix.
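The column-major ordering described above can be captured in a few lines. A minimal sketch (the helper name bytes_to_state is ours, not from the standard):

    def bytes_to_state(block):
        # 16 bytes fill the 4 x 4 State column by column: state[r][c] = block[r + 4*c]
        return [[block[r + 4 * c] for c in range(4)] for r in range(4)]

    block = list(range(16))            # bytes 0..15 of a 128-bit block
    state = bytes_to_state(block)
    print([row[0] for row in state])   # first column = first four input bytes: [0, 1, 2, 3]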

                                                  Structure of AES

                                                         Data structure of AES

 

Introduction:-The forward substitute byte transformation, called SubBytes, is a simple table lookup. AES defines a 16 × 16 matrix of byte values, called an S-box, that contains a permutation of all 256 possible 8-bit values. Each individual byte of State is mapped into a new byte in the following way: the leftmost 4 bits of the byte are used as a row value and the rightmost 4 bits are used as a column value. These row and column values serve as indexes into the S-box to select a unique 8-bit output value. For example, the hexadecimal value {95} references row 9, column 5 of the S-box, which contains the value {2A}. Accordingly, the value {95} is mapped into the value {2A}.

AES S-Boxes

AES Byte-Level Operations

The S-box is constructed in the following fashion:

1. Initialize the S-box with the byte values in ascending sequence row by row. The first row contains {00}, {01}, {02} ... {0F}; the second row contains {10}, {11}, etc.; and so on. Thus, the value of the byte at row x, column y is {xy}.

2. Map each byte in the S-box to its multiplicative inverse in the finite field GF(2^8); the value {00} is mapped to itself.

3. Consider that each byte in the S-box consists of 8 bits labeled (b7, b6, b5, b4, b3, b2, b1, b0). Apply the following transformation to each bit of each byte in the S-box:

Equation 1:

bi' = bi ⊕ b(i+4) mod 8 ⊕ b(i+5) mod 8 ⊕ b(i+6) mod 8 ⊕ b(i+7) mod 8 ⊕ ci

where ci is the ith bit of byte c with the value {63}; that is, (c7c6c5c4c3c2c1c0) = (01100011). The prime (') indicates that the variable is to be updated by the value on the right. The AES standard depicts this transformation in matrix form as follows:

Equation 2:

 

In ordinary matrix multiplication, each element in the product matrix is the sum of products of the elements of one row and one column. In this case, each element in the product matrix is the bitwise XOR of products of elements of one row and one column.
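Steps 2 and 3 of the construction can be reproduced directly. A sketch that computes the multiplicative inverse in GF(2^8), modulo the AES polynomial x^8 + x^4 + x^3 + x + 1, and then applies the affine transformation of Equation 1 in its byte-rotation form; it reproduces the {95} to {2A} example given earlier:

    def gf_mul(a, b):
        # Multiplication in GF(2^8) modulo x^8 + x^4 + x^3 + x + 1 (0x11B)
        p = 0
        for _ in range(8):
            if b & 1:
                p ^= a
            carry = a & 0x80
            a = (a << 1) & 0xFF
            if carry:
                a ^= 0x1B
            b >>= 1
        return p

    def gf_inv(a):
        # a^254 = a^(-1) in GF(2^8); {00} maps to itself by convention
        r = 1
        for _ in range(254):
            r = gf_mul(r, a)
        return r if a else 0

    def rotl8(x, n):
        return ((x << n) | (x >> (8 - n))) & 0xFF

    def sbox(x):
        # Affine transformation of Equation 1, with c = {63}
        v = gf_inv(x)
        return v ^ rotl8(v, 1) ^ rotl8(v, 2) ^ rotl8(v, 3) ^ rotl8(v, 4) ^ 0x63

    assert sbox(0x95) == 0x2A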

The inverse substitute byte transformation, called InvSubBytes, makes use of the inverse S-box. The input {2A} produces the output {95}, and the input {95} to the S-box produces {2A}. The inverse S-box is constructed by applying the inverse of the affine transformation followed by taking the multiplicative inverse in GF(2^8). The inverse transformation is:

bi' = b(i+2) mod 8 ⊕ b(i+5) mod 8 ⊕ b(i+7) mod 8 ⊕ di

where byte d = {05}, or 00000101. We can depict this transformation as follows:

To see that InvSubBytes is the inverse of SubBytes, label the matrices in SubBytes and InvSubBytes as X and Y, respectively, and the vector versions of constants c and d as C and D, respectively.

For some 8-bit vector B, SubBytes becomes B' = XB ⊕ C. We need to show that Y(XB ⊕ C) ⊕ D = B.

It can be shown that YX equals the identity matrix and that YC = D, so that YC ⊕ D equals the null vector. Therefore, Y(XB ⊕ C) ⊕ D = YXB ⊕ YC ⊕ D = B, as required.

Introduction:-The underlying encryption algorithm in triple DES (that is, DES itself) has been subjected to more scrutiny than any other encryption algorithm over a longer period of time, and no effective cryptanalytic attack based on the algorithm rather than brute force has been found. Accordingly, there is a high level of confidence that 3DES is very resistant to cryptanalysis. If security were the only consideration, then 3DES would be an appropriate choice for a standardized encryption algorithm for decades to come.

The principal drawback of 3DES is that the algorithm is relatively sluggish in software. The original DES was designed for mid-1970s hardware implementation and does not produce efficient software code. 3DES, which has three times as many rounds as DES, is correspondingly slower. A secondary drawback is that both DES and 3DES use a 64-bit block size. For reasons of both efficiency and security, a larger block size is desirable.

Because of these drawbacks, 3DES is not a reasonable candidate for long-term use. As a replacement, NIST in 1997 issued a call for proposals for a new Advanced Encryption Standard (AES), which should have security strength equal to or better than 3DES and significantly improved efficiency. In a first round of evaluation, 15 proposed algorithms were accepted. A second round narrowed the field to 5 algorithms. NIST completed its evaluation process and published a final standard (FIPS PUB 197) in November of 2001. NIST selected Rijndael as the proposed AES algorithm. The two researchers who developed and submitted Rijndael for the AES are both cryptographers from Belgium: Dr. Joan Daemen and Dr. Vincent Rijmen.

Ultimately, AES is intended to replace 3DES, but this process will take a number of years. NIST anticipates that 3DES will remain an approved algorithm (for U.S. government use) for the foreseeable future.

AES Evaluation-It is worth examining the criteria used by NIST to evaluate potential candidates. These criteria span the range of concerns for the practical application of modern symmetric block ciphers. In fact, two sets of criteria evolved. When NIST issued its original request for candidate algorithm nominations in 1997, the three categories of criteria were as follows:

  • Security: This refers to the effort required to cryptanalyze an algorithm. The emphasis in the evaluation was on the practicality of the attack. Because the minimum key size for AES is 128 bits, brute-force attacks with current and projected technology were considered impractical. Therefore, the emphasis, with respect to this point, is cryptanalysis other than a brute-force attack.
  • Cost: NIST intends AES to be practical in a wide range of applications. Accordingly, AES must have high computational efficiency, so as to be usable in high-speed applications, such as broadband links.

  • Algorithm and implementation characteristics: This category includes a variety of considerations, including flexibility; suitability for a variety of hardware and software implementations; and simplicity, which will make an analysis of security more straightforward.

Introduction:-The forward shift row transformation, called ShiftRows, works as follows. The first row of State is not altered. For the second row, a 1-byte circular left shift is performed. For the third row, a 2-byte circular left shift is performed. For the fourth row, a 3-byte circular left shift is performed. The following is an example of ShiftRows:

                                     AES Row and Column Operations

The inverse shift row transformation, called InvShiftRows, performs the circular shifts in the opposite direction for each of the last three rows, with a one-byte circular right shift for the second row, and so on.
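Both directions are one-liners on a row-major 4 × 4 state. A minimal sketch:

    def shift_rows(state):
        # Row r is rotated left by r bytes (row 0 is unchanged)
        return [row[r:] + row[:r] for r, row in enumerate(state)]

    def inv_shift_rows(state):
        # Row r is rotated right by r bytes
        return [row[-r:] + row[:-r] if r else row for r, row in enumerate(state)]

    state = [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11], [12, 13, 14, 15]]
    assert inv_shift_rows(shift_rows(state)) == state
    print(shift_rows(state)[1])   # second row shifted left by one: [5, 6, 7, 4]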

Rationale-The shift row transformation is more substantial than it may first appear. This is because the State, as well as the cipher input and output, is treated as an array of four 4-byte columns. Thus, on encryption, the first 4 bytes of the plaintext are copied to the first column of State, and so on. Further, as will be seen, the round key is applied to State column by column. Thus, a row shift moves an individual byte from one column to another, which is a linear distance of a multiple of 4 bytes. The transformation ensures that the 4 bytes of one column are spread out to four different columns.

MixColumns Transformation

Forward and Inverse Transformations-The forward mix column transformation, called MixColumns, operates on each column individually. Each byte of a column is mapped into a new value that is a function of all four bytes in that column. The transformation can be defined by the following matrix multiplication on State:

Each element in the product matrix is the sum of products of elements of one row and one column. In this case, the individual additions and multiplications are performed in GF(2^8).

The MixColumns transformation on a single column j (0 ≤ j ≤ 3) of State can be expressed as (where · denotes multiplication in GF(2^8)):

s'0j = (2 · s0j) ⊕ (3 · s1j) ⊕ s2j ⊕ s3j
s'1j = s0j ⊕ (2 · s1j) ⊕ (3 · s2j) ⊕ s3j
s'2j = s0j ⊕ s1j ⊕ (2 · s2j) ⊕ (3 · s3j)
s'3j = (3 · s0j) ⊕ s1j ⊕ s2j ⊕ (2 · s3j)
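A sketch of MixColumns on a single column, with multiplication carried out in GF(2^8). The final check uses a commonly cited MixColumns test column:

    def xtime(a):
        # Multiply by {02} in GF(2^8), reducing by x^8 + x^4 + x^3 + x + 1
        a <<= 1
        return (a ^ 0x1B) & 0xFF if a & 0x100 else a

    def gmul(a, b):
        # General GF(2^8) multiply via repeated xtime
        p = 0
        while b:
            if b & 1:
                p ^= a
            a = xtime(a)
            b >>= 1
        return p

    M = [[2, 3, 1, 1],
         [1, 2, 3, 1],
         [1, 1, 2, 3],
         [3, 1, 1, 2]]

    def mix_column(col):
        # Each output byte is the XOR-sum of GF(2^8) products down one matrix row
        return [gmul(M[r][0], col[0]) ^ gmul(M[r][1], col[1]) ^
                gmul(M[r][2], col[2]) ^ gmul(M[r][3], col[3]) for r in range(4)]

    assert mix_column([0xDB, 0x13, 0x53, 0x45]) == [0x8E, 0x4D, 0xA1, 0xBC]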

 


Introduction:-The AES decryption cipher is not identical to the encryption cipher. That is, the sequence of transformations for decryption differs from that for encryption, although the form of the key schedules for encryption and decryption is the same. This has the disadvantage that two separate software or firmware modules are needed for applications that require both encryption and decryption. There is, however, an equivalent version of the decryption algorithm that has the same structure as the encryption algorithm. The equivalent version has the same sequence of transformations as the encryption algorithm (with transformations replaced by their inverses). To achieve this equivalence, a change in key schedule is needed.

Two separate changes are needed to bring the decryption structure in line with the encryption structure. An encryption round has the structure SubBytes, ShiftRows, MixColumns, AddRoundKey. The standard decryption round has the structure InvShiftRows, InvSubBytes, AddRoundKey, InvMixColumns. Thus, the first two stages of the decryption round need to be interchanged, and the second two stages of the decryption round need to be interchanged.





Interchanging InvShiftRows and InvSubBytes-InvShiftRows affects the sequence of bytes in State but does not alter byte contents and does not depend on byte contents to perform its transformation. InvSubBytes affects the contents of bytes in State but does not alter byte sequence and does not depend on byte sequence to perform its transformation. Thus, these two operations commute and can be interchanged. For a given State Si,

InvShiftRows [InvSubBytes (Si)] = InvSubBytes [InvShiftRows (Si)]

Interchanging AddRoundKey and InvMixColumns-The transformations AddRoundKey and InvMixColumns do not alter the sequence of bytes in State. If we view the key as a sequence of words, then both AddRoundKey and InvMixColumns operate on State one column at a time. These two operations are linear with respect to the column input. That is, for a given State Si and a given round key wj:

InvMixColumns(Si ⊕ wj) = [InvMixColumns(Si)] ⊕ [InvMixColumns(wj)]

To see this, suppose that the first column of State Si is the sequence (y0, y1, y2, y3) and the first column of the round key wj is (k0, k1, k2, k3). Then we need to show that

This equation is valid by inspection. Thus, we can interchange AddRoundKey and InvMixColumns, provided that we first apply InvMixColumns to the round key. We do not need to apply InvMixColumns to the round key for the input to the first AddRoundKey transformation (preceding the first round) nor to the last AddRoundKey transformation (in round 10). This is because these two AddRoundKey transformations are not interchanged with InvMixColumns to produce the equivalent decryption algorithm. 


Introduction:-Because of the potential vulnerability of DES to a brute-force attack, there has been considerable interest in finding an alternative. One approach is to design a completely new algorithm, of which AES is a prime example. Another alternative, which would preserve the existing investment in software and equipment, is to use multiple encryptions with DES and multiple keys. We begin by examining the simplest example of this second alternative. We then look at the widely accepted triple DES (3DES) approach.

Double DES-The simplest form of multiple encryption has two encryption stages and two keys. Given a plaintext P and two encryption keys K1 and K2, ciphertext C is generated as C = E(K2, E(K1, P)).

Decryption requires that the keys be applied in reverse order: P = D(K1, D(K2, C)).

For DES, this scheme apparently involves a key length of 56 × 2 = 112 bits, resulting in a dramatic increase in cryptographic strength.

Reduction to a Single Stage-Suppose it were true for DES, for all 56-bit key values, that given any two keys K1 and K2, it would be possible to find a key K3 such that E(K2, E(K1, P)) = E(K3, P).

If this were the case, then double encryption, and indeed any number of stages of multiple encryption with DES, would be useless because the result would be equivalent to a single encryption with a single 56-bit key. On the other hand, DES defines one mapping for each different key, for a total number of mappings:

2^56 < 10^17

Therefore, it is reasonable to assume that if DES is used twice with different keys, it will produce one of the many mappings that are not defined by a single application of DES.

Meet-in-the-Middle Attack-The use of double DES results in a mapping that is not equivalent to a single DES encryption. But there is a way to attack this scheme, one that does not depend on any particular property of DES but that will work against any block encryption cipher. It is based on the observation that, if we have C = E(K2, E(K1, P)), then X = E(K1, P) = D(K2, C).

Given a known pair, (P, C), the attack proceeds as follows. First, encrypt P for all 2^56 possible values of K1. Store these results in a table and then sort the table by the values of X. Next, decrypt C using all 2^56 possible values of K2. As each decryption is produced, check the result against the table for a match. If a match occurs, then test the two resulting keys against a new known plaintext-ciphertext pair. If the two keys produce the correct ciphertext, accept them as the correct keys. For any given plaintext P, there are 2^64 possible ciphertext values that could be produced by double DES. Double DES uses, in effect, a 112-bit key, so that there are 2^112 possible keys. The result is that a known-plaintext attack will succeed against double DES, which has a key size of 112 bits, with an effort on the order of 2^56, not much more than the 2^55 required for single DES.
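The table-building idea can be demonstrated end to end on a deliberately tiny cipher. The sketch below uses a hypothetical invertible 8-bit "cipher" with 8-bit keys (invented for illustration, not DES), so the whole key space of 256 values can be enumerated; the structure of the attack is exactly the one just described.

    def toy_enc(k, p):
        # Hypothetical invertible 8-bit toy cipher (NOT DES): c = (5*(p + k) mod 256) XOR k
        return (((p + k) % 256) * 5 % 256) ^ k

    def toy_dec(k, c):
        # Inverse of toy_enc: 205 = 5^(-1) mod 256
        return (((c ^ k) * 205) % 256 - k) % 256

    K1, K2 = 0x3A, 0xC5                                   # the "unknown" keys
    P, P2 = 0x42, 0x99                                    # two known plaintexts
    C = toy_enc(K2, toy_enc(K1, P))
    C2 = toy_enc(K2, toy_enc(K1, P2))

    # Forward table: X = E(i, P) for every candidate first key i
    table = {}
    for i in range(256):
        table.setdefault(toy_enc(i, P), []).append(i)

    # Meet in the middle: X = D(j, C); confirm survivors on the second pair
    candidates = [(i, j) for j in range(256) for i in table.get(toy_dec(j, C), [])
                  if toy_enc(j, toy_enc(i, P2)) == C2]
    assert (K1, K2) in candidates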


Introduction:-An obvious counter to the meet-in-the-middle attack is to use three stages of encryption with three different keys. This raises the cost of the known-plaintext attack to 2^112, which is beyond what is practical now and far into the future. However, it has the drawback of requiring a key length of 56 × 3 = 168 bits, which may be somewhat unwieldy.
As an alternative, Tuchman proposed a triple encryption method that uses only two keys. The function follows an encrypt-decrypt-encrypt (EDE) sequence: C = E(K1, D(K2, E(K1, P))).

There is no cryptographic significance to the use of decryption for the second stage. Its only advantage is that it allows users of 3DES to decrypt data encrypted by users of the older single DES: with K2 = K1, C = E(K1, D(K1, E(K1, P))) = E(K1, P).

3DES with two keys is a relatively popular alternative to DES and has been adopted for use in the key management standards ANS X9.17 and ISO 8732.

Currently, there are no practical cryptanalytic attacks on 3DES. Coppersmith notes that the cost of a brute-force key search on 3DES is on the order of 2^112 (about 5 × 10^33) and estimates that the cost of differential cryptanalysis suffers an exponential growth, compared to single DES, exceeding 10^52. It is worth looking at several proposed attacks on 3DES that, although not practical, give a flavor for the types of attacks that have been considered and that could form the basis for more successful future attacks. The first serious proposal came from Merkle and Hellman. Their plan involves finding plaintext values that produce a first intermediate value of A = 0 and then using the meet-in-the-middle attack to determine the two keys. The level of effort is 2^56, but the technique requires 2^56 chosen plaintext-ciphertext pairs, a number unlikely to be provided by the holder of the keys.

This method is an improvement over the chosen-plaintext approach but requires more effort. The attack is based on the observation that if we know A and C, then the problem reduces to that of an attack on double DES. Of course, the attacker does not know A, even if P and C are known, as long as the two keys are unknown. However, the attacker can choose a potential value of A and then try to find a known (P, C) pair that produces A. The attack proceeds as follows:

  1. Obtain n (P, C) pairs. This is the known plaintext. Place these in a table sorted on the values of P.
  2. Pick an arbitrary value a for A, and create a second table with entries defined in the following fashion. For each of the 2^56 possible keys K1 = i, calculate the plaintext value Pi that produces a: Pi = D(i, a)
  3. For each Pi that matches an entry in Table 1, create an entry in Table 2 consisting of the K1 value and the value of B that is produced for the (P, C) pair from Table 1, assuming that value of K1: B = D(i, C)
    At the end of this step, sort Table 2 on the values of B.
  4. We now have a number of candidate values of K1 in Table 2 and are in a position to search for a value of K2. For each of the 2^56 possible keys K2 = j, calculate the second intermediate value for our chosen value of a: Bj = D(j, a)
  5. At each step, look up Bj in Table 2. If there is a match, then the corresponding key i from Table 2 plus this value of j are candidate values for the unknown keys (K1, K2). Why? Because we have found a pair of keys (i, j) that produce a known (P, C) pair.
  6. Test each candidate pair of keys (i, j) on a few other plaintext-ciphertext pairs. If a pair of keys produces the desired ciphertext, the task is complete. If no pair succeeds, repeat from step 1 with a new value of a.

A basic result from probability theory is that the expected number of draws required to draw one red ball out of a bin containing n red balls and N green balls is (N + 1)/(n + 1) if the balls are not replaced. So the expected number of values of a that must be tried is, for large n, approximately 2^64/n.


Introduction:-The DES scheme is essentially a block cipher technique that uses b-bit blocks. However, it is possible to convert DES into a stream cipher, using either the cipher feedback (CFB) or the output feedback (OFB) mode. A stream cipher eliminates the need to pad a message to be an integral number of blocks. It also can operate in real time. Thus, if a character stream is being transmitted, each character can be encrypted and transmitted immediately using a character-oriented stream cipher.

One desirable property of a stream cipher is that the ciphertext be of the same length as the plaintext. Thus, if 8-bit characters are being transmitted, each character should be encrypted to produce a cipher text output of 8 bits. If more than 8 bits are produced, transmission capacity is wasted.

In the figure, it is assumed that the unit of transmission is s bits; a common value is s = 8. As with CBC, the units of plaintext are chained together, so that the ciphertext of any plaintext unit is a function of all the preceding plaintext. In this case, rather than units of b bits, the plaintext is divided into segments of s bits.

First, consider encryption. The input to the encryption function is a b-bit shift register that is initially set to some initialization vector (IV). The leftmost (most significant) s bits of the output of the encryption function are XORed with the first segment of plaintext P1 to produce the first unit of ciphertext C1, which is then transmitted. In addition, the contents of the shift register are shifted left by s bits and C1 is placed in the rightmost (least significant) s bits of the shift register. This process continues until all plaintext units have been encrypted.

For decryption, the same scheme is used, except that the received ciphertext unit is XORed with the output of the encryption function to produce the plaintext unit. Note that it is the encryption function that is used, not the decryption function. This is easily explained. Let Ss(X) be defined as the most significant s bits of X. Then C1 = P1 ⊕ Ss[E(K, IV)], and therefore P1 = C1 ⊕ Ss[E(K, IV)].

The same reasoning holds for subsequent steps in the process. 
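A sketch of s-bit CFB. To keep it self-contained, E is a stand-in keyed function built from SHA-256 (an assumption purely for illustration; a real implementation would use DES or AES). Note that decryption calls E, never a decryption function:

    import hashlib

    def E(key, block):
        # Stand-in for a 64-bit block encryption function (toy, not DES)
        return hashlib.sha256(key + block).digest()[:8]

    def cfb(key, iv, data, s=1, decrypt=False):
        reg, out = iv, bytearray()
        for i in range(0, len(data), s):
            ks = E(key, reg)[:s]                       # leftmost s bytes of E(K, register)
            chunk = bytes(a ^ b for a, b in zip(data[i:i+s], ks))
            c = data[i:i+s] if decrypt else chunk      # ciphertext is fed back either way
            reg = reg[len(c):] + c                     # shift register left, insert ciphertext
            out += chunk
        return bytes(out)

    key, iv = b"k" * 8, b"\x00" * 8
    ct = cfb(key, iv, b"attack at dawn")
    assert cfb(key, iv, ct, decrypt=True) == b"attack at dawn"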



Introduction:-The output feedback (OFB) mode is similar in structure to that of CFB. As can be seen, it is the output of the encryption function that is fed back to the shift register in OFB, whereas in CFB the ciphertext unit is fed back to the shift register.
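The feedback difference is a one-line change from the CFB sketch. Again E is a SHA-256-based stand-in assumed only for illustration; in OFB the register is replaced by E's own output, so the keystream never depends on the ciphertext:

    import hashlib

    def E(key, block):
        # Stand-in for a 64-bit block encryption function (toy, not DES)
        return hashlib.sha256(key + block).digest()[:8]

    def ofb(key, iv, data):
        # Encryption and decryption are the same operation
        reg, out = iv, bytearray()
        for i in range(0, len(data), 8):
            reg = E(key, reg)                          # E's output is fed back
            out += bytes(a ^ b for a, b in zip(data[i:i+8], reg))
        return bytes(out)

    key, iv = b"k" * 8, b"\x00" * 8
    assert ofb(key, iv, ofb(key, iv, b"some plaintext!!")) == b"some plaintext!!"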

One advantage of the OFB method is that bit errors in transmission do not propagate. For example, if a bit error occurs in C1, only the recovered value of P1 is affected; subsequent plaintext units are not corrupted. With CFB, C1 also serves as input to the shift register and therefore causes additional corruption downstream.

The disadvantage of OFB is that it is more vulnerable to a message stream modification attack than is CFB. Consider that complementing a bit in the ciphertext complements the corresponding bit in the recovered plaintext. Thus, controlled changes to the recovered plaintext can be made. This may make it possible for an opponent, by making the necessary changes to the checksum portion of the message as well as to the data portion, to alter the ciphertext in such a way that it is not detected by an error-correcting code.


Introduction:-Interest in the counter mode (CTR) has increased recently, with applications to ATM (asynchronous transfer mode) network security and IPSec (IP security), although this mode was proposed early on.

A counter equal to the plaintext block size is used. The only requirement stated in SP 800-38A is that the counter value must be different for each plaintext block that is encrypted. Typically, the counter is initialized to some value and then incremented by 1 for each subsequent block (modulo 2^b, where b is the block size). For encryption, the counter is encrypted and then XORed with the plaintext block to produce the ciphertext block; there is no chaining. For decryption, the same sequence of counter values is used, with each encrypted counter XORed with a ciphertext block to recover the corresponding plaintext block.
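A sketch of CTR with the same kind of stand-in E as in the CFB/OFB sketches (assumed for illustration). Because block i depends only on counter value i, the loop body could run in parallel or start at any block:

    import hashlib

    def E(key, block):
        # Stand-in for a 64-bit block encryption function (toy, not DES)
        return hashlib.sha256(key + block).digest()[:8]

    def ctr(key, counter0, data):
        # Encryption and decryption are identical; no chaining between blocks
        out = bytearray()
        for i in range(0, len(data), 8):
            ctr_block = ((counter0 + i // 8) % 2**64).to_bytes(8, "big")
            ks = E(key, ctr_block)
            out += bytes(a ^ b for a, b in zip(data[i:i+8], ks))
        return bytes(out)

    key = b"k" * 8
    assert ctr(key, 7, ctr(key, 7, b"counter mode demo")) == b"counter mode demo"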

Advantages of CTR mode:

  • Hardware efficiency: Unlike the three chaining modes, encryption (or decryption) in CTR mode can be done in parallel on multiple blocks of plaintext or ciphertext. For the chaining modes, the algorithm must complete the computation on one block before beginning on the next block. This limits the maximum throughput of the algorithm to the reciprocal of the time for one execution of block encryption or decryption. In CTR mode, the throughput is only limited by the amount of parallelism that is achieved.
  • Software efficiency: Similarly, because of the opportunities for parallel execution in CTR mode, processors that support parallel features, such as aggressive pipelining, multiple instruction dispatch per clock cycle, a large number of registers, and SIMD instructions, can be effectively utilized.
  • Preprocessing: The execution of the underlying encryption algorithm does not depend on input of the plaintext or ciphertext. Therefore, if sufficient memory is available and security is maintained, preprocessing can be used to prepare the output of the encryption boxes that feed into the XOR functions. When the plaintext or ciphertext input is presented, the only computation is a series of XORs. Such a strategy greatly enhances throughput.
  • Random access: The ith block of plaintext or ciphertext can be processed in random-access fashion. With the chaining modes, block Ci cannot be computed until the i − 1 prior blocks are computed. There may be applications in which a ciphertext is stored and it is desired to decrypt just one block; for such applications, the random access feature is attractive.
  • Provable security: It can be shown that CTR is at least as secure as the other modes.
  • Simplicity: Unlike ECB and CBC modes, CTR mode requires only the implementation of the encryption algorithm and not the decryption algorithm. This matters most when the decryption algorithm differs substantially from the encryption algorithm, as it does for AES. In addition, the decryption key scheduling need not be implemented.

 

 Introduction:-A typical stream cipher encrypts plaintext one byte at a time, although a stream cipher may be designed to operate on one bit at a time or on units larger than a byte at a time. In this structure a key is input to a pseudorandom bit generator that produces a stream of 8-bit numbers that are apparently random. For now, we simply say that a pseudorandom stream is one that is unpredictable without knowledge of the input key. The output of the generator, called a key stream, is combined one byte at a time with the plaintext stream using the bitwise exclusive-OR (XOR) operation.


For example, if the next byte generated by the generator is 01101100 and the next plaintext byte is 11001100, then the resulting ciphertext byte is 11001100 ⊕ 01101100 = 10100000.

Decryption requires the use of the same pseudorandom sequence: 10100000 ⊕ 01101100 = 11001100.

The stream cipher is similar to the one-time pad. The difference is that a one-time pad uses a genuine random number stream, whereas a stream cipher uses a pseudorandom number stream.

The following are important design considerations for a stream cipher:

  1. The encryption sequence should have a large period. A pseudorandom number generator uses a function that produces a deterministic stream of bits that eventually repeats. The longer the period of repeat, the more difficult it will be to do cryptanalysis. This is essentially the same consideration that was discussed with reference to the Vigenère cipher: the longer the keyword, the more difficult the cryptanalysis.
  2. The keystream should approximate the properties of a true random number stream as closely as possible. For example, there should be an approximately equal number of 1s and 0s. If the keystream is treated as a stream of bytes, then all of the 256 possible byte values should appear approximately equally often. The more random-appearing the keystream is, the more randomized the ciphertext is, making cryptanalysis more difficult.
  3. The output of the pseudorandom number generator is conditioned on the value of the input key. To guard against brute-force attacks, the key needs to be sufficiently long. The same considerations that apply for block ciphers are valid here. Thus, with current technology, a key length of at least 128 bits is desirable.
With a properly designed pseudorandom number generator, a stream cipher can be as secure as a block cipher of comparable key length. The primary advantage of a stream cipher is that stream ciphers are almost always faster and use far less code than do block ciphers. The advantage of a block cipher is that you can reuse keys. If two plaintexts are encrypted with the same key using a stream cipher, then cryptanalysis is often quite simple. If the two ciphertext streams are XORed together, the result is the XOR of the original plaintexts. If the plaintexts are text strings, credit card numbers, or other byte streams with known properties, then cryptanalysis may be successful.

Introduction:-RC4 is a stream cipher designed in 1987 by Ron Rivest for RSA Security. It is a variable key-size stream cipher with byte-oriented operations. The algorithm is based on the use of a random permutation. Eight to sixteen machine operations are required per output byte, and the cipher can be expected to run very quickly in software. RC4 is used in the SSL/TLS (Secure Sockets Layer/Transport Layer Security) standards that have been defined for communication between Web browsers and servers. It is also used in the WEP (Wired Equivalent Privacy) protocol and the newer WiFi Protected Access (WPA) protocol that are part of the IEEE 802.11 wireless LAN standard. RC4 was originally kept as a trade secret by RSA Security.

The RC4 algorithm is remarkably simple and quite easy to explain. A variable-length key of from 1 to 256 bytes (8 to 2048 bits) is used to initialize a 256-byte state vector S, with elements S[0], S[1], ..., S[255]. At all times, S contains a permutation of all 8-bit numbers from 0 through 255. For encryption and decryption, a byte k is generated from S by selecting one of the 256 entries in a systematic fashion. As each value of k is generated, the entries in S are once again permuted.

Initialization of S:-The entries of S are set equal to the values from 0 through 255 in ascending order; that is, S[0] = 0, S[1] = 1, ..., S[255] = 255. A temporary vector, T, is also created. If the length of the key K is 256 bytes, then K is transferred to T. Otherwise, for a key of length keylen bytes, the first keylen elements of T are copied from K, and then K is repeated as many times as necessary to fill out T. These preliminary operations are summarized in the sketch after the stream-generation step below.

Next we use T to produce the initial permutation of S. This involves starting with S[0] and going through to S[255], and, for each S[i], swapping S[i] with another byte in S according to a scheme dictated by T[i] (again, see the sketch below):

Because the only operation on S is a swap, the only effect is a permutation. S still contains all the numbers from 0 through 255.

Stream Generation:-Once the S vector is initialized, the input key is no longer used. Stream generation involves cycling through all the elements of S[i], and, for each S[i], swapping S[i] with another byte in S according to a scheme dictated by the current configuration of S. After S[255] is reached, the process continues, starting over again at S[0]:
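Gathering the three steps just described (initialization of S and T, the initial permutation, and stream generation), a compact sketch of RC4:

    def rc4_init(key):
        # Initialization of S and T, then the key-driven initial permutation of S
        S = list(range(256))
        T = [key[i % len(key)] for i in range(256)]
        j = 0
        for i in range(256):
            j = (j + S[i] + T[i]) % 256
            S[i], S[j] = S[j], S[i]
        return S

    def rc4_keystream(S):
        # Stream generation: swap, then output k = S[(S[i] + S[j]) mod 256]
        i = j = 0
        while True:
            i = (i + 1) % 256
            j = (j + S[i]) % 256
            S[i], S[j] = S[j], S[i]
            yield S[(S[i] + S[j]) % 256]

    def rc4_xcrypt(key, data):
        # Encryption and decryption are the same XOR operation
        ks = rc4_keystream(rc4_init(key))
        return bytes(b ^ next(ks) for b in data)

    ct = rc4_xcrypt(b"Key", b"Plaintext")
    assert rc4_xcrypt(b"Key", ct) == b"Plaintext"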

To encrypt, XOR the value k with the next byte of plaintext. To decrypt, XOR the value k with the next byte of cipher text.
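The initialization, initial permutation, and stream-generation steps described above can be collected into a short Python sketch. This is a direct transcription of the algorithm for illustration only; RC4 is now considered broken and should not be used in new designs.

def rc4_keystream(key):
    # Initialization of S and the temporary vector T
    S = list(range(256))
    T = [key[i % len(key)] for i in range(256)]
    # Initial permutation of S, dictated by T
    j = 0
    for i in range(256):
        j = (j + S[i] + T[i]) % 256
        S[i], S[j] = S[j], S[i]
    # Stream generation: after S[255] is reached, start over at S[0]
    i = j = 0
    while True:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        yield S[(S[i] + S[j]) % 256]

def rc4(key, data):
    # Encryption and decryption are the same operation: XOR with the byte k
    return bytes(b ^ k for b, k in zip(data, rc4_keystream(key)))

ciphertext = rc4(b"Key", b"Plaintext")
assert rc4(b"Key", ciphertext) == b"Plaintext"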

Strength of RC4:-A 2001 analysis by Fluhrer, Mantin, and Shamir demonstrated that the WEP protocol, intended to provide confidentiality on 802.11 wireless LAN networks, is vulnerable to a particular attack approach. In essence, the problem is not with RC4 itself but with the way in which keys are generated for use as input to RC4. This particular problem does not appear to be relevant to other applications using RC4 and can be remedied in WEP by changing the way in which keys are generated. This problem points out the difficulty in designing a secure system that involves both cryptographic functions and the protocols that make use of them.


Introduction:-Random numbers play an important role in the use of encryption for various network security applications.

The Use of Random Numbers -A number of network security algorithms based on cryptography make use of random numbers. For example,

  • In both key distribution scenarios, a nonce is used for handshaking to prevent replay attacks. The use of random numbers for the nonce frustrates an opponent's efforts to determine or guess the nonce.
  • Session key generation, whether done by a key distribution center or by one of the principals.
  • Generation of keys for the RSA public-key encryption algorithm.

These applications give rise to two distinct and not necessarily compatible requirements for a sequence of random numbers: randomness and unpredictability.

Randomness

Traditionally, the concern in the generation of a sequence of allegedly random numbers has been that the sequence of numbers be random in some well-defined statistical sense. The following two criteria are used to validate that a sequence of numbers is random:

  • Uniform distribution: The distribution of numbers in the sequence should be uniform; that is, the frequency of occurrence of each of the numbers should be approximately the same.
  • Independence: No one value in the sequence can be inferred from the others.

Although there are well-defined tests for determining that a sequence of numbers matches a particular distribution, such as the uniform distribution, there is no such test to "prove" independence. Rather, a number of tests can be applied to demonstrate if a sequence does not exhibit independence. The general strategy is to apply a number of such tests until the confidence that independence exists is sufficiently strong.

The use of a sequence of numbers that appear statistically random often occurs in the design of algorithms related to cryptography. In general, it is difficult to determine if a given large number N is prime. A brute-force approach would be to divide N by every odd integer less than √N. If N is on the order, say, of 10^150, a not uncommon occurrence in public-key cryptography, such a brute-force approach is beyond the reach of human analysts and their computers. However, a number of effective algorithms exist that test the primality of a number by using a sequence of randomly chosen integers as input to relatively simple computations. If the sequence is sufficiently long, the primality of a number can be determined with near certainty. This type of approach, known as randomization, crops up frequently in the design of algorithms.

Unpredictability

In applications such as reciprocal authentication and session key generation, the requirement is not so much that the sequence of numbers be statistically random but that the successive members of the sequence are unpredictable. With "true" random sequences, each number is statistically independent of other numbers in the sequence and therefore unpredictable. However, true random numbers are seldom used; rather, sequences of numbers that appear to be random are generated by some algorithm. Care must be taken that an opponent not be able to predict future elements of the sequence on the basis of earlier elements.

Introduction:-Cryptographic applications typically make use of algorithmic techniques for random number generation. These algorithms are deterministic and therefore produce sequences of numbers that are not statistically random. However, if the algorithm is good, the resulting sequences will pass many reasonable tests of randomness. Such numbers are referred to as pseudorandom numbers.

One may be somewhat uneasy about the concept of using numbers generated by a deterministic algorithm as if they were random numbers. Despite what might be called philosophical objections to such a practice, it generally works.

For practical purposes, we are forced to accept the awkward concept of "relatively random," meaning that with regard to the proposed use we can see no reason why they will not perform as if they were random (as the theory usually requires). This is highly subjective and is not very palatable to purists, but it is what statisticians regularly appeal to when they take "a random sample": they hope that any results they use will have approximately the same properties as a complete counting of the whole sample space that occurs in their theory.

Introduction:-The most widely used technique for pseudorandom number generation is an algorithm first proposed by Lehmer, which is known as the linear congruential method. The algorithm is parameterized with four numbers, as follows: the modulus m (m > 0), the multiplier a (0 < a < m), the increment c (0 ≤ c < m), and the starting value, or seed, X0 (0 ≤ X0 < m).

The sequence of random numbers {Xn} is obtained via the following iterative equation:

Xn+1 = (aXn + c) mod m

If m, a, c, and X0 are integers, then this technique will produce a sequence of integers with each integer in the range 0 ≤ Xn < m.

The selection of values for a, c, and m is critical in developing a good random number generator. For example, consider a = c = 1. The sequence produced (each number simply one greater than its predecessor, modulo m) is obviously not satisfactory. Now consider the values a = 7, c = 0, m = 32, and X0 = 1. This generates the sequence {7, 17, 23, 1, 7, etc.}, which is also clearly unsatisfactory. Of the 32 possible values, only 4 are used; thus, the sequence is said to have a period of 4. If, instead, we change the value of a to 5, then the sequence is {5, 25, 29, 17, 21, 9, 13, 1, 5, etc.}, which increases the period to 8.
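Both period observations are easy to verify. A small Python sketch, with parameter names following the text:

def lcg(a, c, m, x0, count):
    # X_{n+1} = (a * X_n + c) mod m
    xs, x = [], x0
    for _ in range(count):
        x = (a * x + c) % m
        xs.append(x)
    return xs

print(lcg(7, 0, 32, 1, 8))   # [7, 17, 23, 1, 7, 17, 23, 1]  -> period 4
print(lcg(5, 0, 32, 1, 8))   # [5, 25, 29, 17, 21, 9, 13, 1] -> period 8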

We would like m to be very large, so that there is the potential for producing a long series of distinct random numbers. A common criterion is that m be nearly equal to the maximum representable nonnegative integer for a given computer. Thus, a value of m near to or equal to 2^31 is typically chosen.

Three tests to be used in evaluating a random number generator:

T1: The function should be a full-period generating function. That is, the function should generate all the numbers between 0 and m before repeating.

T2: The generated sequence should appear random. Because it is generated deterministically, the sequence is not random. There is a variety of statistical tests that can be used to assess the degree to which a sequence exhibits randomness.

T3: The function should implement efficiently with 32-bit arithmetic.

With appropriate values of a, c, and m, these three tests can be passed. With respect to T1, it can be shown that if m is prime and c = 0, then for certain values of a, the period of the generating function is m − 1, with only the value 0 missing. For 32-bit arithmetic, a convenient prime value of m is 2^31 − 1. Thus, the generating function becomes

Xn+1 = (aXn) mod (2^31 − 1)

The strength of the linear congruential algorithm is that if the multiplier and modulus are properly chosen, the resulting sequence of numbers will be statistically indistinguishable from a sequence drawn at random.

 



Introduction:-A popular approach to generating secure pseudorandom numbers is known as the Blum Blum Shub (BBS) generator, named for its developers. It has perhaps the strongest public proof of its cryptographic strength. The procedure is as follows. First, choose two large prime numbers, p and q, that both have a remainder of 3 when divided by 4.
That is,
p ≡ q ≡ 3 (mod 4)

This notation simply means that (p mod 4) = (q mod 4) = 3. Let n = p x q. Next, choose a random number s, such that s is relatively prime to n; this is equivalent to saying that neither p nor q is a factor of s. Then the BBS generator produces a sequence of bits Bi according to the following algorithm:

X0 = s^2 mod n
for i = 1 to ∞
    Xi = (Xi−1)^2 mod n
    Bi = Xi mod 2

Example Operation of BBS Generator

The BBS generator is referred to as a cryptographically secure pseudorandom bit generator (CSPRBG). A CSPRBG is defined as one that passes the next-bit test, which, in turn, is defined as follows: a pseudorandom bit generator is said to pass the next-bit test if there is not a polynomial-time algorithm that, on input of the first k bits of an output sequence, can predict the (k + 1)st bit with probability significantly greater than 1/2. In other words, given the first k bits of the sequence, there is not a practical algorithm that can even allow you to state that the next bit will be 1 (or 0) with probability greater than 1/2. For all practical purposes, the sequence is unpredictable. The security of BBS is based on the difficulty of factoring n. That is, given n, we need to determine its two prime factors p and q.
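A direct Python sketch of the generator follows. The primes p = 383 and q = 503 satisfy p ≡ q ≡ 3 (mod 4), and the seed s = 101 is an arbitrary illustrative choice; real use requires very large primes.

from math import gcd

def bbs_bits(p, q, s, count):
    # p and q must both be primes congruent to 3 (mod 4)
    assert p % 4 == 3 and q % 4 == 3
    n = p * q
    assert gcd(s, n) == 1          # s must be relatively prime to n
    x = (s * s) % n                # X_0 = s^2 mod n
    bits = []
    for _ in range(count):
        x = (x * x) % n            # X_i = (X_{i-1})^2 mod n
        bits.append(x % 2)         # B_i = X_i mod 2 (least significant bit)
    return bits

print(bbs_bits(383, 503, 101, 20))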

Introduction:-A true random number generator (TRNG) uses a nondeterministic source to produce randomness. Most operate by measuring unpredictable natural processes, such as pulse detectors of ionizing radiation events, gas discharge tubes, and leaky capacitors. Intel has developed a commercially available chip that samples thermal noise by amplifying the voltage measured across undriven resistors. A group at Bell Labs has developed a technique that uses the variations in the response time of raw read requests for one disk sector of a hard disk. LavaRnd is an open source project for creating truly random numbers using inexpensive cameras, open source code, and inexpensive hardware. The system uses a saturated CCD in a light-tight can as a chaotic source to produce the seed. Software processes the result into truly random numbers in a variety of formats.

There are problems both with the randomness and the precision of such numbers, to say nothing of the clumsy requirement of attaching one of these devices to every system in an internetwork. Another alternative is to dip into a published collection of good-quality random numbers. However, these collections provide a very limited source of numbers compared to the potential requirements of a sizable network security application. Furthermore, although the numbers in these books do indeed exhibit statistical randomness, they are predictable, because an opponent who knows that the book is in use can obtain a copy.

Skew

A true random number generator may produce an output that is biased in some way, such as having more ones than zeros or vice versa. Various methods of modifying a bit stream to reduce or eliminate the bias have been developed. These are referred to as deskewing algorithms. One approach to deskewing is to pass the bit stream through a hash function such as MD5 or SHA-1. The hash function produces an n-bit output from an input of arbitrary length. For deskewing, blocks of m input bits, with m ≥ n, can be passed through the hash function.
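A minimal sketch of this hash-based deskewing in Python, using SHA-256 from the standard library in place of MD5 or SHA-1 (both now deprecated); the block size is an illustrative choice.

import hashlib

def deskew(raw, block_bytes=64):
    # Pass blocks of m input bits (here 512 bits = 64 bytes) through a hash
    # function, keeping the n-bit digests; with m >= n, bias is compressed away.
    out = bytearray()
    for i in range(0, len(raw) - block_bytes + 1, block_bytes):
        out += hashlib.sha256(raw[i:i + block_bytes]).digest()
    return bytes(out)

# Example: a heavily biased source (mostly 0xFF bytes) still yields
# output whose bits look roughly balanced after deskewing.
biased = bytes(0xFF if i % 10 else 0x00 for i in range(640))
print(deskew(biased).hex())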

 Introduction: -The use of a key distribution center is based on the use of a hierarchy of keys. At a minimum, two levels of keys are used. Communication between end systems is encrypted using a temporary key, often referred to as a Session Key. Typically, the session key is used for the duration of a logical connection, such as a frame relay connection or transport connection, and then discarded. Each session key is obtained from the key distribution center over the same networking facilities used for end-user communication. Accordingly, session keys are transmitted in encrypted form, using a master key that is shared by the key distribution center and an end system or user.

Key Hierarchy



 

For each end system or user, there is a unique master key that it shares with the key distribution center. Of course, these master keys must be distributed in some fashion. However, the scale of the problem is vastly reduced. If there are N entities that wish to communicate in pairs, then, as was mentioned, as many as [N(N − 1)]/2 session keys are needed at any one time. However, only N master keys are required, one for each entity. Thus, master keys can be distributed in some noncryptographic way, such as physical delivery.


Introduction: -The scheme is useful for providing end-to-end encryption at a network or transport level in a way that is transparent to the end users. The approach assumes that communication makes use of a connection-oriented end-to-end protocol, such as TCP. The noteworthy element of this approach is a session security module (SSM), which may consist of functionality at one protocol layer, that performs end-to-end encryption and obtains session keys on behalf of its host or terminal.

The steps involved in establishing a connection are shown in the figure. When one host wishes to set up a connection to another host, it transmits a connection-request packet (step 1). The SSM saves that packet and applies to the KDC for permission to establish the connection (step 2).

 

The communication between the SSM and the KDC is encrypted using a master key shared only by this SSM and the KDC. If the KDC approves the connection request, it generates the session key and delivers it to the two appropriate SSMs, using a unique permanent key for each SSM (step 3). The requesting SSM can now release the connection request packet, and a connection is set up between the two end systems (step 4). All user data exchanged between the two end systems are encrypted by their respective SSMs using the one-time session key.

The automated key distribution approach provides the flexibility and dynamic characteristics needed to allow a number of terminal users to access a number of hosts and for the hosts to exchange data with each other.

Decentralized Key Control-A decentralized approach requires that each end system be able to communicate in a secure manner with all potential partner end systems for purposes of session key distribution. Thus, there may need to be as many as [n(n − 1)]/2 master keys for a configuration with n end systems.

A session key may be established with the following sequence of steps:

1. A issues a request to B for a session key and includes a nonce, N1.

2. B responds with a message that is encrypted using the shared master key. The response includes the session key selected by B, an identifier of B, the value f(N1), and another nonce, N2.

3. Using the new session key, A returns f(N2) to B.








Introduction:-The concept of a key hierarchy and the use of automated key distribution techniques greatly reduce the number of keys that must be manually managed and distributed. It may also be desirable to impose some control on the way in which automatically distributed keys are used.

For example, in addition to separating master keys from session keys, we may wish to define different types of session keys on the basis of use, such as

  • Data-encrypting key, for general communication across a network
  • PIN-encrypting key, for personal identification numbers (PINs) used in electronic funds transfer and point-of-sale applications
  • File-encrypting key, for encrypting files stored in publicly accessible locations

To appreciate the value of separating keys by type, consider the risk that a master key is imported as a data-encrypting key into a device. Normally, the master key is physically secured within the cryptographic hardware of the key distribution center and of the end systems. Session keys encrypted with this master key are available to application programs, as are the data encrypted with such session keys. However, if a master key is treated as a session key, it may be possible for an unauthorized application to obtain plaintext of session keys encrypted with that master key.

One simple technique, the key tag, is for use with DES and makes use of the extra 8 bits in each 64-bit DES key. That is, the 8 non-key bits ordinarily reserved for parity checking form the key tag. The bits have the following interpretation:

  • One bit indicates whether the key is a session key or a master key.
  • One bit indicates whether the key can be used for encryption.
  • One bit indicates whether the key can be used for decryption.
  •  The remaining bits are spares for future use.

The drawbacks of this scheme are that (1) the tag length is limited to 8 bits, limiting its flexibility and functionality; and (2) because the tag is not transmitted in clear form, it can be used only at the point of decryption, limiting the ways in which key use can be controlled.

Control Vector Encryption and Decryption

A more flexible scheme is referred to as the control vector. In this scheme, each session key has an associated control vector consisting of a number of fields that specify the uses and restrictions for that session key. The length of the control vector may vary.

The control vector is cryptographically coupled with the key at the time of key generation at the KDC. As a first step, the control vector is passed through a hash function that produces a value whose length is equal to the encryption key length. In essence, a hash function maps values from a larger range into a smaller range, with a reasonably uniform spread.

The hash value is then XORed with the master key to produce an output that is used as the key input for encrypting the session key. 
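The coupling step can be sketched in a few lines of Python. SHA-256 and the XOR "encryption" below are stand-ins for the hash function and block cipher actually used; the sketch shows only how the control vector is cryptographically bound to the session key.

import hashlib, os

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def couple_session_key(master_key, session_key, control_vector):
    # Hash the control vector down to the length of the encryption key ...
    h = hashlib.sha256(control_vector).digest()[:len(master_key)]
    # ... XOR it with the master key to form the key input ...
    key_input = xor_bytes(master_key, h)
    # ... and "encrypt" the session key under that key input
    # (XOR stands in for the real block-cipher encryption).
    return xor_bytes(session_key, key_input)

master = os.urandom(16)
session = os.urandom(16)
cv = b"data-encrypting; no export"     # hypothetical control-vector fields
protected = couple_session_key(master, session, cv)
# Recovery succeeds only with the same master key AND the same control vector:
assert couple_session_key(master, protected, cv) == session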




Introduction:-Users are sometimes concerned about security against traffic analysis. Even in commercial applications, traffic analysis may yield information that the traffic generators would like to conceal. The following types of information can be derived from a traffic analysis attack:

  •  Identities of partners
  •  How frequently the partners are communicating
  •  Message pattern, message length, or quantity of messages that suggest important information is being exchanged
  •  The events that correlate with special conversations between particular partners

 A covert channel is a means of communication in a fashion unintended by the designers of the communications facility. Typically, the channel is used to transfer information in a way that violates a security policy.

Link Encryption Approach

In link encryption, network-layer headers are encrypted, reducing the opportunity for traffic analysis. However, it is still possible in those circumstances for an attacker to assess the amount of traffic on a network and to observe the amount of traffic entering and leaving each end system. An effective countermeasure to this attack is traffic padding.

                                           Traffic-Padding Encryption Device

Traffic padding produces cipher text output continuously, even in the absence of plaintext. A continuous random data stream is generated. When plaintext is available, it is encrypted and transmitted. When input plaintext is not present, random data are encrypted and transmitted. This makes it impossible for an attacker to distinguish between true data flow and padding and therefore impossible to deduce the amount of traffic.

Traffic padding is essentially a link encryption function. If only end-to-end encryption is employed, then the measures available to the defender are more limited. For example, if encryption is implemented at the application layer, then an opponent can determine which transport entities are engaged in dialogue. If encryption techniques are housed at the transport layer, then network-layer addresses and traffic patterns remain accessible.

One technique that might prove useful is to pad out data units to a uniform length at either the transport or application level. In addition, null messages can be inserted randomly into the stream. These tactics deny an opponent knowledge about the amount of data exchanged between end users and obscure the underlying traffic pattern.


 Public key encryption and Hash Functions

 ==================================

Introduction: - An integer p > 1 is a prime number if and only if its only divisors are ±1 and ±p. Prime numbers play a critical role in number theory.

Any integer a > 1 can be factored in a unique way as

a = p1^a1 × p2^a2 × ... × pt^at     (Equation 1)

where p1 < p2 < ... < pt are prime numbers and where each ai is a positive integer. This is known as the fundamental theorem of arithmetic.
If P is the set of all prime numbers, then any positive integer a can be written uniquely in the following form:

a = ∏(p ∈ P) p^ap,  where each ap ≥ 0

The right-hand side is the product over all possible prime numbers p; for any particular value of a, most of the exponents ap will be 0.

The value of any given positive integer can be specified by simply listing all the nonzero exponents in the foregoing formulation.

Multiplication of two numbers is equivalent to adding the corresponding exponents: if k = ab, then kp = ap + bp for all p ∈ P.

 Introduction:-For many cryptographic algorithms, it is necessary to select one or more very large prime numbers at random. Thus we are faced with the task of determining whether a given large number is prime. There is no simple yet efficient means of accomplishing this task.

Miller-Rabin Algorithm

The algorithm due to Miller and Rabin is typically used to test a large number for primality. Before explaining the algorithm, we need some background. First, any positive odd integer n ≥ 3 can be expressed as follows:

n − 1 = 2^k q,  with k > 0, q odd

To see this, note that (n − 1) is an even integer. Then, divide (n − 1) by 2 until the result is an odd number q, for a total of k divisions. If n is expressed as a binary number, then the result is achieved by shifting the number to the right until the rightmost digit is a 1, for a total of k shifts. We now develop two properties of prime numbers that we will need.

Two Properties of Prime Numbers

The first property is stated as follows:

1. If p is prime and a is a positive integer less than p, then a^2 mod p = 1 if and only if either a mod p = 1 or a mod p = −1 mod p = p − 1. By the rules of modular arithmetic, (a mod p)(a mod p) = a^2 mod p. Thus, if either a mod p = 1 or a mod p = −1, then a^2 mod p = 1. Conversely, if a^2 mod p = 1, then (a mod p)^2 = 1, which is true only for a mod p = 1 or a mod p = −1.

The second property is stated as follows:

2. Let p be a prime number greater than 2. We can then write p − 1 = 2^k q, with k > 0 and q odd. Let a be any integer in the range 1 < a < p − 1. Then one of the two following conditions is true:

  1. a^q is congruent to 1 modulo p. That is, a^q mod p = 1, or equivalently, a^q ≡ 1 (mod p).
  2. One of the numbers a^q, a^(2q), a^(4q), ..., a^(2^(k−1) q) is congruent to −1 modulo p. That is, there is some number j in the range 1 ≤ j ≤ k such that a^(2^(j−1) q) mod p = −1 mod p = p − 1, or equivalently, a^(2^(j−1) q) ≡ −1 (mod p).

Proof: Fermat's theorem states that a^(n−1) ≡ 1 (mod n) if n is prime. We have p − 1 = 2^k q. Thus, we know that a^(p−1) mod p = a^(2^k q) mod p = 1. Thus, if we look at the sequence of numbers

a^q mod p, a^(2q) mod p, a^(4q) mod p, ..., a^(2^(k−1) q) mod p, a^(2^k q) mod p

we know that the last number in the list has value 1. Further, each number in the list is the square of the previous number. Therefore, one of the following possibilities must be true:
1. The first number on the list, and therefore all subsequent numbers on the list, equals 1.
2. Some number on the list does not equal 1, but its square mod p does equal 1. By virtue of the first property of prime numbers defined above, we know that the only number that satisfies this condition is p − 1. So, in this case, the list contains an element equal to p − 1.
The procedure TEST takes a candidate integer n as input and returns the result composite if n is definitely not a prime, and the result inconclusive if n may or may not be a prime.
TEST (n)

1.  Find integers k, q, with k > 0, q odd, so that (n − 1 = 2^k q);

2.  Select a random integer a, 1 < a < n − 1;

3.  if a^q mod n = 1 then return("inconclusive");

4.  for j = 0 to k − 1 do

5.      if a^(2^j q) mod n = n − 1 then return("inconclusive");

6.  return("composite");
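The TEST procedure translates directly into Python. Repeating the test with independently chosen values of a makes the probability that a composite number survives arbitrarily small; the trial count of 20 below is an illustrative choice.

import random

def miller_rabin_test(n, a):
    # Step 1: find k > 0 and odd q with n - 1 = 2^k * q
    k, q = 0, n - 1
    while q % 2 == 0:
        k += 1
        q //= 2
    # Step 3: a^q mod n = 1 -> inconclusive (n may be prime)
    x = pow(a, q, n)
    if x == 1:
        return "inconclusive"
    # Steps 4-5: look for a^(2^j * q) mod n = n - 1, for j = 0 .. k-1
    for _ in range(k):
        if x == n - 1:
            return "inconclusive"
        x = (x * x) % n
    return "composite"

def probably_prime(n, trials=20):
    if n < 4:
        return n in (2, 3)
    if n % 2 == 0:
        return False
    # Step 2, repeated: a fresh random a with 1 < a < n - 1 for each trial
    return all(miller_rabin_test(n, random.randrange(2, n - 1)) == "inconclusive"
               for _ in range(trials))

print(probably_prime(97))    # True
print(probably_prime(561))   # False (561 is a Carmichael number)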



Introduction:-One of the most useful results of number theory is the Chinese remainder theorem (CRT). In essence, the CRT says it is possible to reconstruct integers in a certain range from their residues modulo a set of pairwise relatively prime moduli.

The 10 integers in Z10, that is the integers 0 through 9, can be reconstructed from their two residues modulo 2 and 5 (the relatively prime factors of 10). Say the known residues of a decimal digit x are r2 = 0 and r5 = 3; that is, x mod 2 =0 and x mod 5 = 3. Therefore, x is an even integer in Z10 whose remainder, on division by 5, is 3. The unique solution is x = 8.

The CRT can be stated in several ways. We present here a formulation that is most useful from the point of view of this text. Let

M = m1 × m2 × ... × mk

where the mi are pairwise relatively prime; that is, gcd(mi, mj) = 1 for 1 ≤ i, j ≤ k and i ≠ j. We can represent any integer A in ZM by a k-tuple whose elements are in Zmi, using the following correspondence:

A ↔ (a1, a2, ..., ak),  where ai = A mod mi

1. The mapping of the equation is a one-to-one correspondence (called a bijection) between ZM and the Cartesian product Zm1 x Zm2 x ... x Zmk. That is, for every integer A such that 0 ≤ A < M there is a unique k-tuple (a1, a2, ..., ak) with 0 ≤ ai < mi that represents it, and for every such k-tuple (a1, a2, ..., ak) there is a unique integer A in ZM.

2.  Operations performed on the elements of ZM can be equivalently performed on the corresponding k-tuples by performing the operation independently in each coordinate position in the appropriate system.

3. Let us demonstrate the first assertion. The transformation from A to (a1, a2, ..., ak) is obviously unique; that is, each ai is uniquely calculated as ai = A mod mi. Computing A from (a1, a2, ..., ak) can be done as follows.

Let Mi = M/mi for 1 ≤ i ≤ k. Note that Mi = m1 x m2 x ... x mi−1 x mi+1 x ... x mk, so that Mi ≡ 0 (mod mj) for all j ≠ i. Then let

ci = Mi × (Mi^−1 mod mi)  for 1 ≤ i ≤ k

By the definition of Mi, it is relatively prime to mi and therefore has a unique multiplicative inverse mod mi. So ci is well defined and produces a unique value. We can now compute:

A = ( Σ(i=1 to k) ai ci ) mod M

To show that the value of A produced is correct, we must show that ai = A mod mi for 1 ≤ i ≤ k. Note that cj ≡ Mj ≡ 0 (mod mi) if j ≠ i, and that ci ≡ 1 (mod mi). It follows that ai = A mod mi.

The second assertion of the CRT, concerning arithmetic operations, follows from the rules for modular arithmetic. That is, the second assertion can be stated as follows: if

A ↔ (a1, a2, ..., ak) and B ↔ (b1, b2, ..., bk),

then

(A + B) mod M ↔ ((a1 + b1) mod m1, ..., (ak + bk) mod mk)
(A − B) mod M ↔ ((a1 − b1) mod m1, ..., (ak − bk) mod mk)
(A × B) mod M ↔ ((a1 × b1) mod m1, ..., (ak × bk) mod mk)

One of the useful features of the Chinese remainder theorem is that it provides a way to manipulate (potentially very large) numbers mod M in terms of tuples of smaller numbers. 
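Both directions of the correspondence can be sketched in Python, reusing the decimal-digit example above (m1 = 2, m2 = 5, M = 10). Note that pow(x, -1, m) computes a modular inverse in Python 3.8 and later.

from math import prod

def to_residues(A, moduli):
    # A -> (a_1, ..., a_k), with a_i = A mod m_i
    return tuple(A % m for m in moduli)

def from_residues(residues, moduli):
    # Reconstruct A = (sum of a_i * c_i) mod M, with c_i = M_i * (M_i^-1 mod m_i)
    M = prod(moduli)
    A = 0
    for a_i, m_i in zip(residues, moduli):
        M_i = M // m_i
        A += a_i * M_i * pow(M_i, -1, m_i)
    return A % M

moduli = (2, 5)
assert from_residues((0, 3), moduli) == 8        # the x = 8 example above
# Arithmetic can be done coordinate-wise, e.g. (8 + 7) mod 10:
s = tuple((a + b) % m for a, b, m in zip(to_residues(8, moduli),
                                         to_residues(7, moduli), moduli))
assert from_residues(s, moduli) == (8 + 7) % 10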


Introduction: -Discrete logarithms are fundamental to a number of public-key algorithms, including Diffie-Hellman key exchange and the digital signature algorithm (DSA).

The Powers of an Integer, Modulo n

Euler's theorem states that, for every a and n that are relatively prime:

a^φ(n) ≡ 1 (mod n)   ------------ (1)

where φ(n), Euler's totient function, is the number of positive integers less than n and relatively prime to n. Now consider the more general expression:

a^m ≡ 1 (mod n)     -------------- (2)

If a and n are relatively prime, then there is at least one integer m that satisfies Equation (2), namely, m = φ(n). The least positive exponent m for which Equation (2) holds is referred to in several ways:

  • the order of a (mod n)
  • the exponent to which a belongs (mod n)
  • the length of the period generated by a

The table shows all the powers of a, modulo 19, for all positive a < 19. The length of the sequence for each base value is indicated by shading. Note the following:

1. All sequences end in 1. This is consistent with the reasoning of the preceding few paragraphs.

2. The length of a sequence divides φ(19) = 18. That is, an integral number of sequences occur in each row of the table.

3. Some of the sequences are of length 18. In this case, it is said that the base integer a generates the set of nonzero integers modulo 19. Each such integer is called a primitive root of the modulus 19.

The highest possible exponent to which a number can belong (mod n) is φ(n). If a number is of this order, it is referred to as a primitive root of n. The importance of this notion is that if a is a primitive root of n, then its powers a, a^2, ..., a^φ(n) are distinct (mod n) and are all relatively prime to n. In particular, for a prime number p, if a is a primitive root of p, then a, a^2, ..., a^(p−1) are distinct (mod p). For the prime number 19, its primitive roots are 2, 3, 10, 13, 14, and 15. Not all integers have primitive roots. In fact, the only integers with primitive roots are those of the form 2, 4, p^α, and 2p^α, where p is any odd prime and α is a positive integer.
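The orders and primitive roots quoted above for n = 19 can be recomputed by brute force, which is fine for such a small modulus:

def order(a, n):
    # Smallest m >= 1 with a^m ≡ 1 (mod n); assumes gcd(a, n) = 1
    x, m = a % n, 1
    while x != 1:
        x = (x * a) % n
        m += 1
    return m

n = 19
roots = [a for a in range(2, n) if order(a, n) == n - 1]
print(roots)   # [2, 3, 10, 13, 14, 15] -- the primitive roots of 19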


Introduction: -The concept of public-key cryptography evolved from an attempt to attack two of the most difficult problems associated with symmetric encryption. The first problem is that of key distribution. Key distribution under symmetric encryption requires either (1) that two communicants already share a key, which somehow has been distributed to them; or (2) the use of a key distribution center.


The second problem that Diffie pondered, and one that was apparently unrelated to the first, was that of "digital signatures." If the use of cryptography was to become widespread, not just in military situations but for commercial and private purposes, then electronic messages and documents would need the equivalent of signatures used in paper documents.

Diffie and Hellman achieved an astounding breakthrough in 1976 by coming up with a method that addressed both problems and that was radically different from all previous approaches to cryptography.

Public-Key Cryptosystems:-Asymmetric algorithms rely on one key for encryption and a different but related key for decryption. These algorithms have the following important characteristic:

  • It is computationally infeasible to determine the decryption key given only knowledge of the cryptographic algorithm and the encryption key.

A public-key encryption scheme has six ingredients –

 

·  Plaintext: This is the readable message or data that is fed into the algorithm as input.

· Encryption algorithm: The encryption algorithm performs various transformations on the plaintext.

· Public and private keys: This is a pair of keys that have been selected so that if one is used for encryption, the other is used for decryption. The exact transformations performed by the algorithm depend on the public or private key that is provided as input.

·  Ciphertext: This is the scrambled message produced as output. It depends on the plaintext and the key. For a given message, two different keys will produce two different ciphertexts.

· Decryption algorithm: This algorithm accepts the ciphertext and the matching key and produces the original plaintext.

 

The essential steps are the following:

1.      Each user generates a pair of keys to be used for the encryption and decryption of messages.

2.      Each user places one of the two keys in a public register or other accessible file. This is the public key. The companion key is kept private. Each user maintains a collection of public keys obtained from others.

3.      If Bob wishes to send a confidential message to Alice, Bob encrypts the message using Alice's public key.

4.      When Alice receives the message, she decrypts it using her private key. No other recipient can decrypt the message because only Alice knows Alice's private key.

With this approach, all participants have access to public keys, and private keys are generated locally by each participant and therefore need never be distributed. As long as a user's private key remains protected and secret, incoming communication is secure. At any time, a system can change its private key and publish the companion public key to replace its old public key.

The two keys used for asymmetric encryption are referred to as the public key and the private key. The private key is kept secret, but it is referred to as a private key rather than a secret key to avoid confusion with symmetric encryption. The important aspects of symmetric and public-key encryption are as follows:-

Applications for Public-Key Cryptosystems

Public-key systems are characterized by the use of a cryptographic algorithm with two keys, one held private and one available publicly. Depending on the application, the sender uses either the sender's private key or the receiver's public key, or both, to perform some type of cryptographic function. Public-key cryptosystems are classified as follows:-

· Encryption/decryption: The sender encrypts a message with the recipient's public key.

· Digital signature: The sender "signs" a message with its private key. Signing is achieved by a cryptographic algorithm applied to the message or to a small block of data that is a function of the message.

· Key exchange: Two sides cooperate to exchange a session key. Several different approaches are possible, involving the private key(s) of one or both parties.


Introduction: -The RSA scheme is a block cipher in which the plaintext and ciphertext are integers between 0 and n − 1 for some n. A typical size for n is 1024 bits, or 309 decimal digits. That is, n is less than 2^1024.

Description:-The scheme developed by Rivest, Shamir, and Adleman makes use of an expression with exponentials. Plaintext is encrypted in blocks, with each block having a binary value less than some number n. That is, the block size must be less than or equal to log2(n); in practice, the block size is i bits, where 2^i < n ≤ 2^(i+1). Encryption and decryption are of the following form, for some plaintext block M and ciphertext block C:

C = M^e mod n

M = C^d mod n = (M^e)^d mod n = M^(ed) mod n

Both sender and receiver must know the value of n. The sender knows the value of e, and only the receiver knows the value of d. Thus, this is a public-key encryption algorithm with a public key of PU = {e, n} and a private key of PR = {d, n}. For this algorithm to be satisfactory for public-key encryption, the following requirements must be met:

  1. It is possible to find values of e, d, n such that M^(ed) mod n = M for all M < n.
  2. It is relatively easy to calculate M^e mod n and C^d mod n for all values of M < n.
  3. It is infeasible to determine d given e and n.

For now, we focus on the first requirement and consider the other questions later. We need to find a relationship of the form

M^(ed) mod n = M

The preceding relationship holds if e and d are multiplicative inverses modulo φ(n), where φ(n) is the Euler totient function. For p, q prime, φ(pq) = (p − 1)(q − 1). The relationship between e and d can be expressed as

ed mod φ(n) = 1

This is equivalent to saying

ed ≡ 1 (mod φ(n))

d ≡ e^−1 (mod φ(n))

That is, e and d are multiplicative inverses mod φ(n). According to the rules of modular arithmetic, this is true only if d is relatively prime to φ(n). Equivalently, gcd(φ(n), d) = 1.

                                            RSA Algorithm
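As a concrete illustration, here is a toy Python sketch of key generation, encryption, and decryption with artificially small primes (p = 17, q = 11, e = 7); real keys use primes hundreds of digits long.

p, q = 17, 11                  # toy primes (far too small for real use)
n = p * q                      # n = 187
phi = (p - 1) * (q - 1)        # phi(n) = 160
e = 7                          # public exponent; gcd(e, phi) = 1
d = pow(e, -1, phi)            # d = e^-1 mod phi(n) = 23

M = 88                         # plaintext block, M < n
C = pow(M, e, n)               # C = M^e mod n = 11
assert pow(C, d, n) == M       # M = C^d mod n recovers the plaintext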

Computational Aspects: - There are actually two issues to consider: encryption/decryption and key generation. Let us look first at the process of encryption and decryption and then consider key generation.

Exponentiation in Modular Arithmetic

Both encryption and decryption in RSA involve raising an integer to an integer power, mod n. If the exponentiation is done over the integers and then reduced modulo n, the intermediate values would be gargantuan. Fortunately, as the preceding example shows, we can make use of a property of modular arithmetic:

[(a mod n) x (b mod n)] mod n = (a x b) mod n

Thus, we can reduce intermediate results modulo n. This makes the calculation practical.
Another consideration is the efficiency of exponentiation, because with RSA we are dealing with potentially large exponents. To see how efficiency might be increased, consider that we wish to compute x16. A straightforward approach requires 15 multiplications:

x^16 = x × x × x × x × x × x × x × x × x × x × x × x × x × x × x × x

However, we can achieve the same final result with only four multiplications if we repeatedly take the square of each partial result, successively forming x^2, x^4, x^8, x^16. As another example, suppose we wish to calculate x^11 mod n for some integers x and n. Observe that x^11 = x^(1+2+8) = (x)(x^2)(x^8). In this case, we compute x mod n, x^2 mod n, x^4 mod n, and x^8 mod n and then calculate [(x mod n) × (x^2 mod n) × (x^8 mod n)] mod n.

More generally, suppose we wish to find the value a^b, with a and b positive integers. If we express b as a binary number bk bk−1 ... b0, then we have

a^b = ∏(i such that bi = 1) a^(2^i),  and therefore  a^b mod n = ( ∏(i such that bi = 1) [a^(2^i) mod n] ) mod n
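The binary method just described takes only a few lines of Python and is equivalent to the built-in three-argument pow(a, b, n):

def mod_exp(a, b, n):
    # Scan the bits of b: square at every step, multiply in a^(2^i) when bit i is 1
    result, base = 1, a % n
    while b > 0:
        if b & 1:
            result = (result * base) % n
        base = (base * base) % n   # base holds a^(2^i) mod n
        b >>= 1
    return result

assert mod_exp(7, 560, 561) == pow(7, 560, 561)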

A further practical consideration is padding. The message M to be encrypted is padded. A set of optional parameters, P, is passed through a hash function, H. The output is then padded with zeros to get the desired length in the overall data block (DB). Next, a random seed is generated and passed through another hash function, called the mask generating function (MGF). The resulting hash value is bit-by-bit XORed with DB to produce a maskedDB. The maskedDB is in turn passed through the MGF to form a hash that is XORed with the seed to produce the masked seed. The concatenation of the masked seed and the maskedDB forms the encoded message EM. Note that the EM includes the padded message, masked by the seed, and the seed, masked by the maskedDB. The EM is then encrypted using RSA.

Encryption Using Optimal Asymmetric Encryption Padding (OAEP)


Introduction:-One of the major roles of public-key encryption has been to address the problem of key distribution. There are actually two distinct aspects to the use of public-key cryptography in this regard:

  • The distribution of public keys
  • The use of public-key encryption to distribute secret keys

Distribution of Public Keys

Several techniques have been proposed for the distribution of public keys. Virtually all these proposals can be grouped into the following general schemes:

  • Public announcement
  • Publicly available directory
  • Public-key authority
  • Public-key certificates

1. Public Announcement of Public Keys- The point of public-key encryption is that the public key is public. Thus, if there is some broadly accepted public-key algorithm, such as RSA, any participant can send his or her public key to any other participant or broadcast the key to the community at large.

Although this approach is convenient, it has a major weakness. Anyone can forge such a public announcement. That is, some user could pretend to be user A and send a public key to another participant or broadcast such a public key.

2. Publicly Available Directory-A greater degree of security can be achieved by maintaining a publicly available dynamic directory of public keys. Such a scheme would include the following elements:

  • The authority maintains a directory with a {name, public key} entry for each participant.
  • Each participant registers a public key with the directory authority. Registration would have to be in person or by some form of secure authenticated communication.
  • A participant may replace the existing key with a new one at any time, either because of the desire to replace a public key that has already been used for a large amount of data, or because the corresponding private key has been compromised in some way.
  • Participants could also access the directory electronically. For this purpose, secure, authenticated communication from the authority to the participant is mandatory.

3. Public-Key Authority-The scenario assumes that a central authority maintains a dynamic directory of public keys of all participants. In addition, each participant reliably knows a public key for the authority, with only the authority knowing the corresponding private key. The following steps occur:-

  • A sends a timestamped message to the public-key authority containing a request for the current public key of B.
  • The authority responds with a message that is encrypted using the authority's private key, PRauth. Thus, A is able to decrypt the message using the authority's public key. Therefore, A is assured that the message originated with the authority. The message includes the following:
  1. B's public key, PUb, which A can use to encrypt messages destined for B.
  2. The original request, to enable A to match this response with the corresponding earlier request and to verify that the original request was not altered before reception by the authority.
  3. The original timestamp, so A can determine that this is not an old message from the authority containing a key other than B's current public key.
  • A stores B's public key and also uses it to encrypt a message to B containing an identifier of A (IDA) and a nonce (N1), which is used to identify this transaction uniquely.
  • B retrieves A's public key from the authority in the same manner as A retrieved B's public key.
  • At this point, public keys have been securely delivered to A and B, and they may begin their protected exchange. However, two additional steps are desirable:
  • B sends a message to A encrypted with PUa and containing A's nonce (N1) as well as a new nonce generated by B (N2). Because only B could have decrypted message (3), the presence of N1 in message (6) assures A that the correspondent is B.
  • A returns N2, encrypted using B's public key, to assure B that its correspondent is A. Thus, a total of seven messages are required. However, the initial four messages need be used only infrequently because both A and B can save the other's public key for future use, a technique known as caching. Periodically, a user should request fresh copies of the public keys of its correspondents to ensure currency.

4. Public-Key Certificates-The preceding scenario is attractive, yet it has some drawbacks. The public-key authority could be somewhat of a bottleneck in the system, for a user must appeal to the authority for a public key for every other user that it wishes to contact. As before, the directory of names and public keys maintained by the authority is vulnerable to tampering.

An alternative approach is to use certificates that can be used by participants to exchange keys without contacting a public-key authority, in a way that is as reliable as if the keys were obtained directly from a public-key authority. In essence, a certificate consists of a public key plus an identifier of the key owner, with the whole block signed by a trusted third party. Typically, the third party is a certificate authority, such as a government agency or a financial institution, that is trusted by the user community. A user can present his or her public key to the authority in a secure manner and obtain a certificate. The user can then publish the certificate. Anyone needing this user's public key can obtain the certificate and verify that it is valid by way of the attached trusted signature. A participant can also convey its key information to another by transmitting its certificate. Other participants can verify that the certificate was created by the authority.



Introduction:-The principal attraction of ECC (elliptic curve cryptography), compared to RSA, is that it appears to offer equal security for a far smaller key size, thereby reducing processing overhead. On the other hand, the confidence level in ECC is not yet as high as that in RSA.

ECC is fundamentally more difficult to explain than either RSA or Diffie-Hellman.

Abelian Groups:-An abelian group G, sometimes denoted by {G, •}, is a set of elements with a binary operation, denoted by •, that associates to each ordered pair (a, b) of elements in G an element (a • b) in G, such that the following axioms are obeyed:

(A1) Closure:

 If a and b belong to G, then a • b is also in G.

 (A2) Associative:

 a • (b • c) = (a • b) • c for all a, b, c in G.

 (A3) Identity element:

 There is an element e in G such that a • e = e • a = a for all a in G.

 (A4) Inverse element:

 For each a in G there is an element a' in G such that a • a' = a' • a = e.

 (A5) Commutative:

 a • b = b • a for all a, b in G.

A number of public-key ciphers are based on the use of an abelian group. For example, Diffie-Hellman key exchange involves multiplying pairs of nonzero integers modulo a prime number q. Keys are generated by exponentiation over the group, with exponentiation defined as repeated multiplication.

 


Suppose Alice and Bob wish to exchange keys, and Darth is the adversary. The attack proceeds as follows:

1. Darth prepares for the attack by generating two random private keys XD1 and XD2 and then computing the corresponding public keys YD1 and YD2.

2. Alice transmits YA to Bob.

3. Darth intercepts YA and transmits YD1 to Bob. Darth also calculates K2 = (YA)^XD2 mod q.

4. Bob receives YD1 and calculates K1 = (YD1)^XB mod q.

5. Bob transmits YB to Alice.

6. Darth intercepts YB and transmits YD2 to Alice. Darth calculates K1 = (YB)^XD1 mod q.

7. Alice receives YD2 and calculates K2 = (YD2)^XA mod q.

At this point, Bob and Alice think that they share a secret key, but instead Bob and Darth share secret key K1 and Alice and Darth share secret key K2. All future communication between Bob and Alice is compromised in the following way:

1. Alice sends an encrypted message M: E(K2, M).

2. Darth intercepts the encrypted message and decrypts it, to recover M.

3. Darth sends Bob E(K1, M) or E(K1, M'), where M' is any message. In the first case, Darth simply wants to eavesdrop on the communication without altering it. In the second case, Darth wants to modify the message going to Bob.

The key exchange protocol is vulnerable to such an attack because it does not authenticate the participants. This vulnerability can be overcome with the use of digital signatures and public-key certificates. 
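The attack can be replayed numerically. A small Python sketch with toy parameters (the prime q = 353 and primitive root 3 are illustrative, textbook-sized values; variable names follow the steps above):

import random

q, g = 353, 3                    # toy prime q and a primitive root of q

XA, XB = random.randrange(1, q - 1), random.randrange(1, q - 1)    # Alice, Bob
XD1, XD2 = random.randrange(1, q - 1), random.randrange(1, q - 1)  # Darth

YA, YB = pow(g, XA, q), pow(g, XB, q)
YD1, YD2 = pow(g, XD1, q), pow(g, XD2, q)

# Darth intercepts YA and forwards YD1 to Bob; intercepts YB and forwards YD2 to Alice.
K1_bob   = pow(YD1, XB, q)       # the key Bob thinks he shares with Alice
K1_darth = pow(YB, XD1, q)       # Darth's copy of Bob's key
K2_alice = pow(YD2, XA, q)       # the key Alice thinks she shares with Bob
K2_darth = pow(YA, XD2, q)       # Darth's copy of Alice's key

assert K1_bob == K1_darth and K2_alice == K2_darth   # Darth holds both keys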

Introduction: -The addition operation in ECC is the counterpart of modular multiplication in RSA, and multiple addition is the counterpart of modular exponentiation. To form a cryptographic system using elliptic curves, we have to find a "hard problem" corresponding to factoring the product of two primes or taking the discrete logarithm.

Consider the equation Q = kP, where Q, P ∈ Ep(a, b) and k < p. It is relatively easy to calculate Q given k and P, but it is relatively hard to determine k given Q and P. This is called the discrete logarithm problem for elliptic curves. Consider the group E23(9, 17). This is the group defined by the equation y^2 mod 23 = (x^3 + 9x + 17) mod 23. What is the discrete logarithm k of Q = (4, 5) to the base P = (16, 5)?

The brute-force method is to compute multiples of P until Q is found.

Thus, P = (16, 5); 2P = (20, 20); 3P = (14, 14); 4P = (19, 20); 5P = (13, 10); 6P = (7, 3); 7P = (8, 7); 8P = (12, 17); 9P = (4, 5). Because 9P = (4, 5) = Q, the discrete logarithm of Q = (4, 5) to the base P = (16, 5) is k = 9. In a real application, k would be so large as to make the brute-force approach infeasible.

Analog of Diffie-Hellman Key Exchange:-Key exchange using elliptic curves can be done in the following manner. First pick a large integer q, which is either a prime number p or an integer of the form 2^m. This defines the elliptic group of points Eq(a, b). Next, pick a base point G = (x1, y1) in Eq(a, b) whose order is a very large value n. The order n of a point G on an elliptic curve is the smallest positive integer n such that nG = O. Eq(a, b) and G are parameters of the cryptosystem known to all participants.

A key exchange between users A and B can be accomplished as follows:-

  1. A selects an integer nA less than n. This is A's private key. A then generates a public key PA = nA x G; the public key is a point in Eq(a, b).
  2. B similarly selects a private key nB and computes a public key PB.
  3. A generates the secret key K = nA x PB. B generates the secret key K = nB x PA.

 

The two calculations in step 3 produce the same result because nA x PB = nA x (nB x G) = nB x (nA x G) = nB x PA. To break this scheme, an attacker would need to be able to compute k given G and kG, which is assumed hard.

As an example, take p = 211; Ep(0, −4), which is equivalent to the curve y^2 = x^3 − 4; and G = (2, 2). One can calculate that 240G = O. A's private key is nA = 121, so A's public key is PA = 121(2, 2) = (115, 48). B's private key is nB = 203, so B's public key is 203(2, 2) = (130, 203). The shared secret key is 121(130, 203) = 203(115, 48) = (161, 69).

The secret key is a pair of numbers. If this key is to be used as a session key for conventional encryption, then a single number must be generated. We could simply use the x coordinate or some simple function of the x coordinate.
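The numbers in this example can be checked with a short Python sketch implementing the standard chord-and-tangent addition rules for y^2 = x^3 + ax + b over GF(p); the point at infinity O is represented as None.

p, a, b = 211, 0, -4              # the curve y^2 = x^3 - 4 over GF(211)
O = None                          # the point at infinity

def point_add(P, Q):
    # Standard addition rules for points on y^2 = x^3 + ax + b (mod p)
    if P is O:
        return Q
    if Q is O:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return O                  # P + (-P) = O
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow((2 * y1) % p, -1, p)   # tangent slope
    else:
        lam = (y2 - y1) * pow((x2 - x1) % p, -1, p)          # chord slope
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def scalar_mult(k, P):
    # Double-and-add: the elliptic-curve analog of square-and-multiply
    R = O
    while k > 0:
        if k & 1:
            R = point_add(R, P)
        P = point_add(P, P)
        k >>= 1
    return R

G = (2, 2)
nA, nB = 121, 203
PA, PB = scalar_mult(nA, G), scalar_mult(nB, G)
print(PA, PB)                     # expected (115, 48) and (130, 203) per the text
print(scalar_mult(240, G) is O)   # the order of G is 240
assert scalar_mult(nA, PB) == scalar_mult(nB, PA)   # both derive the same key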

This is perhaps the simplest approach to encryption/decryption using elliptic curves. The first task in this system is to encode the plaintext message m to be sent as an x-y point Pm. It is the point Pm that will be encrypted as a ciphertext and subsequently decrypted. But we cannot simply encode the message as the x or y coordinate of a point.

As with the key exchange system, an encryption/decryption system requires a point G and an elliptic group Eq(a, b) as parameters. Each user A selects a private key nA and generates a public key PA = nA x G.

To encrypt and send a message Pm to B, A chooses a random positive integer k and produces the ciphertext Cm consisting of the pair of points:

Cm = {kG, Pm + kPB}

A has used B's public key PB. To decrypt the ciphertext, B multiplies the first point in the pair by B's secret key and subtracts the result from the second point:

Pm + kPB − nB(kG) = Pm + k(nB G) − nB(kG) = Pm

A has masked the message Pm by adding kPB to it. Nobody but A knows the value of k, so even though PB is a public key, nobody can remove the mask kPB. However, A also includes a "clue," which is enough to remove the mask if one knows the private key nB. For an attacker to recover the message, the attacker would have to compute k given G and kG, which is assumed hard. 


   The Example cryptography : QR codes

    ==============================


                Learning with QR Codes : 


1. QR code or Quick Response Code was developed in 1994 by the Japanese company DENSO WAVE. This kind of barcode can be scanned by smartphones to link to a webpage.

This technology is now a normal part of how we communicate and share information. It can also be a way of making lessons more interactive. 


Generating a QR code

The main platforms we are going to look at are Padlet, MS Forms, and Mentimeter, although other platforms are likely to generate a QR code in similar ways.


1. Padlet – click on the share tab in your chosen Padlet board and select "get QR code" from the share menu. Then, when the image appears, right-click and save the image.

2. Mentimeter – Open the slide you want to share and click on the share tab. A box will pop up. Click on “download QR” and the image will go into your download area on your device. 

3. Microsoft Forms – Forms is highly adaptable and well suited to QR codes. Once you have built your Form, click on "Collect Responses". A drawer menu will slide out, and in the middle you will see four icons. One looks like a QR code. When your mouse hovers over it, text will appear confirming that it is the "QR code" icon. Click on the icon and the code will appear, with a button enabling you to download the image. Another Microsoft application with a QR code is SWAY.

What if the application doesn’t generate a QR code?

Miro for example doesn’t have a built in QR code generator. However there are a number of free QR code generator websites. A few tips before selecting which one to use.


Some you will need to sign up for, which could lead to spam in your inbox, but you do keep all your codes for later use.

Some will have lots of pop-ups for free software. Don't get caught out.

Some are just more complicated and time-consuming than they need to be.

An easy one to use which requires no sign up is QRcode Monkey. This allows you to adapt the code’s colour as well as add logos. You can generate a QR code with anything that has a URL. 

2. A QR code, quick-response code, is a type of two-dimensional matrix barcode invented in 1994 by Masahiro Hara of Japanese company Denso Wave for labelling automobile parts. It features black squares on a white background with fiducial markers, readable by imaging devices like cameras, and processed using Reed–Solomon error correction until the image can be appropriately interpreted. The required data is then extracted from patterns that are present in both the horizontal and the vertical components of the QR image.


Whereas a barcode is a machine-readable optical image that contains information specific to the labeled item, the QR code contains the data for a locator, an identifier, and web-tracking. To store data efficiently, QR codes use four standardized modes of encoding: numeric, alphanumeric, byte or binary, and kanji. Compared to standard UPC barcodes, the QR labeling system was applied beyond the automobile industry because of faster reading of the optical image and greater data-storage capacity in applications such as product tracking, item identification, time tracking, document management, and general marketing. 

The initial alternating-square design presented by the team of researchers, headed by Masahiro Hara, was influenced by the black counters and the white counters played on a Go board; the pattern of the position detection markers was determined by finding the least-used sequence of alternating black-white areas on printed matter, which was found to be 1:1:3:1:1. The functional purpose of the QR code system was to facilitate keeping track of the types and numbers of automobile parts, by replacing individually scanned bar-code labels on each box of auto parts with a single label that contained the data of each label. The quadrangular configuration of the QR code system consolidated the data of the various bar-code labels with Kanji, Kana, and alphanumeric codes printed onto a single label.

As of 2024, QR codes are used in a much broader context, including both commercial tracking applications and convenience-oriented applications aimed at mobile phone users (termed mobile tagging). QR codes may be used to display text to the user, to open a webpage on the user's device, to add a vCard contact to the user's device, to open a Uniform Resource Identifier (URI), to connect to a wireless network, or to compose an email or text message. There are a great many QR code generators available as software or as online tools that are either free or require a paid subscription. The QR code has become one of the most-used types of two-dimensional code. 


3. You can create a QR code using various online tools or libraries. If you want to generate a QR code programmatically, here’s an example using Python with the qrcode library:


# First, install the qrcode library if you haven't already:
# pip install qrcode[pil]

import qrcode

# Data to encode
data = "https://www.example.com"

# Generate QR code
qr = qrcode.QRCode(version=1, box_size=10, border=5)
qr.add_data(data)
qr.make(fit=True)

# Create an image from the QR Code instance
img = qr.make_image(fill_color="black", back_color="white")

# Save the image
img.save("qrcode.png")

This code will generate a QR code for the URL "https://www.example.com" and save it as qrcode.png. You can customize the data variable to encode different information. 
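The Reed–Solomon error-correction level mentioned earlier can also be chosen when generating a code. In the qrcode library this is the error_correction parameter; level H tolerates roughly 30% damage, which is why a logo can be placed over part of a code:

import qrcode

# Level H error correction: up to ~30% of the symbol may be damaged or covered
qr = qrcode.QRCode(error_correction=qrcode.constants.ERROR_CORRECT_H)
qr.add_data("https://www.example.com")
qr.make(fit=True)
qr.make_image(fill_color="black", back_color="white").save("qrcode_h.png")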



                The example of cryptography : Google 2-step authentication
                ===========================================================

Verify with a generated code

Completing Google Sign-In in the robot

Now that we have a working authenticator app, as an example I will demonstrate how to complete the Google Sign-In on https://cloud.robocorp.com.
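Before diving into the robot, it helps to see what the authenticator app actually computes: a time-based one-time password (TOTP, RFC 6238) derived from a shared secret and the current time. Here is a minimal sketch using only the Python standard library; the Base32 secret below is a made-up placeholder:

import base64, hmac, struct, time

def totp(secret_b32, digits=6, period=30):
    # Counter = number of 30-second periods since the Unix epoch (RFC 6238)
    key = base64.b32decode(secret_b32, casefold=True)
    msg = struct.pack(">Q", int(time.time()) // period)
    digest = hmac.new(key, msg, "sha1").digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))                   # placeholder secret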

Libraries

We need to use multiple libraries for this robot:

  • RPA.Browser.Selenium: used to access https://cloud.robocorp.com and to interact with page elements.
  • RPA.Robocorp.Vault: used to access the user's Vault in Control Room.
  • Robot Framework core library String: used for regular expression matching.
  • Robot Framework core library Process: used to Run Process and read stdout content from those processes.
  • Robot Framework core library OperatingSystem: used to Remove File which is created during robot run.

As a Task Teardown, we will close all browsers opened during the execution of the task.

*** Settings ***
Library    RPA.Browser.Selenium
Library    RPA.Robocorp.Vault
Library    String
Library    Process
Library    OperatingSystem
Task Teardown    Close All Browsers

Variables

We will set some variables to avoid repeating text and to allow easier modification of the robot's parameters. All sensitive values have been stored in the Control Room Vault. The ${GOOGLEAUTH_TITLE} text needs to match your account language setting.

*** Variables ***
${ROBOCORP_CLOUD}    https://cloud.robocorp.com
${GOOGLEAUTH_TITLE}    Sign in with Google
${GOOGLEAUTH_ID_NEXT}    //div[@id="identifierNext"]//button
${GOOGLEAUTH_PASSWD_NEXT}    //div[@id="passwordNext"]//button
${GOOGLEAUTH_TOTP_NEXT}    //div[@id="totpNext"]//button

Keywords

The keyword Input text and proceed is used to encapsulate repeatable actions during the Google Sign-In process. As arguments, it gets an ${element} locator and the ${text} to input into that element.

*** Keywords ***
Input text and proceed
    [Arguments]    ${element}    ${text}
    Wait until page contains element    ${element}
    Input Text    ${element}    ${text}

The keyword Get Authenticator Code executes authenticator using the shell, while echo pipes the passphrase into the authenticator command.

Another keyword argument, ${googleuser}, is used to find the code in the authenticator output by regex matching; the keyword then returns the matched code.

*** Keywords ***
Get Authenticator Code
    [Documentation]    Authenticator needs to be set up for Google account
    [Arguments]    ${googleuser}    ${passphrase}
    ${curr_path}=    Replace String    ${CURDIR}    /    \\/
    ${result}=    Run Process    echo ${passphrase} | authenticator --data ${curr_path}\\/authdata generate --refresh once    shell=True
    ${match}    Get Regexp Matches    ${result.stdout}    ${googleuser}: ([\\d]{6})    1
    Notebook Print    MATCH: ${match}[0]
    [Return]    ${match}[0]

The keyword Complete Google Signin is responsible for completing the Google Sign-In. This keyword gets the argument ${startelement}, which defines the element locator that starts the process. The default value has been set here to the locator needed for the Control Room sign-in process.

The keyword reads secrets from Control Room Vault before it starts to interact with page elements.

Before the final stage of inputting the Authenticator code for 2-step verification, the code is retrieved with the Get Authenticator Code keyword.

*** Keywords ***
Complete Google Signin
    [Arguments]    ${startelement}=link=Sign in with Google
    ${secret}    Get Secret    gmail
    Wait until page contains element    ${startelement}
    Click Element    ${startelement}
    Switch window    ${GOOGLEAUTH_TITLE}
    Input Text    //input[@type="email"]    ${secret}[account_user]
    Click Button    ${GOOGLEAUTH_ID_NEXT}
    Input text and proceed    //input[@type="password"]    ${secret}[account_password]
    Click Button    ${GOOGLEAUTH_PASSWD_NEXT}
    ${code}    Get Authenticator Code    ${secret}[account_user]    ${secret}[authenticator_passphrase]
    Input text and proceed    //input[@type="tel"]    ${code}
    Click Button    ${GOOGLEAUTH_TOTP_NEXT}

Task

We have a simple task that opens https://cloud.robocorp.com and signs in using the Google Sign-In process. As proof, we take a screenshot when the page contains the Control Room welcome text Welcome Mika! after a successful sign-in.

Control Room login page

*** Tasks ***
Complete Control Room Google Sign-In
    Open Available Browser    ${ROBOCORP_CLOUD}
    Complete Google Signin
    Wait Until Page Contains    Welcome Mika!
    Capture Page Screenshot

Here is a Robot file containing all the code for the robot above: task.robot


                 MOBILE COMPUTING

                 ==================


Introduction: The rapidly expanding technology of cellular communication, wireless LANs, and satellite services will make information accessible anywhere and at any time. Regardless of size, most mobile computers will be equipped with a wireless connection to the fixed part of the network and, perhaps, to other mobile computers. The resulting computing environment, often referred to as mobile or nomadic computing, no longer requires users to maintain a fixed and universally known position in the network and enables almost unrestricted mobility. Mobility and portability will create an entirely new class of applications and, possibly, new massive markets combining personal computing and consumer electronics.

Mobile Computing is an umbrella term used to describe technologies that enable people to access network services anyplace, anytime, and anywhere.

A communication device can exhibit any one of the following characteristics:

  • Fixed and wired: This configuration describes the typical desktop computer in an office. Neither the weight nor the power consumption of these devices allows for mobile usage. The devices use fixed networks for performance reasons.
  • Mobile and wired: Many of today’s laptops fall into this category; users carry the laptop from one hotel to the next, reconnecting to the company’s network via the telephone network and a modem.
  • Fixed and wireless: This mode is used for installing networks, e.g., in historical buildings to avoid damage by installing wires, or at trade shows to ensure fast network setup.
  • Mobile and wireless: This is the most interesting case. No cable restricts the user, who can roam between different wireless networks. Most technologies discussed in this book deal with this type of device and the networks supporting them. Today’s most successful example for this category is GSM with more than 800 million users.

APPLICATIONS OF MOBILE COMPUTING:

1.Vehicles: Music, news, road conditions, weather reports, and other broadcast information are received via digital audio broadcasting (DAB) with 1.5 Mbit/s. For personal communication, a universal mobile telecommunications system (UMTS) phone might be available offering voice and data connectivity with 384 Kbit/s. The current position of the car is determined via the global positioning system (GPS). Cars driving in the same area build a local ad-hoc network for the fast exchange of information in emergency situations or to help each other keep a safe distance. In case of an accident, not only will the airbag be triggered, but the police and ambulance service will be informed via an emergency call to a service provider. Buses, trucks, and trains are already transmitting maintenance and logistic information to their home base, which helps to improve organization (fleet management), and saves time and money.

2.Emergencies: An ambulance with a high-quality wireless connection to a hospital can carry vital information about injured persons to the hospital from the scene of the accident. All the necessary steps for this particular type of accident can be prepared and specialists can be consulted for an early diagnosis. Wireless networks are the only means of communication in the case of natural disasters such as hurricanes or earthquakes. In the worst cases, only decentralized, wireless ad-hoc networks survive.

3. Business: Managers can use mobile computers for, say, critical presentations to major customers. They can access the latest market share information. At a short recess, they can revise the presentation to take advantage of this information. They can communicate with the office about possible new offers and call meetings to discuss responses to the new proposals. Mobile computers can therefore leverage competitive advantages. A travelling salesman today needs instant access to the company’s database: to ensure that files on his or her laptop reflect the current situation, to enable the company to keep track of all activities of their travelling employees, to keep databases consistent, and so on. With wireless access, the laptop can be turned into a true mobile office, but efficient and powerful synchronization mechanisms are needed to ensure data consistency.

4.Credit Card Verification: At Point of Sale (POS) terminals in shops and supermarkets, when customers use credit cards for transactions, the intercommunication required between the bank central computer and the POS terminal, in order to effect verification of the card usage, can take place quickly and securely over cellular channels using a mobile computer unit. This can speed up the transaction process and relieve congestion at the POS terminals.

5. Replacement of Wired Networks: Wireless networks can also be used to replace wired networks, e.g., for remote sensors, at trade shows, or in historic buildings. For economic reasons, it is often impossible to wire remote sensors for weather forecasts, earthquake detection, or environmental information, so wireless connections, e.g., via satellite, are used instead. Other examples of wireless networks are computers, sensors, or information displays in historical buildings, where excess cabling may destroy valuable walls or floors.

6. Infotainment: Wireless networks can provide up-to-date information at any appropriate location. A travel guide might tell you something about the history of a building (knowing where you are via GPS, contact with a local base station, or triangulation) while downloading information about a concert in that building the same evening via a local wireless network.

Introduction: Mobile computing services allow a mobile workforce to access a full range of corporate services and information from anywhere, at any time. They improve the productivity of a mobile workforce by connecting workers to corporate information systems and by automating paper-based processes, but these services have some limitations too.

  • Resource constraints: Battery
  • Interference: Radio transmission cannot be protected against interference using shielding, which results in higher loss rates for transmitted data or higher bit error rates.
  • Bandwidth: Although they are continuously increasing, transmission rates are still very low for wireless devices compared to desktop systems. Researchers look for more efficient communication protocols with low overhead.
  • Dynamic changes in communication environment: variations in signal power within a region, thus link delays and connection losses
  • Network Issues: discovery of the connection-service to destination and connection stability
  • Interoperability issues: the varying protocol standards
  • Security constraints: Not only can portable devices be stolen more easily, but the radio interface is also prone to the dangers of eavesdropping. Wireless access must always include encryption, authentication, and other security mechanisms that must be efficient and simple to use.

Introduction: The protocol stack implemented in the system according to the reference model is shown in the figure. End-systems, such as the PDA and computer in the example, need a full protocol stack comprising the application layer, transport layer, network layer, data link layer, and physical layer. Applications on the end-systems communicate with each other using the lower-layer services. Intermediate systems, such as the interworking unit, do not necessarily need all of the layers.

Physical layer:This is the lowest layer in a communication system and is responsible for the conversion of a stream of bits into signals that can be transmitted on the sender side. The physical layer of the receiver then transforms the signals back into a bit stream. For wireless communication, the physical layer is responsible for frequency selection, generation of the carrier frequency, signal detection (although heavy interference may disturb the signal), modulation of data onto a carrier frequency and (depending on the transmission scheme) encryption.

Data link layer: The main tasks of this layer include accessing the medium, multiplexing of different data streams, correction of transmission errors, and synchronization (i.e., detection of a data frame). Altogether, the data link layer is responsible for a reliable point-to-point connection between two devices or a point-to-multipoint connection between one sender and several receivers.

Network layer: This third layer is responsible for routing packets through a network or establishing a connection between two entities over many other intermediate systems. Important functions are addressing, routing, device location, and handover between different networks.

Transport layer: This layer is used in the reference model to establish an end-to-end connection.

Application layer: Finally, the applications (complemented by additional layers that can support applications) are situated on top of all transmission oriented layers. Functions are service location, support for multimedia applications, adaptive applications that can handle the large variations in transmission characteristics, and wireless access to the world-wide web using a portable device. 




Introduction: The most interesting interface in a GSM system is Um, the radio interface, as it comprises various multiplexing and media access mechanisms. GSM implements SDMA using cells with BTS and assigns an MS to a BTS.

Each of the 248 channels (124 uplink and 124 downlink) is additionally separated in time via a GSM TDMA frame, i.e., each 200 kHz carrier is subdivided into frames that are repeated continuously. The duration of a frame is 4.615 ms. A frame is again subdivided into 8 GSM time slots, where each slot represents a physical TDM channel and lasts for 577 μs. Each TDM channel occupies the 200 kHz carrier for 577 μs every 4.615 ms. Data is transmitted in small portions called bursts. The following figure shows a so-called normal burst as used for data transmission inside a time slot. As shown, the burst is only 546.5 μs long and contains 148 bits. The remaining 30.5 μs are used as guard space to avoid overlapping with other bursts due to different path delays and to give the transmitter time to turn on and off.
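A quick sanity check of these numbers (a sketch in Python, not part of the GSM specification):

# Worked numbers for the GSM TDMA frame described above
slot = 577e-6                    # one time slot: 577 microseconds
frame = 8 * slot                 # 8 slots per frame: ~4.615 ms (slot duration rounded)
burst = 546.5e-6                 # normal burst inside a slot, carrying 148 bits
guard = slot - burst             # ~30.5 microseconds of guard space
bit_time = burst / 148           # ~3.69 microseconds per transmitted bit
print(frame, guard, bit_time)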

The first and last three bits of a normal burst (tail) are all set to 0 and can be used to enhance the receiver performance. The training sequence in the middle of a slot is used to adapt the parameters of the receiver to the current path propagation characteristics and to select the strongest signal in case of multi-path propagation. A flag indicates whether the data field contains user or network control data.

Apart from the normal burst, ETSI (1993a) defines four more bursts for data transmission: a frequency correction burst allows the MS to correct the local oscillator to avoid interference with neighboring channels, a synchronization burst with an extended training sequence synchronizes the MS with the BTS in time, an access burst is used for the initial connection setup between MS and BTS, and finally a dummy burst is used if no data is available for a slot.  

Introduction: The fundamental feature of the GSM system is the automatic, worldwide localization of users, for which the system performs periodic location updates. The HLR always contains information about the current location, and the VLR currently responsible for the MS informs the HLR about location changes. Changing VLRs with uninterrupted availability is called roaming. Roaming can take place within the network of one provider, between two providers in one country, and also between different providers in different countries.

To locate and address an MS, several numbers are needed:

Mobile station international ISDN number (MSISDN):- The only important number for a user of GSM is the phone number. This number consists of the country code (CC), the national destination code (NDC) and the subscriber number (SN).

International mobile subscriber identity (IMSI): GSM uses the IMSI for internal unique identification of a subscriber. IMSI consists of a mobile country code (MCC), the mobile network code (MNC), and finally the mobile subscriber identification number (MSIN).

Temporary mobile subscriber identity (TMSI): To hide the IMSI, which would give away the exact identity of the user signaling over the air interface, GSM uses the 4 byte TMSI for local subscriber identification.

Mobile station roaming number (MSRN): Another temporary address that hides the identity and location of a subscriber is MSRN. The VLR generates this address on request from the MSC, and the address is also stored in the HLR. MSRN contains the current visitor country code (VCC), the visitor national destination code (VNDC), the identification of the current MSC together with the subscriber number. The MSRN helps the HLR to find a subscriber for an incoming call.
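To make the structure of these numbers concrete, here is a small illustrative sketch; all digits and field widths below are made up for demonstration and do not describe any real subscriber:

# Illustrative GSM addressing fields (all values are made up)
msisdn = {"CC": "49", "NDC": "171", "SN": "1234567"}        # user-visible phone number
imsi = {"MCC": "262", "MNC": "01", "MSIN": "5512345678"}    # internal subscriber identity
print("+" + msisdn["CC"] + msisdn["NDC"] + msisdn["SN"])    # +491711234567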

For a mobile terminated call (MTC), the following figure shows the different steps that take place:

step 1: User dials the phone number of a GSM subscriber.

step 2: The fixed network (PSTN) identifies the number as belonging to a user in the GSM network and forwards the call setup to the Gateway MSC (GMSC).

step 3: The GMSC identifies the HLR for the subscriber and signals the call setup to HLR

step 4: The HLR checks for number existence and its subscribed services and requests an MSRN from the current VLR.

step 5: VLR sends the MSRN to HLR

step 6: Upon receiving MSRN, the HLR determines the MSC responsible for MS and forwards the information to the GMSC

step 7: The GMSC can now forward the call setup request to the MSC indicated

step 8: The MSC requests the VLR for the current status of the MS

step 9: VLR sends the requested information

step 10: If MS is available, the MSC initiates paging in all cells it is responsible for.

step 11: The BTSs of all BSSs transmit the paging signal to the MS

steps 12 and 13: If the MS answers, the VLR performs security checks.

steps 14 to 17: The VLR then signals the MSC to set up the connection to the MS.


Introduction: Frequency division multiplexing (FDM) describes schemes to subdivide the frequency dimension into several non-overlapping frequency bands.

Frequency Division Multiple Access is a method employed to permit several users to transmit simultaneously on one satellite transponder by assigning a specific frequency within the channel to each user. Each conversation gets its own unique radio channel. The channels are relatively narrow, usually 30 kHz or less, and are defined as either transmit or receive channels. A full duplex conversation requires a transmit and receive channel pair. FDM is often used for simultaneous access to the medium by base station and mobile station in cellular networks, establishing a duplex channel. A scheme called frequency division duplexing (FDD) is used, in which the two directions, mobile station to base station and vice versa, are separated by different frequencies.

The two frequencies are also known as the uplink, i.e., from mobile station to base station or from ground control to satellite, and the downlink, i.e., from base station to mobile station or from satellite to ground control. The basic frequency allocation scheme for GSM is fixed and regulated by national authorities. All uplinks use the band between 890.2 and 915 MHz; all downlinks use 935.2 to 960 MHz. According to FDMA, the base station, shown on the right side, allocates a certain frequency for up- and downlink to establish a duplex channel with a mobile phone. Up- and downlink have a fixed relation: if the uplink frequency is fu = 890 MHz + n·0.2 MHz, the downlink frequency is fd = fu + 45 MHz, i.e., fd = 935 MHz + n·0.2 MHz for a certain channel n. The base station selects the channel. Each channel (uplink and downlink) has a bandwidth of 200 kHz.
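The fixed uplink/downlink relation can be expressed as a one-line mapping. A small sketch (the channel numbering here is simplified for illustration):

def gsm900_channel(n):
    # Map channel number n to (uplink, downlink) carrier frequencies in MHz
    fu = 890.0 + n * 0.2         # uplink: 890 MHz + n * 200 kHz
    return fu, fu + 45.0         # downlink is always 45 MHz above the uplink

print(gsm900_channel(1))         # (890.2, 935.2), the band edges quoted above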

This scheme also has disadvantages. While radio stations broadcast 24 hours a day, mobile communication typically takes place for only a few minutes at a time. Assigning a separate frequency for each possible communication scenario would be a tremendous waste of (scarce) frequency resources. Additionally, the fixed assignment of a frequency to a sender makes the scheme very inflexible and limits the number of senders.

FDMA/TDD in CT2

Using FDMA, the CT2 system splits the available bandwidth into radio channels in the assigned frequency domain. During the initial call setup, the handset scans the available channels and locks on to an unoccupied channel for the duration of the call. Using TDD (Time Division Duplexing), the call is split into time blocks that alternate between transmitting and receiving.

FDMA and Near-Far Problem

The near-far problem is one of detecting or filtering out a weaker signal amongst stronger signals. It is particularly difficult in CDMA systems, where transmitters share transmission frequencies and transmission time. In contrast, FDMA and TDMA systems are less vulnerable. FDMA systems offer different kinds of solutions to the near-far challenge. Here, the worst case to consider is recovery of a weak signal in a frequency slot next to a strong signal. Since both signals are present simultaneously as a composite at the input of a gain stage, the gain is set according to the level of the stronger signal, and the weak signal can be lost in the noise floor, even if subsequent stages have a low enough noise floor of their own.


Introduction: IP addresses are designed to work with stationary hosts because part of the address defines the network to which the host is attached. A host cannot change its IP address without terminating ongoing sessions and restarting them after it acquires a new address. Other link-layer mobility solutions exist but are not sufficient for the global Internet.

  • Mobility is the ability of a node to change its point of attachment while maintaining all existing communications and using the same IP address.
  • Nomadicity allows a node to move, but it must terminate all existing communications and can then initiate new connections with a new address.

Mobile IP is a network-layer solution for homogeneous and heterogeneous mobility on the global Internet. It is scalable, robust, and secure, and it allows nodes to maintain all ongoing communications while moving.

Design Goals: Mobile IP was developed as a means for transparently dealing with problems of mobile users. Mobile IP was designed to make the size and the frequency of required routing updates as small as possible. It was designed to make it simple to implement mobile node software. It was designed to avoid solutions that require mobile nodes to use multiple addresses.

Requirements: There are several requirements for Mobile IP to make it as a standard. Some of them are:

1.  Compatibility: The architecture of the Internet is huge, and a new standard cannot introduce changes to the applications or network protocols already in use. Mobile IP has to be integrated into existing operating systems. For routers, too, it should be possible to enhance their capabilities to support mobility rather than replace them, which would be practically impossible. Mobile IP must not require special media or MAC/LLC protocols, so it must use the same interfaces and mechanisms to access the lower layers as IP does. Finally, end-systems enhanced with a mobile IP implementation should still be able to communicate with fixed systems without mobile IP.

2. Transparency: Mobility remains invisible to many higher-layer protocols and applications. Higher layers continue to work even if the mobile computer has changed its point of attachment to the network; they may only notice a lower bandwidth and some interruption of service. As many of today’s applications have not been designed for use in mobile environments, the effects of mobility will be higher delay and lower bandwidth.

3. Scalability and efficiency: The efficiency of the network should not be affected when a new mechanism is introduced into the Internet. Enhancing IP for mobility must not generate many new messages flooding the whole network. Special care has to be taken considering the lower bandwidth of wireless links, since many mobile systems have a wireless link to an attachment point; only a few additional packets should be necessary between a mobile system and a node in the network. It is indispensable for a mobile IP to be scalable over a large number of participants in the whole Internet, throughout the world.

4. Security: Mobility poses many security problems. A minimum requirement is the authentication of all messages related to the management of Mobile IP. If the IP layer forwards a packet to a mobile host, it must be certain that this host really is the receiver of the packet. However, the IP layer can only guarantee that the IP address of the receiver is correct; there is no way to prevent faked IP addresses or other attacks.

The goal of a mobile IP can be summarized as: ‘supporting end-system mobility while maintaining scalability, efficiency, and compatibility in all respects with existing applications and Internet protocols’.
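The core mechanism behind this goal is encapsulation: a home agent intercepts packets sent to the mobile node's home address and tunnels them to its current care-of address. A toy sketch of the idea, with dicts standing in for real IP headers and made-up documentation addresses:

# Toy sketch of Mobile IP tunnelling (dicts stand in for real IP headers)
def encapsulate(packet, care_of_addr, home_agent):
    # Home agent wraps the original packet in a new header toward the care-of address
    return {"src": home_agent, "dst": care_of_addr, "payload": packet}

def decapsulate(tunneled):
    return tunneled["payload"]   # foreign agent / mobile node unwraps the packet

pkt = {"src": "198.51.100.7", "dst": "203.0.113.42", "payload": "data"}
tunneled = encapsulate(pkt, care_of_addr="192.0.2.99", home_agent="203.0.113.1")
assert decapsulate(tunneled) == pkt   # original packet arrives unchanged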

 

Introduction: Neither I-TCP nor snooping TCP helps much if a mobile host gets disconnected. The M-TCP (mobile TCP) approach has the same goals as I-TCP and snooping TCP: to prevent the sender window from shrinking when bit errors or disconnection, rather than congestion, cause the current problems. M-TCP aims to improve overall throughput, to lower the delay, to maintain end-to-end semantics of TCP, and to provide a more efficient handover. Additionally, M-TCP is especially adapted to the problems arising from lengthy or frequent disconnections. M-TCP splits the TCP connection into two parts, as I-TCP does. An unmodified TCP is used on the standard host-supervisory host (SH) connection, while an optimized TCP is used on the SH-MH connection.

The SH monitors all packets sent to the MH and the ACKs returned from the MH. If the SH does not receive an ACK for some time, it assumes that the MH is disconnected. It then chokes the sender by setting the sender’s window size to 0. Setting the window size to 0 forces the sender to go into persistent mode, i.e., the state of the sender will not change no matter how long the receiver is disconnected. This means that the sender will not try to retransmit data. As soon as the SH (either the old SH or a new SH) detects connectivity again, it reopens the window of the sender to the old value, and the sender can continue sending at full speed. This mechanism does not require changes to the sender’s TCP. The wireless side uses an adapted TCP that can recover from packet loss much faster. This modified TCP does not use slow start; thus, M-TCP needs a bandwidth manager to implement fair sharing over the wireless link.
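A toy sketch of the supervisory host behaviour just described; the timeout value and class interface are illustrative assumptions, not part of any M-TCP specification:

import time

class SupervisoryHost:
    # Toy model: choke the fixed sender when the mobile host (MH) seems disconnected
    def __init__(self, timeout=2.0):
        self.timeout = timeout
        self.last_ack = time.monotonic()
        self.window = 0

    def on_ack_from_mh(self, window):
        self.last_ack = time.monotonic()
        self.window = window          # remember the window to restore later

    def advertised_window(self):
        if time.monotonic() - self.last_ack > self.timeout:
            return 0                  # window 0 forces the sender into persistent mode
        return self.window            # connectivity detected again: reopen old window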

Advantages of M-TCP:

  It maintains the TCP end-to-end semantics. The SH does not send any ACK itself but forwards the ACKs from the MH.

  If the MH is disconnected, it avoids useless retransmissions, slow starts or breaking connections by simply shrinking the sender’s window to 0.

  As no buffering is done as in I-TCP, there is no need to forward buffers to a new SH. Lost packets will be automatically retransmitted to the SH.

Disadvantages of M-TCP:

As the SH does not act as proxy as in I-TCP, packet loss on the wireless link due to bit errors is propagated to the sender. M-TCP assumes low bit error rates, which is not always a valid assumption.

A modified TCP on the wireless link not only requires modifications to the MH protocol software but also new network elements like the bandwidth manager.

 

 

Introduction: Assume an application running on the mobile host that sends a short request to a server from time to time, to which the server responds with a short message, and that requires reliable TCP transport of the packets. Using normal TCP for this is inefficient because of the overhead involved. Standard TCP is made up of three phases: setup, data transfer, and release. First, TCP uses a three-way handshake to establish the connection. At least one additional packet is usually needed for transmission of the request, and three more packets are required to close the connection via a three-way handshake.

So, for sending one data packet, TCP may need seven packets altogether. This kind of overhead is acceptable for long sessions in fixed networks, but is quite inefficient for short messages or sessions in wireless networks. This led to the development of transaction-oriented TCP (T/TCP).

T/TCP can combine the packets for connection establishment and connection release with user data packets. This can reduce the number of packets down to two instead of seven. The obvious advantage for certain applications is the reduction in the overhead which standard TCP has for connection setup and connection release. A disadvantage is that it requires changes to the software in the mobile host and all correspondent hosts. This solution also no longer hides mobility. Finally, T/TCP exhibits several security problems.

 

 

Introduction: Routing in mobile ad hoc networks is an important issue, as these networks do not have a fixed infrastructure and routing requires distributed and cooperative actions from all nodes in the network. MANETs provide point-to-point routing similar to Internet routing. The major difference between routing in a MANET and in the regular Internet is the route discovery mechanism. Internet routing protocols such as RIP or OSPF have relatively long convergence times, which is acceptable for a wired network that has infrequent topology changes.

1.       Based on the information used to build routing tables :

Shortest distance algorithms: algorithms that use distance information to build routing tables.

Link state algorithms: algorithms that use connectivity information to build a topology graph that is used to build routing tables.

2.       Based on when routing tables are built:

Proactive algorithms: maintain routes to destinations even if they are not needed. Some of the examples are Destination Sequenced Distance Vector (DSDV), Wireless Routing Algorithm (WRP), Global State Routing (GSR), Source-tree Adaptive Routing (STAR), Cluster-Head Gateway Switch Routing (CGSR), Topology Broadcast Reverse Path Forwarding (TBRPF), Optimized Link State Routing (OLSR) etc.

  • Always maintain routes: little or no delay for route determination

  • Consume bandwidth to keep routes up-to-date
  • Maintain routes which may never be used
  • Advantages: low route latency, State information, QoS guarantee related to connection set-up or other real-time requirements
  • Disadvantages: high overhead (periodic updates) and route repair depends on update frequency

Reactive algorithms: maintain routes to destinations only when they are needed. Examples are Dynamic Source Routing (DSR), Ad hoc On-demand Distance Vector (AODV), Temporally Ordered Routing Algorithm (TORA), Associativity-Based Routing (ABR), etc.

  • Only obtain route information when needed
  • Advantages: no overhead from periodic updates; scalability as long as there is only light traffic and low mobility
  • Disadvantages: high route latency, although route caching can reduce latency

Hybrid algorithms: maintain routes to nearby nodes even if they are not needed and maintain routes to far away nodes only when needed. Example is Zone Routing Protocol (ZRP).

Which approach achieves a better trade-off depends on the traffic and mobility patterns.
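As an illustration of the reactive idea, here is a toy sketch of on-demand route discovery by flooding, in the spirit of DSR: the route request accumulates the path it has travelled, and the first copy to reach the destination yields a minimum-hop route. The topology below is made up:

from collections import deque

def discover_route(adj, src, dst):
    # Breadth-first flooding: each "request" carries the path it travelled so far
    seen, queue = {src}, deque([[src]])
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path                    # first route found has minimum hops
        for neighbor in adj[path[-1]]:
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(path + [neighbor])
    return None                            # destination unreachable

adj = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}
print(discover_route(adj, "A", "D"))       # e.g. ['A', 'B', 'D']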

 Introduction: The Wireless Application Protocol (WAP) is an open, global specification that empowers mobile users with wireless devices to easily access and interact with information and services instantly.

WAP is a global standard and is not controlled by any single company. Ericsson, Nokia, Motorola, and Unwired Planet founded the WAP Forum in the summer of 1997 with the initial purpose of defining an industry-wide specification for developing applications over wireless communications networks. The WAP specifications define a set of protocols in application, session, transaction, security, and transport layers, which enable operators, manufacturers, and applications providers to meet the challenges in advanced wireless service differentiation and fast/flexible service creation.

All solutions must be:

1.       interoperable, i.e., allowing terminals and software from different vendors to communicate with networks from different providers

2.       scalable, i.e., protocols and services should scale with customer needs and number of customers

3.       efficient, i.e., provision of QoS suited to the characteristics of the wireless and mobile networks;

4.       reliable, i.e., provision of a consistent and predictable platform for deploying services; and

5.       secure, i.e., preservation of the integrity of user data, protection of devices and services from security problems.

Uses of WAP:

In the past, wireless Internet access has been limited by the capabilities of handheld devices and wireless networks.

WAP utilizes Internet standards such as XML, user datagram protocol (UDP), and Internet protocol (IP). Many of the protocols are based on Internet standards such as hypertext transfer protocol (HTTP) and TLS but have been optimized for the unique constraints of the wireless environment: low bandwidth, high latency, and less connection stability.

Internet standards such as hypertext markup language (HTML), HTTP, TLS and transmission control protocol (TCP) are inefficient over mobile networks, requiring large amounts of mainly text-based data to be sent. Standard HTML content cannot be effectively displayed on the small-size screens of pocket-sized mobile phones and pagers.

WAP utilizes binary transmission for greater compression of data and is optimized for long latency and low bandwidth. WAP sessions cope with intermittent coverage and can operate over a wide variety of wireless transports.

WML and wireless markup language script (WML Script) are used to produce WAP content. They make optimum use of small displays, and navigation may be performed with one hand. WAP content is scalable from a two-line text display on a basic device to a full graphic screen on the latest smart phones and communicators.

The lightweight WAP protocol stack is designed to minimize the required bandwidth and maximize the number of wireless network types that can deliver WAP content. Multiple networks are targeted, including global system for mobile communications (GSM) 900, 1,800, and 1,900 MHz; interim standard (IS)–136; digital European cordless communication (DECT); time-division multiple access (TDMA); personal communications service (PCS); FLEX; and code division multiple access (CDMA). All network technologies and bearers will also be supported, including short message service (SMS), USSD, circuit-switched cellular data (CSD), cellular digital packet data (CDPD), and general packet radio service (GPRS).

As WAP is based on a scalable layered architecture, each layer can develop independently of the others. This makes it possible to introduce new bearers or to use new transport protocols without major changes in the other layers.

WAP will provide multiple applications, for business and customer markets such as banking, corporate database access, and a messaging interface.

 

Introduction: "Bluetooth" was the nickname of HaraldBlåtland II, king of Denmark from 940 to 981, who united all of Denmark and part of Norway under his rule. Bluetooth is a proprietary open wireless technology standard for exchanging data over short distances (using short wavelength radio transmissions in the ISM band from 2400-2480 MHz) from fixed and mobile devices, creating personal area networks (PANs) with high levels of security. The Bluetooth technology aims at so-called ad-hoc piconets, which are local area networks with a very limited coverage and without the need for an infrastructure.

Bluetooth Features

  1. Bluetooth is wireless and automatic. You don't have to keep track of cables, connectors, and connections, and you don't need to do anything special to initiate communications. Devices find each other automatically and start conversing without user input, except where authentication is required; for example, users must log in to use their email accounts.
  2. Bluetooth is inexpensive. Market analysts peg the cost of incorporating Bluetooth technology into a PDA, cell phone, or other product as only a small addition to the product's price.
  3. The ISM band that Bluetooth uses is regulated, but unlicensed. Governments have converged on a single standard, so it's possible to use the same devices virtually wherever you travel, and you don't need to obtain legal permission in advance to begin using the technology.
  4. Bluetooth handles both data and voice. Its ability to handle both kinds of transmissions simultaneously makes possible such innovations as a mobile hands-free headset for voice with applications that print to fax, and that synchronize the address books on your PDA, your laptop, and your cell phone.
  5. Signals are omni-directional and can pass through walls and briefcases. Communicating devices don't need to be aligned and don't need an unobstructed line of sight like infrared.
  6. Bluetooth uses frequency hopping. Its spread-spectrum approach greatly reduces the risk that communications will be intercepted; a toy illustration follows this list.
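Both ends derive the same pseudo-random channel sequence from shared state, so they stay in step while an eavesdropper without that state sees apparently random channel changes. Real Bluetooth derives the sequence from the master's clock and device address and hops 1,600 times per second; the seeded generator below is only a stand-in:

import random

def hop_sequence(shared_seed, hops):
    # Same seed on both ends -> identical hop sequence over the 79 channels
    rng = random.Random(shared_seed)
    return [2402 + rng.randrange(79) for _ in range(hops)]   # carrier in MHz

print(hop_sequence(0xC0FFEE, 8))   # both devices compute the same list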

Bluetooth Applications

1.  File transfer.

2. Ad-hoc networking: Communicating devices can spontaneously form a community of networks that persists only as long as it is needed.

3. Device synchronization: Seamless connectivity among PDAs, computers, and mobile phones allows applications to update information on multiple devices automatically when data on any one device changes.

4. Peripheral connectivity.

5. Car kits: Hands-free packages enable users to access phones and other devices without taking their hands off the steering wheel

6. Mobile payments: Your Bluetooth-enabled phone can communicate with a Bluetooth-enabled vending machine to buy a can of Diet Pepsi, and put the charge on your phone bill.

The 802.11b protocol is designed to connect relatively large devices with lots of power and speed, such as desktops and laptops, where devices communicate at up to 11 Mbit/sec, at greater distances (up to 300 feet, or 100 meters). By contrast, Bluetooth is designed to connect small devices like PDAs, mobile phones, and peripherals at slower speeds (1 Mbit/sec), within a shorter range (30 feet, or 10 meters), which reduces power requirements. Another major difference is that 802.11b wasn't designed for voice communications, while any Bluetooth connection can support both data and voice communications. 

Introduction: Sun Microsystems defines J2ME as "a highly optimized Java run-time environment targeting a wide range of consumer products, including pagers, cellular phones, screen-phones, digital set-top boxes and car navigation systems." J2ME brings the cross-platform functionality of the Java language to smaller devices, allowing mobile wireless devices to share applications.

Java 2 Micro Edition maintains the qualities that Java technology has become known for:

  • built-in consistency across products in terms of running anywhere, anytime, on any device;
  • the power of a high-level object-oriented programming language with a large developer base;
  • portability of code;
  • safe network delivery; and
  • upward scalability with J2SE and J2EE.

While connected consumer devices such as cell phones, pagers, personal organizers and set-top boxes have many things in common, they are also diverse in form, function and features. Information appliances tend to be special-purpose, limited-function devices. To address this diversity, an essential requirement for J2ME is not only small size but also modularity and customizability. The J2ME architecture is modular and scalable so that it can support the kinds of flexible deployment demanded by the consumer and embedded markets. To support this kind of customizability and extensibility, two essential concepts are defined by J2ME:

Configuration. A J2ME configuration defines a minimum platform for a “horizontal” category or grouping of devices, each with similar requirements on total memory budget and processing power. A configuration defines the Java language and virtual machine features and minimum class libraries that a device manufacturer or a content provider can expect to be available on all devices of the same category.

Profile. A J2ME profile is layered on top of (and thus extends) a configuration. A profile addresses the specific demands of a certain “vertical” market segment or device family. The main goal of a profile is to guarantee interoperability within a certain vertical device family or domain by defining a standard Java platform for that market. Profiles typically include class libraries that are far more domain-specific than the class libraries provided in a configuration.

 

Introduction: A configuration is a subset of a profile. A configuration defines a Java platform for a “horizontal” category or grouping of devices with similar requirements on total memory budget and other hardware capabilities.

More specifically, a configuration:

  • specifies the Java programming language features supported,
  • specifies the Java virtual machine features supported,
  • Specifies the basic Java libraries and APIs supported.

To avoid fragmentation, there will be a very limited number of J2ME configurations. Currently, the goal is to define two standard J2ME configurations:

  • Connected, Limited Device Configuration (CLDC). The market consisting of personal, mobile, connected information devices is served by the CLDC. This configuration includes some new classes, not drawn from the J2SE APIs, designed specifically to fit the needs of small-footprint devices. It is used specifically with the KVM for 16-bit or 32-bit devices with limited amounts of memory. This is the configuration (and the virtual machine) used for developing small J2ME applications.
  • Connected Device Configuration (CDC). The market consisting of shared, fixed, connected information devices is served by the Connected Device Configuration (CDC). To ensure upward compatibility between configurations, the CDC shall be a superset of the CLDC. The CDC is used with the C virtual machine (CVM) and targets 32-bit architectures requiring more than 2 MB of memory.

 

 Introduction: The J2ME framework provides the concept of a profile to make it possible to define Java platforms for specific vertical markets. Profiles can serve two distinct portability requirements:

  • A profile provides a complete toolkit for implementing applications for a particular kind of device, such as a pager, set-top box, cell phone, washing machine, or interactive electronic toy.
  • A profile may also be created to support a significant, coherent group of applications that might be hosted on several categories of devices.

The Foundation Profile contains the APIs of J2SE without GUIs. The Personal Profile is a profile for embedded devices. Two profiles have been defined for J2ME that are built on the CLDC: KJava and the Mobile Information Device Profile (MIDP). These profiles are geared toward smaller devices.

MIDP 3.0 is the latest profile version, a profile for special-featured phones and handheld devices. It provides improved UIs, UI extensibility, and interoperability between devices. It supports multiple network interfaces in a device, IPv6, large display devices, and high-performance games. Development tools are used to develop MIDP applications. MIDP applications are composed of two parts (an example descriptor follows the list):

  • JAR File – Contains all of the classes and resources used by the application
  • JAD File – Application descriptor, describes how to run the MIDP application
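For illustration, a minimal JAD descriptor might look like the following; the application name, vendor, JAR size, and URL are hypothetical:

MIDlet-Name: HelloMIDlet
MIDlet-Version: 1.0.0
MIDlet-Vendor: Example Vendor
MIDlet-1: HelloMIDlet, , HelloMIDlet
MIDlet-Jar-URL: HelloMIDlet.jar
MIDlet-Jar-Size: 3072
MicroEdition-Configuration: CLDC-1.1
MicroEdition-Profile: MIDP-2.0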

K Virtual Machine

The KVM is a compact, portable Java virtual machine specifically designed from the ground up for small, resource-constrained devices. The high-level design goal for the KVM was to create the smallest possible “complete” Java virtual machine that would maintain all the central aspects of the Java programming language, but would run in a resource-constrained device with only a few hundred kilobytes total memory budget. More specifically, the KVM was designed to be:

  • small, with a static memory footprint of the virtual machine core in the range of 40 to 80 kilobytes (depending on compilation options and the target platform),
  • clean, well-commented, and highly portable,
  • modular and customizable,
  • as “complete” and “fast” as possible without sacrificing the other design goals.

The “K” in KVM stands for “kilo.” It was so named because its memory budget is measured in kilobytes (whereas desktop systems are measured in megabytes). KVM is suitable for 16/32-bit RISC/CISC microprocessors with a total memory budget of no more than a few hundred kilobytes (potentially less than 128 kilobytes). This typically applies to digital cellular phones, pagers, personal organizers, and small retail payment terminals. 


Introduction: The power of the radio signals transmitted by the BS decays as the signals travel away from it. A minimum signal strength (say, x dB) is needed for the signal to be detected by the MS, or mobile sets, which may be hand-held personal units or units installed in vehicles. The region over which the signal strength lies above this threshold value x dB is known as the coverage area of a BS; considering the BS to be an isotropic radiator, it would ideally be a circular region. Such a circle, which gives this actual radio coverage, is called the footprint of a cell (in reality, it is amorphous).

It might so happen that there is either an overlap between two such side-by-side circles or a gap between the coverage areas of two adjacent circles. This is shown in Figure 3.1. Such a circular geometry, therefore, cannot serve as a regular shape to describe cells. We need a regular shape for cellular design over a territory, and this can be served by three regular polygons, namely the equilateral triangle, the square, and the regular hexagon, which can cover the entire area without any overlap or gaps. Along with its regularity, a cell must also be designed to be reliable, i.e., to support even the weakest mobile, which occurs at the edges of the cell. For a given distance between the center and the farthest point in the cell, a regular hexagon covers the maximum area, as the quick check below shows. Hence regular hexagonal geometry is used for the cells in mobile communication.
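For a fixed distance R from the center to the farthest point (the circumradius), the areas of the three candidate shapes can be computed as follows (a quick check, assuming unit R):

import math

R = 1.0  # distance from the cell center to its farthest point
areas = {
    "equilateral triangle": 3 * math.sqrt(3) / 4 * R**2,   # ~1.30 R^2
    "square": 2 * R**2,                                    # 2.00 R^2
    "regular hexagon": 3 * math.sqrt(3) / 2 * R**2,        # ~2.60 R^2 (largest)
}
print(areas)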


Introduction: Susceptibility and interference problems associated with mobile communications equipment arise from the problem of time congestion within the electromagnetic spectrum. Interference is the limiting factor in the performance of cellular systems. It can occur from a clash with another mobile in the same cell or because of a call in an adjacent cell. There can be interference between base stations operating in the same frequency band, or from any non-cellular system's energy leaking inadvertently into the frequency band of the cellular system. If there is interference in the voice channels, cross talk is heard and appears as noise between the users.

Interference in the control channels leads to missed and blocked calls because of errors in the digital signaling. Interference is more severe in urban areas because of the greater RF noise and the greater density of mobiles and base stations. Interference can be divided into two types: co-channel interference and adjacent channel interference.

Co-channel interference (CCI):

For efficient use of the available spectrum, it is necessary to reuse frequency bandwidth over relatively small geographical areas. However, increasing frequency reuse also increases interference, which decreases system capacity and service quality. The cells in which the same set of frequencies is used are called co-channel cells. Co-channel interference is the cross talk between two different radio transmitters using the same radio frequency, as is the case with co-channel cells. CCI can arise from adverse weather conditions, poor frequency planning, or an overly crowded radio spectrum.

If the cell size and the power transmitted at the base stations are the same, then CCI becomes independent of the transmitted power and depends on the radius of the cell (R) and the distance between the centres of the interfering co-channel cells (D). If the D/R ratio is increased, the effective distance between the co-channel cells increases and the interference decreases. The parameter Q is called the frequency reuse ratio and is related to the cluster size N. For hexagonal geometry,

Q = D/R = sqrt(3N)

From the above equation, a small value of Q means a small cluster size N and an increase in cellular capacity, while a large Q decreases system capacity but improves transmission quality. The value of N must therefore be chosen as a careful trade-off, the proof of which is given in the first section. The signal-to-interference ratio (SIR) for a mobile receiver which monitors the forward channel can be calculated as

S/I = S / ( Σ(i = 1 to i0) Ii )

where i0 is the number of co-channel interfering cells, S is the desired signal power from the serving base station, and Ii is the interference power caused by the i-th interfering co-channel base station. To evaluate this equation from power calculations, we need to look at the signal power characteristics. The average power in the mobile radio channel decays as a power law of the separation distance between transmitter and receiver. The received power Pr at a distance d can be approximated as

Pr = P0 (d/d0)^(-n)

and, in dB form, as

Pr(dB) = P0(dB) - 10 n log10(d/d0)

where P0 is the power received at a close-in reference point in the far-field region at a small distance d0 from the transmitting antenna, and n is the path loss exponent. Let us calculate the SIR for this system. If Di is the distance of the i-th interferer from the mobile, the received power at a given mobile due to the i-th interfering cell is proportional to (Di)^(-n) (the value of n varies between 2 and 4 in urban cellular systems).
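
Under the usual simplifying assumptions (i0 = 6 equidistant first-tier interferers and equal transmit powers), the SIR reduces to (sqrt(3N))^n / i0. The following minimal sketch, with illustrative values, tabulates this for several cluster sizes:

    # First-tier SIR approximation S/I = (sqrt(3*N))**n / i0, assuming
    # i0 = 6 equidistant co-channel interferers (values are illustrative).
    import math

    def sir_db(N, n=4, i0=6):
        q = math.sqrt(3 * N)          # frequency reuse ratio Q = D/R
        return 10 * math.log10(q**n / i0)

    for N in (3, 4, 7, 12):
        print(f"N = {N:2d}: Q = {math.sqrt(3*N):.2f}, S/I = {sir_db(N):.1f} dB")
    # N = 7 with n = 4 gives about 18.7 dB, the classic design figure.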

Introduction: The increased number of handoffs required when sectoring is employed results in an increased load on the switching and control link elements of the mobile system. To overcome this problem, the microcell zone concept has been proposed. As shown in Figure 3.10, this scheme divides a cell into three microcell zones, with each of the three zone sites connected to the base station and sharing the same radio equipment. Note that all the microcell zones within a cell use the same frequency as that cell; hence, no handovers occur between microcells.

Thus when a mobile user moves between two microcell zones of the cell, the BS simply switches the channel to a different zone site and no physical re-allotment of channel takes place.

Locating the mobile unit within the cell: An active mobile unit sends a signal to all zone sites, which in turn send a signal to the BS. A zone selector at the BS uses these signals to select a suitable zone to serve the mobile unit, choosing the zone with the strongest signal.
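
A minimal sketch of such a zone selector (the zone names and dBm values are illustrative assumptions):

    # The BS picks the zone site reporting the strongest received signal.
    def select_zone(reports):
        """reports: dict mapping zone-site id -> received signal strength (dBm)."""
        return max(reports, key=reports.get)

    reports = {"zone-1": -92.0, "zone-2": -78.5, "zone-3": -85.2}
    print("serving zone:", select_zone(reports))   # -> zone-2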

Base Station Signals: When a call is made to a cellular phone, the system already knows the cell location of that phone. The base station of that cell knows in which zone, within that cell, the cellular phone is located. Therefore, when it receives the signal, the base station transmits it to the suitable zone site. The zone site receives the cellular signal from the base station and transmits it to the mobile phone after amplification. By confining the power transmitted to the mobile phone, co-channel interference between the zones is reduced and the capacity of the system is increased.

Benefits of the micro-cell zone concept:

1.       Interference is reduced in this case as compared to the scheme in which the cell size is reduced.

2.       Handoffs are reduced (also compared to decreasing the cell size) since the microcells within the cell operate at the same frequency; no handover occurs when the mobile unit moves between the microcells.

3.       Size of the zone apparatus is small. The zone site equipment being small can be mounted on the side of a building or on poles.

4.       System capacity is increased. The microcell system knows where to locate the mobile unit in a particular zone of the cell and delivers the power to that zone. Since the signal power is reduced, the microcells can be placed closer together, resulting in an increased system capacity. However, in a microcellular system, the power transmitted to a mobile phone within a microcell has to be precise; too much power results in interference between microcells, while with too little power the signal might not reach the mobile phone. This is a drawback of microcellular systems, since a change in the surroundings (say, a new building within a microcell) will require a change in the transmission power.



Introduction: Inter-symbol interference (ISI) has been identified as one of the major obstacles to high-speed data transmission over mobile radio channels. If the modulation bandwidth exceeds the coherence bandwidth of the radio channel (i.e., frequency-selective fading), modulation pulses are spread in time, causing ISI. An equalizer at the front end of a receiver compensates for the average range of expected channel amplitude and delay characteristics.

As the mobile fading channels are random and time varying, equalizers must track the time-varying characteristics of the mobile channel and therefore should be time varying or adaptive. An adaptive equalizer has two phases of operation: training and tracking. These are as follows.

Training Mode:

  • Initially, a known fixed-length training sequence is sent by the transmitter so that the receiver's equalizer can average to a proper setting.
  • The training sequence is typically a pseudo-random binary signal or a fixed, prescribed bit pattern.
  • The training sequence is designed to permit the equalizer at the receiver to acquire the proper filter coefficients under the worst possible channel conditions. An adaptive filter at the receiver thus uses a recursive algorithm to evaluate the channel and estimate filter coefficients that compensate for the channel.

Tracking Mode:

  • When the training sequence is finished, the filter coefficients are near their optimal values.
  • Immediately following the training sequence, user data is sent.
  • When the user data are received, the adaptive algorithm of the equalizer tracks the changing channel.
  • As a result, the adaptive equalizer continuously changes its filter characteristics over time.

A Mathematical Framework

The signal received by the equalizer is given by

x(t) = d(t) ⊗ h(t) + nb(t)

where ⊗ denotes convolution, d(t) is the transmitted signal, h(t) is the combined impulse response of the transmitter, channel, and the RF/IF section of the receiver, and nb(t) denotes the baseband noise.

If the impulse response of the equalizer is heq(t), the output of the equalizer is

y(t) = x(t) ⊗ heq(t) = d(t) ⊗ h(t) ⊗ heq(t) + nb(t) ⊗ heq(t)

However, the desired output of the equalizer is d(t), the original source data. Assuming nb(t) = 0, the requirement y(t) = d(t) leads to the following equation:

h(t) ⊗ heq(t) = δ(t)

The main goal of any equalization process is to satisfy this equation optimally. In the frequency domain it can be written as

Heq(f) H(f) = 1

which indicates that an equalizer is actually an inverse filter of the channel. If the channel is frequency selective, the equalizer enhances the frequency components with small amplitudes and attenuates the strong frequencies in the received frequency spectrum in order to provide a flat composite received frequency response and a linear phase response. For a time-varying channel, the equalizer is designed to track the channel variations so that the above equation is approximately satisfied.
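
As an illustration of the inverse-filter relation Heq(f) = 1/H(f), the following minimal NumPy sketch equalizes a noiseless, assumed 3-tap channel in the frequency domain (circular convolution is used so that the FFT relation is exact; all values are illustrative):

    import numpy as np

    rng = np.random.default_rng(0)
    d = rng.choice([-1.0, 1.0], size=64)          # transmitted symbols d(t)
    h = np.array([1.0, 0.5, 0.2])                 # assumed channel impulse response h(t)
    H = np.fft.fft(h, d.size)                     # channel frequency response H(f)

    x = np.real(np.fft.ifft(np.fft.fft(d) * H))   # received signal (circular conv., nb = 0)
    y = np.real(np.fft.ifft(np.fft.fft(x) / H))   # equalizer output: Heq(f) = 1 / H(f)

    print("max |y - d| =", np.max(np.abs(y - d))) # ~1e-15: d(t) is recovered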

 

Introduction:

1. Space Diversity: A method of transmission or reception, or both, in which the effects of fading are minimized by the simultaneous use of two or more physically separated antennas, ideally separated by half a wavelength or more. Signals received from spatially separated antennas have uncorrelated envelopes.

Space diversity reception methods can be classified into four categories: selection, feedback or scanning, maximal ratio combining and equal gain combining.

(a) Selection Diversity:

The basic principle of this type of diversity is to select the best signal among all the signals received from the different branches at the receiving end. Selection diversity is the simplest diversity technique. Figure 7.3 shows a block diagram of this method, where M demodulators are used to provide M diversity branches whose gains are adjusted to give the same average SNR for each branch. The receiver branch having the highest instantaneous SNR is connected to the demodulator.

Let M independent Rayleigh fading channels be available at a receiver. Each channel is called a diversity branch, and let each branch have the same average SNR. The signal-to-noise ratio is defined as

γ = (Eb/N0) α²

where Eb is the average carrier energy, N0 is the noise power spectral density, and α is a random variable used to represent the amplitude values of the fading channel.

The instantaneous SNR (γi) is usually defined as γi = instantaneous signal power per branch / mean noise power per branch. For Rayleigh fading channels, α has a Rayleigh distribution, so α² and consequently γi have a chi-square distribution with two degrees of freedom. The probability density function for such a channel is

p(γi) = (1/Γ) exp(−γi/Γ)

The probability that any single branch has an instantaneous SNR less than some defined threshold γ is

Pr[γi ≤ γ] = 1 − exp(−γ/Γ)

and the probability that all M independent branches simultaneously fail to reach γ is (1 − exp(−γ/Γ))^M. The average SNR of the selected branch can then be expressed as

γ̄ = Γ ∫(0 to ∞) x M (1 − e^(−x))^(M−1) e^(−x) dx = Γ Σ(k = 1 to M) 1/k

where x = γ/Γ and Γ is the average SNR for a single branch when no diversity is used.

This equation shows an average improvement in the link margin without requiring extra transmitter power or complex circuitry, and the technique is easy to implement as it needs only a monitoring station and an antenna switch at the receiver. It is not an optimal diversity technique, however, as it does not use all the possible branches simultaneously.
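
The harmonic-sum result above can be checked with a short Monte Carlo simulation; the sketch below is illustrative only (branch SNRs are drawn as exponentials with mean Γ, which follows from the Rayleigh amplitude):

    # Selection diversity: pick the strongest of M exponential branch SNRs.
    # The simulated mean should approach GAMMA * (1 + 1/2 + ... + 1/M).
    import numpy as np

    GAMMA, M, TRIALS = 1.0, 4, 200_000
    snr = np.random.default_rng(1).exponential(GAMMA, size=(TRIALS, M))
    simulated = snr.max(axis=1).mean()
    theory = GAMMA * sum(1.0 / k for k in range(1, M + 1))
    print(f"simulated {simulated:.3f} vs theory {theory:.3f}")  # ~2.083 for M = 4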

Introduction: Specific waveforms are required to represent a zero and a one uniquely so that a sequence of bits is coded into electrical pulses. This is known as line coding. There are various ways to accomplish this and the different forms are summarized below.

1. Non-return to zero level (NRZ-L): 1 forces a high while 0 forces a low.

2. Non-return to zero mark (NRZ-M): 1 forces a transition (alternately negative and positive) while 0 causes no transition.

3. Non-return to zero space (NRZ-S): 0 forces a transition (alternately negative and positive) while 1 causes no transition.

4. Return to zero (RZ): 1 goes high for half a period while 0 remains at zero state.

5. Bi-phase-L (Manchester): 1 forces a positive mid-bit transition while 0 forces a negative mid-bit transition. For consecutive bits of the same type, an additional transition occurs at the beginning of the bit period.

6. Bi-phase-M: There is always a transition in the beginning of a bit interval. 1 forces a transition in the middle of the bit while 0 does nothing.

7. Bi-phase-S: There is always a transition in the beginning of a bit interval. 0 forces a transition in the middle of the bit while 1 does nothing.

8. Differential Manchester: There is always a transition in the middle of a bit interval. 0 forces a transition in the beginning of the bit while 1 does nothing.

9. Bipolar/Alternate mark inversion (AMI): 1 forces a positive or negative pulse for half a bit period and they alternate while 0 does nothing.

All these schemes are shown in Figure 6.5. 
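
As an illustration, the following sketch generates two-samples-per-bit waveforms for three of the codes above; the ±1/0 levels and the Manchester transition convention are assumptions for the example:

    def nrz_l(bits):
        # 1 -> high (+1), 0 -> low (-1), held for the whole bit period
        return [(+1 if b else -1) for b in bits for _ in (0, 1)]

    def rz(bits):
        # 1 -> high for half a period then zero; 0 -> stays at zero
        return [s for b in bits for s in ((+1, 0) if b else (0, 0))]

    def manchester(bits):
        # 1 -> positive mid-bit transition (low->high), 0 -> negative (high->low)
        return [s for b in bits for s in ((-1, +1) if b else (+1, -1))]

    bits = [1, 0, 1, 1, 0]
    print("NRZ-L     :", nrz_l(bits))
    print("RZ        :", rz(bits))
    print("Manchester:", manchester(bits))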

Introduction: 3G is the third generation of mobile phone standards and technology, superseding 2.5G. It is based on the International Telecommunication Union (ITU) family of standards under the International Mobile Telecommunications-2000 (IMT-2000).

ITU launched the IMT-2000 program which, together with the main industry and standardization bodies worldwide, aims to implement a global frequency band that would support a single, ubiquitous wireless communication standard for all countries, and to provide the framework for the definition of 3G mobile systems. Several radio access technologies have been accepted by ITU as part of the IMT-2000 framework.

3G networks enable network operators to offer users a wider range of more advanced services while achieving greater network capacity through improved spectral efficiency. Services include wide-area wireless voice telephony, video calls, and broadband wireless data, all in a mobile environment. Additional features include HSPA data transmission capabilities able to deliver speeds of up to 14.4 Mbit/s on the downlink and 5.8 Mbit/s on the uplink.

3G networks are wide-area cellular telephone networks which evolved to incorporate high-speed internet access and video telephony. IMT-2000 defines a set of technical requirements for the realization of such targets, which can be summarized as follows:

1.       high data rates: 144 kbps in all environments and 2 Mbps in low-mobility and indoor environments

2.       symmetrical and asymmetrical data transmission

3.       circuit-switched and packet-switched-based services

4.       speech quality comparable to wire-line quality

5.       improved spectral efficiency

6.       several simultaneous services to end users for multimedia services

7.       seamless incorporation of second-generation cellular systems

8.       global roaming

9.       open architecture for the rapid introduction of new services and technology.

3G Standards and Access Technologies:

As mentioned before, there are several different radio access technologies defined within ITU, based on either CDMA or TDMA technology. An organization called the 3rd Generation Partnership Project (3GPP) has continued that work by defining a mobile system that fulfils the IMT-2000 standard. This system is called the Universal Mobile Telecommunications System (UMTS). After trying to establish a single 3G standard, ITU finally approved a family of five 3G standards as part of the 3G framework known as IMT-2000; the three listed below are the most widely adopted:

  •  W-CDMA
  •  CDMA2000
  •  TD-SCDMA

Europe, Japan, and Asia have agreed upon a 3G standard called the Universal Mobile Telecommunications System (UMTS), which is WCDMA operating at 2.1 GHz. UMTS and WCDMA are often used as synonyms. In the USA and other parts of America, WCDMA will have to use another part of the radio spectrum. 

Introduction: CDMA senders and receivers are not really simple devices; communicating requires the receiver to be programmed so that it can decode many different codes. Aloha was a very simple scheme, but could only provide a relatively low bandwidth due to collisions. Spread Aloha multiple access (SAMA) uses spread spectrum with only one single code (chipping sequence) for spreading, and all senders access the medium in Aloha style.

In SAMA, each sender uses the same spreading code, for example 110101, as shown below. Senders A and B access the medium at the same time in their narrowband spectrum, so that the three bits shown cause collisions. The same data could also be sent with higher power for shorter periods, as shown.

The main problem in using this approach is finding good chipping sequences. The maximum throughput is about 18 per cent, which is very similar to Aloha, but the approach benefits from the advantages of spread spectrum techniques: robustness against narrowband interference and simple coexistence with other systems in the same frequency bands.
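
A minimal sketch of the single-code spreading idea, using the 6-chip sequence 110101 from the example above (the data bits and the bipolar mapping are illustrative assumptions):

    CODE = [1, 1, 0, 1, 0, 1]                  # shared chipping sequence
    BIPOLAR = {1: +1, 0: -1}

    def spread(bits):
        """XOR each data bit with every chip, then map to +/-1 for transmission."""
        return [BIPOLAR[b ^ c] for b in bits for c in CODE]

    a = spread([1, 0, 1])                      # sender A
    b = spread([0, 0, 1])                      # sender B
    collision = [x + y for x, y in zip(a, b)]  # superposition on the medium
    print(collision)                           # zeros mark chips where A and B cancel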

Comparison of SDMA/TDMA/FDMA/CDMA: (the comparison table is not reproduced here)
 Introduction: Assume that a device needs a data-record during an application. A request must be sent to the server for the data record (this mechanism is called pulling). The time taken for the application software to access a particular record is known as access latency. Caching and hoarding the record at the device reduces access latency to zero. Therefore, data cache maintenance is necessary in a mobile environment to overcome access latency.

Data cache inconsistency means that data records cached for applications are not invalidated at the device when they are modified at the server. Data cache consistency can be maintained by the three methods given below:

  1. Cache invalidation mechanism (server-initiated case): the server sends invalidation reports on invalidation of records (asynchronous) or at regular intervals (synchronous).
  2. Polling mechanism (client-initiated case): polling means checking with the server whether a data record is in the valid, invalid, modified, or exclusive state. Each cached record copy is polled whenever it is required by the application software during computation. If the record is found to be modified or invalidated, the device requests the modified data and replaces the earlier cached record copy.
  3. Time-to-live mechanism (client-initiated case): each cached record is assigned a TTL (time to live). The TTL assignment is adaptive (adjustable) to the previous update intervals of that record. At the end of the TTL, the cached record copy is polled. If it has been modified, the device requests the server to replace the invalid cached record with the modified data. When TTL is set to 0, the TTL mechanism becomes equivalent to the polling mechanism.
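
A minimal sketch of the TTL mechanism (the names, the fetch function, and the 5-second TTL are illustrative assumptions; note that ttl = 0 degenerates to pure polling):

    import time

    class TTLCache:
        def __init__(self, fetch, ttl=5.0):
            self.fetch, self.ttl, self.store = fetch, ttl, {}

        def get(self, key):
            entry = self.store.get(key)
            if entry and time.time() - entry[1] < self.ttl:
                return entry[0]                    # still valid: no server traffic
            value = self.fetch(key)                # TTL expired (or miss): poll server
            self.store[key] = (value, time.time())
            return value

    cache = TTLCache(fetch=lambda k: f"record-for-{k}", ttl=5.0)
    print(cache.get("train-42"))                   # first access polls the server
    print(cache.get("train-42"))                   # served from cache within the TTL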

Web Cache Maintenance in Mobile Environments:

The mobile devices or their servers can be connected to a web server (e.g., a traffic information server or a train information server). The web cache at the device stores the web server data and maintains it in a manner similar to the cache maintenance for server data described above. If an application running at the device needs a data record from the web which is not in the web cache, then there is access latency. Web cache maintenance is necessary in a mobile environment to overcome the access latency in downloading from websites caused by disconnections. Web cache consistency can be maintained by two methods. These are:

Time-to-live (TTL) mechanism (client-initiated case): The method is identical to the one discussed for data cache maintenance.

Power-aware computing mechanism (client-initiated case): Each web cache maintained at the device can also store CRC (cyclic redundancy check) bits. Assume that there are N cached bits and n CRC bits, with N much greater than n. The same n CRC bits are stored at the server. As long as the server and device records are consistent, the CRC bits at both ends are identical. Whenever any of the records cached at the server is modified, the corresponding CRC bits at the server are also modified. After the TTL expires, or on demand for the web cache records by the client API, the cached record's CRC is polled and obtained from the website server. If the n CRC bits at the server are found to be modified, and the change is much higher than a given threshold (i.e., a significant change), then the modified part of the website hypertext or database is retrieved by the client device for use by the API. If the change is minor, the API uses the previous cache. Since N » n, the power dissipated in the conventional web cache maintenance method (in which invalidation reports and all invalidated record bits are transmitted) is much greater than in the present method (in which the device polls only for a significant change in the CRC bits at the server, and the records are transmitted only when such a change has occurred).
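
A minimal sketch of the CRC comparison, with zlib.crc32 standing in for the n CRC bits and illustrative record payloads (in a real system only the server's CRC bits would cross the air link):

    import zlib

    def crc(records):
        return zlib.crc32("".join(records).encode())

    device_cache = ["route-A: on time", "route-B: delayed"]
    server_data  = ["route-A: on time", "route-B: cancelled"]   # modified at server

    if crc(device_cache) != crc(server_data):   # only the short CRC is compared
        device_cache = server_data              # significant change: fetch records
    print(device_cache)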

 

Introduction: Client-server computing is a distributed computing architecture, in which there are two types of nodes, i.e., the clients and the servers. A server is defined as a computing system, which responds to requests from one or more clients. A client is defined as a computing system, which requests the server for a resource or for executing a task.

The client can either access the data records at the server or it can cache these records at the client device. The data can be accessed either on client request or through broadcasts or distribution from the server. The client and the server can be on the same computing system or on different computing systems. Client-server computing can have N-tier architecture (N= 1, 2 ...). When the client and the server are on the same computing system then the number of tiers, N = 1. When the client and the server are on different computing systems on the network, then N = 2. A command interchange protocol (e.g., HTTP) is used for obtaining the client requests at the server or the server responses at the client.

The following subsections describe client-server computing in 2, 3, or N-tier architectures. Each tier connects to the other with a connecting, synchronizing, data, or command interchange protocol.

Two-tier Client-Server Architecture: The following figure shows the application server at the second tier. The data records are retrieved using business logic, and a synchronization server within the application server synchronizes the server records with the local copies at the mobile devices. Synchronization means that when copies of records at the server end are modified, the copies cached at the client devices are also modified accordingly. The APIs are designed to be as independent of hardware and software platforms as possible, since different devices may have different platforms.
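
A minimal sketch of such record synchronization using per-record version numbers (all names and the versioning scheme are illustrative assumptions, not a specific product's API):

    # The synchronization server tracks a version per record; a device
    # submits the versions of its local copies and receives only the
    # records that have changed since.
    SERVER = {"cust-1": (3, "Alice, premium"), "cust-2": (7, "Bob, basic")}

    def synchronize(device_versions):
        """Return the (version, data) pairs the device is missing."""
        return {k: v for k, v in SERVER.items()
                if device_versions.get(k, -1) < v[0]}

    device = {"cust-1": 3, "cust-2": 5}       # local copy of cust-2 is stale
    print(synchronize(device))                # -> {'cust-2': (7, 'Bob, basic')}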


Introduction: The five types of context that are important in context-aware computing are physical context, computing context, user context, temporal context, and structural context.

1. Physical Context: The context can be that of the physical environment. The parameters defining a physical context are service disconnection, light level, noise level, and signal strength. For example, if there is a service disconnection during a conversation, the mobile device can sense the change in physical conditions and interleave background noise so that the listener does not feel the effects of the disconnection. Also, the mobile device can sense light levels, so during the daytime the display brightness is increased, while at night or in poor light conditions the display brightness is reduced. As the physical context changes, the device display is adjusted accordingly.

2. Computing Context: The context in a context-aware computing environment may also be the computing context. Computing context is defined by the interrelationships and conditions of the network connectivity protocol in use (Bluetooth, ZigBee, GSM, GPRS, or CDMA), the bandwidth, and the available resources. Examples of resources are a keypad, a display unit, a printer, and a cradle. A cradle is the unit on which a mobile device lies in order to connect to a computer in the vicinity. Consider a mobile device lying on a cradle: it discovers this computing context and uses ActiveSync to synchronize with and download from the computer. When the mobile device lies in the vicinity of a computer with a Bluetooth interface, it discovers another computing-context resource and uses wireless Bluetooth to connect to the computer. When it functions independently and connects to a mobile network, it discovers yet another computing context and uses a GSM, CDMA, GPRS, or EDGE connection. The response of the system is thus as per the computing context, i.e., the network connectivity protocol.
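
A minimal sketch of this context-driven selection (the context names and the priority order are illustrative assumptions):

    def choose_connection(context):
        if "cradle" in context:
            return "ActiveSync over the cradle"
        if "bluetooth-host" in context:
            return "Bluetooth link to the nearby computer"
        return "GSM/GPRS/CDMA mobile network"

    print(choose_connection({"cradle"}))            # docked on a cradle
    print(choose_connection({"bluetooth-host"}))    # near a Bluetooth computer
    print(choose_connection(set()))                 # functioning independently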

3. User Context: The user context is defined by user location, user profiles, and persons near the user. Reza B'Far defines user-interface context states as follows: 'Within the realm of user interfaces, we can define context as the sum of the relationships between the user interface components, the condition of the user, the primary intent of the system, and all of the other elements that allow users and computing systems to communicate.'

4. Temporal Context: Temporal context defines the interrelation between time and the occurrence of an event or action. A group of interface components has an intrinsic or extrinsic temporal context. For example, assume that at one instant the user presses the dial switch on a mobile device. At the next instant, the device seeks a number as input. The user will then consider it in the context of dialling and input the number to be dialled. Now assume that at another time the user presses the switch to add a contact to the mobile device. The device again prompts the user to enter a number as input. The user will consider it in the context of a number to be added to the contacts and stored in the device for future use. The device then seeks the name of the contact as input. The response of the system in such cases is as per the temporal context. The context for VUI (voice user interface) elements also defines a temporal context (depending upon the instances and sequences in which these occur).

5. Structural Context: Structural context defines a sequence and structure formed by the elements or records. Graphic user interface (GUI) elements have structural context. Structural context may also be extrinsic for some other type of context. The interrelation among GUI elements depends on their structural positions on the display screen. When time is the context, the hour and minute elements together form the structure.

Introduction: A transaction is the execution of interrelated instructions in a sequence for a specific operation on a database. Database transaction models must maintain data integrity and must enforce a set of rules called ACID rules.

These rules are as follows:

1. Atomicity: All operations of a transaction must be completed. If a transaction cannot be completed, it must be undone (rolled back). The operations in a transaction are assumed to form one indivisible (atomic) unit.

2. Consistency: A transaction must be such that it preserves the integrity constraints and follows the declared consistency rules for a given database. Consistency means the data is not in a contradictory state after the transaction.

3. Isolation: If two transactions are carried out simultaneously, there should not be any interference between the two. Further, any intermediate results of a transaction should be invisible to any other transaction.

4. Durability: After a transaction is completed, it must persist and cannot be aborted or discarded. For example, in a transaction entailing the transfer of a balance from account A to account B, once the transfer is completed there should be no rollback.

Consider the base class library included in Microsoft .NET. It has a set of software components called ADO.NET (ActiveX Data Objects in .NET). These can be used to access data and data services, including accessing and modifying data stored in relational database systems. The ADO.NET transaction model permits three transaction commands:

1.   BeginTransaction: It is used to begin a transaction. Any operation after BeginTransaction is assumed to be a part of the transaction until the Commit command or the Rollback command. An example of a command is as follows:

       connectionA.Open();

       transA = connectionA.BeginTransaction();

       Here connectionA and transA are two distinct objects.

2.   Commit: It is used to commit the transaction operations that were carried out after the BeginTransaction command and up to this command. An example of this is transA.Commit(); All statements between BeginTransaction and Commit must execute atomically.

3.   Rollback: It is used to roll back the transaction in case an exception is generated after the BeginTransaction command is executed.

A DBMS may provide an auto-commit mode. Auto-commit mode means that each operation is treated as its own transaction and is committed automatically as soon as it completes, unless an error occurs in between.
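
The same begin/commit/rollback pattern can be sketched with Python's built-in sqlite3 module, as an analogy to the ADO.NET commands above (the account table and amounts are illustrative):

    import sqlite3

    conn = sqlite3.connect(":memory:", isolation_level=None)   # manual transaction control
    conn.execute("CREATE TABLE account (name TEXT PRIMARY KEY, balance INTEGER)")
    conn.execute("INSERT INTO account VALUES ('A', 100), ('B', 0)")

    try:
        conn.execute("BEGIN")                                   # BeginTransaction
        conn.execute("UPDATE account SET balance = balance - 40 WHERE name = 'A'")
        conn.execute("UPDATE account SET balance = balance + 40 WHERE name = 'B'")
        conn.execute("COMMIT")                                  # Commit: all or nothing
    except sqlite3.Error:
        conn.execute("ROLLBACK")                                # Rollback on failure

    print(conn.execute("SELECT * FROM account").fetchall())    # [('A', 60), ('B', 40)]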

 

Introduction: Data Dissemination: Ongoing advances in communications, including the proliferation of the internet, the development of mobile and wireless networks, and high bandwidth availability to homes, have led to the development of a wide range of new information-centred applications. Many of these applications involve data dissemination, i.e., the delivery of data from a set of producers to a larger set of consumers.

Data dissemination entails distributing and pushing data generated by a set of computing systems or broadcasting data from audio, video, and data services. The output data is sent to the mobile devices. A mobile device can select, tune and cache the required data items, which can be used for application programs.

Efficient utilization of wireless bandwidth and battery power are two of the most important problems facing software designed for mobile computing. Broadcast channels are attractive in tackling these two problems in wireless data dissemination. Data disseminated through broadcast channels can be simultaneously accessed by an arbitrary number of mobile users, thus increasing the efficiency of bandwidth usage.

One key aspect of dissemination-based applications is their inherent communications asymmetry. That is, the communication capacity or data volume in the downstream direction (from servers-to-clients) is much greater than that in the upstream direction (from clients-to-servers). Content delivery is an asymmetric process regardless of whether it is performed over a symmetric channel such as the internet or over an asymmetric one, such as cable television (CATV) network. Techniques and system architectures that can efficiently support asymmetric applications will therefore be a requirement for future use.

Mobile communication between a mobile device and a static computer system is intrinsically asymmetric. A device is allocated only a limited bandwidth, because a large number of devices access the network. The bandwidth in the downstream direction, from the server to the device, is much larger than that in the upstream direction, from the device to the server. This is because mobile devices have limited power resources, and faster data transmission rates sustained for long intervals entail greater power dissipation at the devices. In GSM networks, data transmission rates go up to a maximum of 14.4 kbps for both uplink and downlink. That communication is symmetric, and the symmetry can be maintained because GSM is used only for voice communication.

The above figure shows communication asymmetry in uplink and downlink in a mobile network. The participation of device APIs and distributed computing systems in the running of an application is also shown. 



Introduction: A hybrid data-delivery mechanism integrates pushes and pulls. The hybrid mechanism is also known as the interleaved push-and-pull (IPP) mechanism. The devices use the back channel to send pull requests for records which are not regularly pushed on the front channel. The front channel uses algorithms modelled as broadcast disks and interleaves the responses to the pull requests with the scheduled pushes.

The user device or computing system pulls, as well as receives pushes of, the data records from the service provider's application server, database server, or a set of distributed computing systems. A good example is a system for advertising and selling music albums: the advertisements are pushed, and the mobile devices pull in order to buy an album.

The above figure shows a hybrid interleaved, push-pull-based data-delivery mechanism in which a device pulls (demands) from a server and the server interleaves the responses along with the pushes of the data records generated by a set of distributed computing systems. Hybrid mechanisms function in the following manner:

1. There are two channels: a front channel for pushes and a back channel for pulls.

2. Bandwidth is shared and adapted between the two channels depending upon the number of active devices receiving data from the server and the number of devices requesting data pulls from the server.

3. An algorithm can adaptively chop the slowest level of the scheduled pushes successively. In a broadcasting model, the data records at the lower levels, where the records are assigned lower priorities, can have long push intervals.
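
A minimal sketch of the interleaving on the front channel (the record names and the one-push-one-pull schedule are illustrative assumptions):

    from collections import deque
    from itertools import cycle

    push_schedule = cycle(["hot-1", "hot-2", "hot-3"])   # broadcast-disk pushes
    pull_queue = deque(["cold-17", "cold-42"])           # back-channel requests

    def front_channel(slots):
        out = []
        for _ in range(slots):
            out.append(next(push_schedule))              # one scheduled push...
            if pull_queue:
                out.append(pull_queue.popleft())         # ...then one pull response
        return out

    print(front_channel(4))
    # ['hot-1', 'cold-17', 'hot-2', 'cold-42', 'hot-3', 'hot-1']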

Advantages of Hybrid mechanisms:

The number of server interruptions and queued requests is significantly reduced.

Disadvantages:

IPP does not eliminate the typical server problems of too many interruptions and queued requests.

Another disadvantage is the complexity introduced by adaptively chopping the slowest level of the scheduled pushes.

 


Introduction:

Temporal Addressing: Temporal addressing is a technique used for pushing in which, instead of repeating the index I several times, a temporal value is repeated before a data record is transmitted. When the temporal information contained in this value is used instead of an address, tuning and caching of the record of interest can be synchronized effectively even when the time intervals between successive bits are non-uniform. The device remains idle and then starts tuning, synchronizing as per the temporal information for the pushed record. The temporal information gives the time at which caching is scheduled. Assume that the temporal address is 25675 and each unit corresponds to a wait of 1 ms; the device then waits and starts caching the record after 25675 ms.
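
A minimal sketch of this doze-and-wake behaviour, using the 25675 ms value from the example above (the receive callback is an illustrative stand-in for tuning to the channel; the sketch really does sleep for that long):

    import time

    def tune_after(temporal_address_ms, receive):
        time.sleep(temporal_address_ms / 1000.0)   # doze: radio off, saving power
        return receive()                           # wake and cache the pushed record

    record = tune_after(25675, receive=lambda: "pushed-record-of-interest")
    print(record)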

Broadcast Addressing: Broadcast addressing uses a broadcast address similar to an IP or multicast address. Each device or group of devices can be assigned an address. The devices cache the records which carry this address as the broadcast address in a broadcast cycle. The address can be used along with the pushed record. A device uses the broadcast address in place of the index I to select the data records or sets. Only the addressed device(s) cache the pushed record; other devices do not select and tune to the record. In place of repeating I several times, the broadcast address can be repeated before a data record is transmitted. The advantage of this type of addressing is that the server can address a specific device or a specific group of devices.

Use of Headers: A server can broadcast data in multiple versions or ways. An index or address only specifies where the data is located for the purpose of tuning; it does not specify the details of the data in the buckets. An alternative is to place a header, or a header with an extension, with a data object before broadcasting. The header is used along with the pushed record. The device uses the header in place of the index I, and if the device finds from the header that the record is of interest, it selects the object and caches it. The header can be useful, for example, in giving information about the type, version, and content modification data, or the application for which the record is targeted.

Introduction: (1, m) Index: The (1, m) indexing scheme is an index allocation method in which the complete index is broadcast m times during a broadcast cycle. All buckets have an offset to the beginning of the next index segment. The first bucket of each index segment contains a tuple with two fields: the first field contains the key value of the object that was broadcast last, and the second field is an offset pointing to the beginning of the next broadcast. This tuple guides clients who missed the required object in the current broadcast, so that they can tune in to the next broadcast.

The client’s access protocol for retrieving objects with key value k is as follows:

1. Tune into the current bucket on the broadcast channel. Get the offset to the next index segment.

2. Go to the doze mode and tune in at the broadcast of the next index segment.

3. Examine the tuple in the first bucket of the index segment. If the target object has been missed, obtain the offset to the beginning of the next bcast and go to step 2; otherwise go to step 4.

4. Traverse the index and determine the offset to the target data bucket. This may be accomplished by successive probes, following the pointers in the multi-level index. The client may doze off between two probes.

5. Tune in when the desired bucket is broadcast and download it (and subsequent buckets as long as their key value is k).

Advantage:

1. This scheme has good tuning time.

Disadvantage:

1. The index is entirely replicated m times; this increases the length of the broadcast cycle and hence the average access time.

The optimal value of m, which gives the minimal average access time, is m = (data file size / index size)^(1/2). There is actually no need to replicate the complete index between successive data blocks; it is sufficient to make available only the portion of the index related to the data buckets which follow it. This is the approach adopted in all the subsequent indexing schemes.
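
A small numerical sketch of this optimum (the bucket counts are illustrative):

    # Optimal replication factor for (1, m) indexing and the resulting
    # broadcast length: m* = sqrt(data_size / index_size), with the cycle
    # growing by m * index_size index buckets.
    import math

    data_size, index_size = 10_000, 100          # in buckets
    m_opt = math.sqrt(data_size / index_size)    # -> 10.0
    bcast_len = data_size + round(m_opt) * index_size
    print(f"optimal m = {m_opt:.1f}, broadcast length = {bcast_len} buckets")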

Tree-based Index/Distributed indexing scheme

In this scheme a data file is associated with a B-tree index structure. Since the broadcast medium is a sequential medium, the data file and index must be flattened so that the data and index are broadcast following a preorder traversal of the tree. The index comprises two portions: the first k levels of the index are partially replicated in the broadcast, and the remaining levels are not replicated. The index nodes at the (k+1)th level are called the non-replicated roots.

Essentially, each index subtree whose root is a non-replicated root appears once in the whole bcast, just in front of the set of data segments it indexes. On the other hand, the nodes at the replicated levels are replicated at the beginning of the first broadcast of each of their children nodes.

To facilitate selective tuning, each node contains meta-data that help in the traversal of the tree. All non-replicated buckets contain pointers that direct the search to the next copy of their replicated ancestors. All replicated index buckets, on the other hand, contain two tuples that can direct the search to continue in the appropriate segments. The first tuple is a pair (x, ptr_begin) indicating that key values less than x have been missed, so the search must continue from the beginning of the next bcast (which is ptr_begin buckets away). The second pair (y, ptr) indicates that key values greater than or equal to y can be found ptr buckets away. Clearly, if the desired object has a key value between x and y, the search can continue as in a conventional search operation.


Mobile computing consists of two words: mobile, referring to the ability of a device to be used while moving (portability), and computing, referring to information processing that includes calculation, storage, and the processing of data or information with the help of computer devices. Mobile computing thus refers to the use of computing devices that can be used freely by their users even while moving. Mobile computing has an important role in everyday life: because it is wireless and portable, it makes activities easier, allowing users to access the information services, communications, and applications they need anywhere and anytime. Here are some of the roles of mobile computing in several aspects:


1. Access to Information and Communication. 


Mobile computing allows users to access information wherever and whenever they need it. It contributed positively during the Covid-19 pandemic, allowing workers to work from home and students to learn from home, given that crowds had to be avoided to prevent the spread of the virus. Mobile computing also allows users to communicate flexibly, for example by using WhatsApp.


2. Financial and banking transactions


Financial transactions are now much more advanced and convenient for users because they can be carried out at any time without having to visit a bank or an ATM.


3. Navigation and Transportation


There are mobile-computing-based applications such as Google Maps and Maps.Me that help users find their destination easily and quickly. With such an application, users can get information in real time; for example, if a user wants to go to restaurant X but a road leading there is congested, the maps application can show faster alternative routes to the destination.


4. Health and fitness


One example of a mobile-computing-based tool with a positive role in health and fitness is a smartwatch with heart rate and blood flow detection. One of its applications is in athlete training: if the athlete is too tired, the heart rate rises and a reminder alarm sounds. This helps avoid harm to the athlete's condition.


 5. E-commerce


Users are now greatly helped by the existence of mobile-computing-based online store applications. Users no longer have to visit a store directly; they simply use the application on a computing device, and the goods they need are delivered to them.


1. The Early History in the 1970s

The 1970s marked the early history of mobile computing, starting with the emergence of large and heavy portable computers used for military purposes. Devices such as the Osborne 1 (1981) and the Compaq Portable became the first portable computers that were very popular among consumers.


2. The Development of Mobile Computing in the 1980s


There were significant developments in this decade. Features such as handwriting recognition and touch screens appeared.


3. New innovations in the 1990s


Several new innovations emerged in the 1990s, such as the IBM Simon in 1992, the first smartphone to combine a telephone with computing capabilities. Wireless communication technology continued to develop in the 1990s, and wireless communication standards such as Bluetooth and WiFi were introduced.


4. The Smartphone Era


The smartphone era began in the 2000s. A famous mobile phone of 2002 was the BlackBerry, known for its ability to send messages in real time. Then, in 2007, Apple introduced the iPhone, which brought a big change to the smartphone world with an innovative interface and a wide application ecosystem.


5. Development of applications and services 


Many applications and services based on mobile computing have emerged, for example e-commerce applications that can be downloaded from application stores such as the Google Play Store and the Apple App Store. Mobile computing continues to evolve with better connectivity, namely the development of wireless networks, especially the continually improving 3G, 4G, and 5G technologies.


6. Internet of Things 


Mobile computing has also been integrated with the Internet of Things (IoT). Smart devices such as smartwatches, smart home devices, and connected vehicles are increasingly common, bringing connectivity and computing capabilities into various aspects of everyday life.


Mobile computing has become an inseparable part of everyday life. It is hardware that can be used flexibly even while moving, allowing users to use it whenever and wherever they need it. In its historical development, mobile computing has evolved remarkably, from its initial emergence in the 1970s to the increasingly advanced state we experience today.


Physical components of mobile computing:

1. Mobile communication

2. Mobile hardware 

3. Mobile software 


Applications of mobile computing:

1. For estate agents

2. In courts 

3. In companies

4. Stock information  collection and control

5. Credit card verification 

6. Taxi and truck  dispatch 

7. Electronic mail / paging  

8. Mobile banking 

9. Cryptocurrencies

10. Cryptography, such as Google 2-step verification and QRIS barcodes


 

Future of mobile computing:

1. Use of artificial intelligence

2. Integrated circuitry ---> compact size

3. Increases in computer processor speeds 


Current trends:


1. Artificial intelligence (AI)

2. Internet of Things (IoT)

3. 5G network connectivity

4. Mobile payment and mobile commerce 


====================================