Saturday, 30 July 2016

ABOUT LIGHT-EMITTING DIODES (LEDs)

A light-emitting diode (LED) is a semiconductor device that emits visible light when an electric current passes through it. The light is not particularly bright, but in most LEDs it is monochromatic, occurring at a single wavelength. The output from an LED can range from red (at a wavelength of approximately 700 nanometers) to blue-violet (about 400 nanometers). Some LEDs emit infrared (IR) energy (830 nanometers or longer); such a device is known as an infrared-emitting diode (IRED).
An LED or IRED consists of two elements of processed material called P-type semiconductors and N-type semiconductors. These two elements are placed in direct contact, forming a region called the P-N junction. In this respect, the LED or IRED resembles most other diode types, but there are important differences. The LED or IRED has a transparent package, allowing visible or IR energy to pass through. Also, the LED or IRED has a large PN-junction area whose shape is tailored to the application.
Benefits of LEDs and IREDs, compared with incandescent and fluorescent illuminating devices, include:
  • Low power requirement: Most types can be operated with battery power supplies.
  • High efficiency: Most of the power supplied to an LED or IRED is converted into radiation in the desired form, with minimal heat production.
  • Long life: When properly installed, an LED or IRED can function for decades.
Typical applications include:
  • Indicator lights: These can be two-state (i.e., on/off), bar-graph, or alphanumeric readouts.
  • LCD panel backlighting: Specialized white LEDs are used in flat-panel computer displays.
  • Fiber optic data transmission: Ease of modulation allows wide communications bandwidth with minimal noise, resulting in high speed and accuracy.
  • Remote control: Most home-entertainment "remotes" use IREDs to transmit data to the main unit.
  • Optoisolator: Stages in an electronic system can be connected together without unwanted interaction.

Saturday, 16 July 2016

Stacks and Queues

An array is a random access data structure, where each element can be accessed directly and in constant time. A typical illustration of random access is a book - each page of the book can be opened independently of the others. Random access is critical to many algorithms, for example binary search.
A linked list is a sequential access data structure, where each element can be accessed only in a particular order. A typical illustration of sequential access is a roll of paper or tape - all prior material must be unrolled in order to get to the data you want.
In this note we consider a subcase of sequential data structures, so-called limited access data structures.

Stacks

A stack is a container of objects that are inserted and removed according to the last-in first-out (LIFO) principle. In a pushdown stack only two operations are allowed: push the item onto the stack, and pop the item off the stack. A stack is a limited access data structure - elements can be added and removed from the stack only at the top. push adds an item to the top of the stack, pop removes the item from the top. A helpful analogy is to think of a stack of books; you can remove only the top book, and you can add a new book only on the top. A stack is a recursive data structure. Here is a structural definition of a Stack: a stack is either empty or it consists of a top and the rest, which is a stack.

Applications

  • The simplest application of a stack is to reverse a word. You push a given word onto the stack - letter by letter - and then pop the letters from the stack (see the sketch after this list).
  • Another application is an "undo" mechanism in text editors; this operation is accomplished by keeping all text changes in a stack.
  • Backtracking. This is a process when you need to access the most recent data element in a series of elements. Think of a labyrinth or maze - how do you find a way from an entrance to an exit?
    Once you reach a dead end, you must backtrack. But backtrack to where? To the previous choice point. Therefore, at each choice point you store on a stack all possible choices. Then backtracking simply means popping the next choice from the stack.
  • Language processing:
    • Space for parameters and local variables is created internally using a stack.
    • A compiler's syntax check for matching braces is implemented by using a stack.
    • Support for recursion.
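
A minimal sketch of the word-reversal application mentioned above, using java.util.ArrayDeque as the stack (the class and method names here are illustrative, not something prescribed by these notes):

import java.util.ArrayDeque;
import java.util.Deque;

public class ReverseWord
{
   public static String reverse(String word)
   {
      Deque<Character> stack = new ArrayDeque<>();
      // push the letters onto the stack, left to right
      for (char c : word.toCharArray())
         stack.push(c);

      // pop them off again: they come out in reverse order (LIFO)
      StringBuilder sb = new StringBuilder();
      while (!stack.isEmpty())
         sb.append(stack.pop());
      return sb.toString();
   }

   public static void main(String[] args)
   {
      System.out.println(reverse("stack"));   // prints "kcats"
   }
}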

Implementation

In the standard library of classes, the data type stack is an adapter class, meaning that a stack is built on top of other data structures. The underlying structure for a stack could be an array, a vector, an ArrayList, a linked list, or any other collection. Regardless of the type of the underlying data structure, a Stack must implement the same functionality. This is achieved by providing a unique interface:

public interface StackInterface<AnyType>
{
   public void push(AnyType e);

   public AnyType pop();

   public AnyType peek();

   public boolean isEmpty();
}

The following picture demonstrates the idea of implementation by composition.
Another implementation requirement (in addition to the above interface) is that all stack operations must run in constant time O(1). Constant time means that there is some constant k such that an operation takes k nanoseconds of computational time regardless of the stack size.

Array-based implementation
In an array-based implementation we maintain the following fields: an array A of a default size (≥ 1), a variable top that refers to the top element in the stack, and the capacity that refers to the array size. The variable top ranges from -1 to capacity - 1. We say that a stack is empty when top = -1, and the stack is full when top = capacity - 1. In a fixed-size stack abstraction the capacity stays unchanged, so when the stack is full (top = capacity - 1) a further push makes the stack object throw an exception.
In a dynamic stack abstraction, when the stack becomes full we double the array size.
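
As a rough sketch of the array-based approach (assuming the StackInterface shown above; the class name ArrayStack and the doubling strategy are illustrative choices, not the only possible ones):

public class ArrayStack<AnyType> implements StackInterface<AnyType>
{
   private AnyType[] A;            // the underlying array
   private int top = -1;           // index of the top element; -1 means the stack is empty

   @SuppressWarnings("unchecked")
   public ArrayStack(int capacity)
   {
      A = (AnyType[]) new Object[capacity];
   }

   public boolean isEmpty() { return top == -1; }

   public void push(AnyType e)
   {
      if (top == A.length - 1)                         // stack is full: double the capacity
         A = java.util.Arrays.copyOf(A, 2 * A.length);
      A[++top] = e;
   }

   public AnyType pop()
   {
      if (isEmpty()) throw new java.util.EmptyStackException();
      AnyType e = A[top];
      A[top--] = null;                                 // let the garbage collector reclaim the slot
      return e;
   }

   public AnyType peek()
   {
      if (isEmpty()) throw new java.util.EmptyStackException();
      return A[top];
   }
}

With the doubling strategy, push still runs in amortized O(1) time even though an occasional push triggers a copy of the whole array.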


Linked List-based implementation

A linked list-based implementation provides the best dynamic stack implementation from the efficiency point of view: push and pop work at the head of the list, so they take constant time and the stack never has to be resized.
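
A sketch of such a linked list-based stack against the same StackInterface (again, the Node and ListStack names are just illustrative):

public class ListStack<AnyType> implements StackInterface<AnyType>
{
   private Node<AnyType> head;                         // the top of the stack; null means empty

   private static class Node<T>
   {
      T data;
      Node<T> next;
      Node(T data, Node<T> next) { this.data = data; this.next = next; }
   }

   public boolean isEmpty() { return head == null; }

   public void push(AnyType e)
   {
      head = new Node<>(e, head);                      // the new node becomes the top
   }

   public AnyType pop()
   {
      if (isEmpty()) throw new java.util.EmptyStackException();
      AnyType e = head.data;
      head = head.next;                                // unlink the old top
      return e;
   }

   public AnyType peek()
   {
      if (isEmpty()) throw new java.util.EmptyStackException();
      return head.data;
   }
}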

Queues

A queue is a container of objects (a linear collection) that are inserted and removed according to the first-in first-out (FIFO) principle. An excellent example of a queue is a line of students in the food court of the UC. New additions to the line are made at the back of the queue, while removal (or serving) happens at the front. In a queue only two operations are allowed: enqueue and dequeue. Enqueue means to insert an item at the back of the queue; dequeue means removing the front item. The picture demonstrates the FIFO access. The difference between stacks and queues is in removing: in a stack we remove the item most recently added; in a queue, we remove the item least recently added.

Implementation

In the standard library of classes, the data type queue is an adapter class, meaning that a queue is built on top of other data structures. The underlying structure for a queue could be an array, a Vector, an ArrayList, a LinkedList, or any other collection. Regardless of the type of the underlying data structure, a queue must implement the same functionality. This is achieved by providing a unique interface.
interface QueueInterface<AnyType>
{
   public boolean isEmpty();

   public AnyType getFront();

   public AnyType dequeue();

   public void enqueue(AnyType e);

   public void clear();
}
Each of the above basic operations must run at constant time O(1). The following picture demonstrates the idea of implementation by composition.
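
A sketch of a linked list-based queue against the above interface (illustrative names again); keeping a reference to both the front and the back of the list is what makes enqueue and dequeue O(1):

public class ListQueue<AnyType> implements QueueInterface<AnyType>
{
   private Node<AnyType> front, back;                  // both null when the queue is empty

   private static class Node<T>
   {
      T data;
      Node<T> next;
      Node(T data) { this.data = data; }
   }

   public boolean isEmpty() { return front == null; }

   public void clear() { front = back = null; }

   public void enqueue(AnyType e)
   {
      Node<AnyType> node = new Node<>(e);
      if (isEmpty()) front = node;                     // the first element is both front and back
      else back.next = node;
      back = node;
   }

   public AnyType dequeue()
   {
      if (isEmpty()) throw new java.util.NoSuchElementException();
      AnyType e = front.data;
      front = front.next;
      if (front == null) back = null;                  // the queue just became empty
      return e;
   }

   public AnyType getFront()
   {
      if (isEmpty()) throw new java.util.NoSuchElementException();
      return front.data;
   }
}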

Applications

The simplest two search techniques are known as Depth-First Search (DFS) and Breadth-First Search (BFS). These two searches are described by looking at how the search tree (representing all the possible paths from the start) will be traversed.

Depth-First Search with a Stack

In depth-first search we go down a path until we get to a dead end; then we backtrack or back up (by popping a stack) to get an alternative path.
  • Create a stack
  • Create a new choice point
  • Push the choice point onto the stack
  • while (not found and stack is not empty)
    • Pop the stack
    • Find all possible choices after the last one tried
    • Push these choices onto the stack
  • Return
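
A sketch of this search on a small graph given as adjacency lists (the Map-of-lists representation is an assumption made for the example, not part of the outline above); the stack holds the choice points still to be explored:

import java.util.*;

public class DepthFirstSearch
{
   // returns true if goal is reachable from start
   public static boolean dfs(Map<Integer, List<Integer>> graph, int start, int goal)
   {
      Deque<Integer> stack = new ArrayDeque<>();
      Set<Integer> visited = new HashSet<>();
      stack.push(start);                               // push the initial choice point

      while (!stack.isEmpty())
      {
         int current = stack.pop();                    // pop the most recent choice point
         if (current == goal) return true;
         if (!visited.add(current)) continue;          // skip nodes already expanded
         for (int next : graph.getOrDefault(current, List.of()))
            if (!visited.contains(next))
               stack.push(next);                       // push all further choices
      }
      return false;                                    // stack empty: no path exists
   }
}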

Breadth-First Search with a Queue

In breadth-first search we explore all the nearest possibilities by finding all possible successors and enqueueing them into a queue.
  • Create a queue
  • Create a new choice point
  • Enqueue the choice point onto the queue
  • while (not found and queue is not empty)
    • Dequeue the queue
    • Find all possible choices after the last one tried
    • Enqueue these choices onto the queue
  • Return
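
Under the same assumptions as the depth-first sketch (an adjacency-list Map), replacing the stack with a queue is the only change needed to get breadth-first search:

import java.util.*;

public class BreadthFirstSearch
{
   // returns true if goal is reachable from start
   public static boolean bfs(Map<Integer, List<Integer>> graph, int start, int goal)
   {
      Queue<Integer> queue = new ArrayDeque<>();
      Set<Integer> visited = new HashSet<>();
      queue.add(start);                                // enqueue the initial choice point
      visited.add(start);

      while (!queue.isEmpty())
      {
         int current = queue.remove();                 // dequeue the oldest choice point
         if (current == goal) return true;
         for (int next : graph.getOrDefault(current, List.of()))
            if (visited.add(next))                     // enqueue each unseen successor once
               queue.add(next);
      }
      return false;
   }
}
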
We will see more on search techniques later in the course.

Arithmetic Expression Evaluation

An important application of stacks is in parsing. For example, a compiler must parse arithmetic expressions written using infix notation:
1 + ((2 + 3) * 4 + 5)*6
We break the problem of parsing infix expressions into two stages. First, we convert from infix to a different representation called postfix. Then we parse the postfix expression, which is a somewhat easier problem than directly parsing infix.
Converting from Infix to Postfix. Typically, we deal with expressions in infix notation
2 + 5
where the operators (e.g. +, *) are written between the operands (e.g., 2 and 5). Here 2 and 5 are called operands, and '+' is the operator; the expression is called infix since the operator is in between the operands. Writing the operators after the operands gives a postfix expression
2 5 +
Writing the operators before the operands gives a prefix expression
+ 2 5
Suppose you want to compute the cost of your shopping trip. To do so, you add a list of numbers and multiply them by the local sales tax (7.25%):
70 + 150 * 1.0725
Depending on the calculator, the answer would be either 235.95 or 230.875. To avoid this confusion we shall use a postfix notation
70  150 + 1.0725 *
Postfix has the nice property that parentheses are unnecessary.
Now, we describe how to convert from infix to postfix.
  1. Read in the tokens one at a time.
  2. If a token is an integer, write it to the output.
  3. If a token is an operator and the stack is empty, push it onto the stack. If the stack is not empty, pop entries with higher or equal priority to the output and only then push that token onto the stack.
  4. If a token is a left parenthesis '(', push it onto the stack.
  5. If a token is a right parenthesis ')', pop entries to the output until you meet '('; the parentheses themselves are discarded, not written to the output.
  6. When you finish reading the string, pop all tokens that are left on the stack.
  7. Arithmetic precedence is in increasing order: '+', '-', '*', '/'.
Example. Suppose we have an infix expression: 2+(4+3*2+1)/3. We read the string by characters.
'2' - send to the output.
'+' - push on the stack.
'(' - push on the stack.
'4' - send to the output.
'+' - push on the stack.
'3' - send to the output.
'*' - push on the stack.
'2' - send to the output.
'+' - pop '*' and '+' (both have priority greater than or equal to '+') and send them to the output, then push '+' on the stack.
'1' - send to the output.
')' - pop '+' to the output, then pop '(' and discard it.
'/' - the '+' on the stack has lower priority, so just push '/' on the stack.
'3' - send to the output.
End of the string - pop the remaining '/' and '+' to the output.
The resulting postfix expression is 2 4 3 2 * + 1 + 3 / +.
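
A sketch of these conversion rules for single-digit operands and the four operators, using java.util.ArrayDeque as the stack (an illustrative choice); parentheses never reach the output, exactly as in the traced example:

import java.util.ArrayDeque;
import java.util.Deque;

public class InfixToPostfix
{
   private static int priority(char op)
   {
      return (op == '+' || op == '-') ? 1 : 2;         // '*' and '/' bind tighter
   }

   public static String convert(String infix)
   {
      StringBuilder output = new StringBuilder();
      Deque<Character> stack = new ArrayDeque<>();

      for (char token : infix.toCharArray())
      {
         if (Character.isDigit(token))
            output.append(token);                      // operands go straight to the output
         else if (token == '(')
            stack.push(token);
         else if (token == ')')
         {
            while (stack.peek() != '(')
               output.append(stack.pop());             // pop until the matching '('
            stack.pop();                               // discard the '(' itself
         }
         else                                          // an operator
         {
            while (!stack.isEmpty() && stack.peek() != '('
                   && priority(stack.peek()) >= priority(token))
               output.append(stack.pop());             // pop higher-or-equal priority operators
            stack.push(token);
         }
      }
      while (!stack.isEmpty())
         output.append(stack.pop());                   // flush whatever is left on the stack
      return output.toString();
   }

   public static void main(String[] args)
   {
      System.out.println(convert("2+(4+3*2+1)/3"));    // prints 2432*+1+3/+
   }
}
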
Evaluating a Postfix Expression. We describe how to parse and evaluate a postfix expression.
  1. We read the tokens in one at a time.
  2. If it is an integer, push it on the stack
  3. If it is a binary operator, pop the top two elements from the stack, apply the operator, and push the result back on the stack.
Consider the following postfix expression
5 9 3 + 4 2 * * 7 + *
Here is a chain of operations
Stack Operations              Stack contents
--------------------------------------
push(5);                        5
push(9);                        5 9
push(3);                        5 9 3
push(pop() + pop())             5 12
push(4);                        5 12 4
push(2);                        5 12 4 2
push(pop() * pop())             5 12 8
push(pop() * pop())             5 96
push(7)                         5 96 7
push(pop() + pop())             5 103
push(pop() * pop())             515
Note that division is not a commutative operation, so 2/3 is not the same as 3/2; when evaluating, the first value popped off the stack is the right operand.
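
A sketch of the evaluation loop for single-digit operands separated by spaces; note how the right operand is popped first, which is what makes non-commutative operators such as '-' and '/' come out in the correct order:

import java.util.ArrayDeque;
import java.util.Deque;

public class PostfixEvaluator
{
   public static int evaluate(String postfix)
   {
      Deque<Integer> stack = new ArrayDeque<>();
      for (char token : postfix.toCharArray())
      {
         if (token == ' ')
            continue;                                  // skip separators
         if (Character.isDigit(token))
            stack.push(token - '0');                   // operands go on the stack
         else
         {
            int right = stack.pop();                   // popped first: the right operand
            int left  = stack.pop();
            switch (token)
            {
               case '+': stack.push(left + right); break;
               case '-': stack.push(left - right); break;
               case '*': stack.push(left * right); break;
               case '/': stack.push(left / right); break;
            }
         }
      }
      return stack.pop();                              // the final result
   }

   public static void main(String[] args)
   {
      System.out.println(evaluate("5 9 3 + 4 2 * * 7 + *"));   // prints 515, matching the trace above
   }
}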

Monday, 11 July 2016

GRAPHICS CARDS


A Graphics Card is a piece of computer hardware that produces the image you see on a monitor.
The Graphics Card is responsible for rendering an image to your monitor; it does this by converting data into a signal your monitor can understand.
The better your graphics card, the better and smoother the image it can produce. This is naturally very important for gamers and video editors.
The images you see on your monitor are made of tiny dots called pixels. At most common resolution settings, a screen displays over a million pixels, and the computer has to decide what to do with every one in order to create an image. To do this, it needs a translator -- something to take binary data from the CPU and turn it into a picture you can see. Unless a computer has graphics capability built into the motherboard, that translation takes place on the graphics card.
A graphics card's job is complex, but its principles and components are easy to understand. In this article, we will look at the basic parts of a video card and what they do. We'll also examine the factors that work together to make a fast, efficient graphics card.

Saturday, 9 July 2016

SOUND CARDS (history, functions, uses, types, etc.)


A sound card (also known as an audio card) is an internal computer expansion card that facilitates economical input and output of audio signals to and from a computer under control of computer programs. The term sound card is also applied to external audio interfaces that use software to generate sound, as opposed to using hardware inside the PC. Typical uses of sound cards include providing the audio component for multimedia applications such as music composition, editing video or audio, presentation, education and entertainment (games) and video projection.

Sound functionality can also be integrated onto the motherboard, using basically the same components as a plug-in card. The best plug-in cards, which use better and more expensive components, can achieve higher quality than integrated sound. The integrated sound system is often still referred to as a "sound card".

Color-coded connectors (3.5 mm minijack unless noted):
  • Pink - Analog microphone audio input. Symbol: a microphone.
  • Light blue - Analog line level audio input. Symbol: an arrow going into a circle.
  • Lime green - Analog line level audio output for the main stereo signal (front speakers or headphones). Symbol: an arrow going out one side of a circle into a wave.
  • Orange - Analog line level audio output for the center channel speaker and subwoofer.
  • Black - Analog line level audio output for surround speakers, typically rear stereo.
  • Silver/Grey - Analog line level audio output for optional surround side channels.
  • Brown/Dark - Analog line level audio output for a special panning, 'right-to-left speaker'.
  • Gold/Grey - Game port / MIDI (15 pin D connector). Symbol: an arrow going out both sides into waves.




The main function of a sound card is to play audio, usually music, with varying formats (monophonic, stereophonic, various multiple speaker setups) and degrees of control. The source may be a CD or DVD, a file, streamed audio, or any external source connected to a sound card input.

Audio may be recorded. Sometimes sound card hardware and drivers do not support recording a source that is being played.

A card can also be used, in conjunction with software, to generate arbitrary wave forms, acting as an audio-frequency function generator. Free and commercial software is available for this purpose. There are also online services that generate audio files for any desired wave forms, playable through a sound card.
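
As a small illustration of the idea, here is a hedged sketch in Java using the standard javax.sound.sampled API; the 440 Hz frequency, one-second duration and 44.1 kHz sample rate are arbitrary example values:

import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.SourceDataLine;

public class ToneGenerator
{
   public static void main(String[] args) throws Exception
   {
      float sampleRate = 44100f;
      double frequency = 440.0;                                   // 440 Hz test tone
      AudioFormat format = new AudioFormat(sampleRate, 16, 1, true, false); // 16-bit mono PCM, little-endian

      try (SourceDataLine line = AudioSystem.getSourceDataLine(format))
      {
         line.open(format);
         line.start();
         byte[] buffer = new byte[2 * (int) sampleRate];          // one second of samples
         for (int i = 0; i < (int) sampleRate; i++)
         {
            short sample = (short) (Math.sin(2 * Math.PI * frequency * i / sampleRate) * 32000);
            buffer[2 * i]     = (byte) (sample & 0xff);           // low byte first (little-endian)
            buffer[2 * i + 1] = (byte) ((sample >> 8) & 0xff);
         }
         line.write(buffer, 0, buffer.length);                    // hand the waveform to the sound card
         line.drain();                                            // wait until playback finishes
      }
   }
}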

A card can be used, again in conjunction with free or commercial software, to analyse input waveforms. For example, a very-low-distortion sinewave oscillator can be used as input to equipment under test; the output is sent to a sound card's line input and run through Fourier transform software to find the amplitude of each harmonic of the added distortion. Alternatively, a less pure signal source may be used, with circuitry to subtract the input from the output, attenuated and phase-corrected; the result is distortion and noise only, which can be analysed.

There are programs which allow a sound card to be used as an audio-frequency oscilloscope.

For all measurement purposes a sound card must be chosen with good audio properties. It must itself contribute as little distortion and noise as possible, and attention must be paid to bandwidth and sampling. A typical integrated sound card, the Realtek ALC887, according to its data sheet has distortion of about 80dB below the fundamental; cards are available with distortion better than -100dB.


Driver architecture


To use a sound card, the operating system (OS) typically requires a specific device driver, a low-level program that handles the data connections between the physical hardware and the operating system. Some operating systems include the drivers for many cards; for cards not so supported, drivers are supplied with the card, or available for download.
•DOS programs for the IBM PC often had to use universal middleware driver libraries (such as the HMI Sound Operating System, the Miles Audio Interface Libraries (AIL), the Miles Sound System etc.) which had drivers for most common sound cards, since DOS itself had no real concept of a sound card. Some card manufacturers provided (sometimes inefficient) middleware TSR-based drivers for their products. Often the driver is a Sound Blaster and AdLib emulator designed to allow their products to emulate a Sound Blaster and AdLib, and to allow games that could only use SoundBlaster or AdLib sound to work with the card. Finally, some programs simply had driver/middleware source code incorporated into the program itself for the sound cards that were supported.
•Microsoft Windows uses drivers generally written by the sound card manufacturers. Many device manufacturers supply the drivers on their own discs or to Microsoft for inclusion on the Windows installation disc. Sometimes drivers are also supplied by the individual vendors for download and installation. Bug fixes and other improvements are likely to be available faster via downloading, since CDs cannot be updated as frequently as a web or FTP site. USB audio device class support is present from Windows 98 SE onwards. Since Microsoft's Universal Audio Architecture (UAA) initiative which supports the HD Audio, FireWire and USB audio device class standards, a universal class driver by Microsoft can be used. The driver is included with Windows Vista. For Windows XP, Windows 2000 or Windows Server 2003, the driver can be obtained by contacting Microsoft support.[15] Almost all manufacturer-supplied drivers for such devices also include this class driver.
•A number of versions of UNIX make use of the portable Open Sound System (OSS). Drivers are seldom produced by the card manufacturer.
•Most present day Linux distributions make use of the Advanced Linux Sound Architecture (ALSA). Up until Linux kernel 2.4, OSS was the standard sound architecture for Linux, although ALSA can be downloaded, compiled and installed separately for kernels 2.2 or higher. But from kernel 2.5 onwards, ALSA was integrated into the kernel and the OSS native drivers were deprecated. Backwards compatibility with OSS-based software is maintained, however, by the use of the ALSA-OSS compatibility API and the OSS-emulation kernel modules.
•Mockingboard support on the Apple II is usually incorporated into the programs themselves, as many programs for the Apple II boot directly from disk. However, a TSR is shipped on a disk that adds instructions to Apple Basic so users can create programs that use the card, provided that the TSR is loaded first.

Thursday, 7 July 2016

MONITORS

A monitor or a display is an electronic visual display for computers. The monitor comprises the display device, circuitry and an enclosure. The display device in modern monitors is typically a thin film transistor liquid crystal display (TFT-LCD) thin panel, while older monitors used a cathode ray tube (CRT) about as deep as the screen size.

Originally, computer monitors were used for data processing while television receivers were used for entertainment. From the 1980s onwards, computers (and their monitors) have been used for both data processing and entertainment, while televisions have implemented some computer functionality. The common aspect ratio of televisions, and then computer monitors, has also changed from 4:3 to 16:9.

History

Early electronic computers were fitted with a panel of light bulbs where the state of each particular bulb would indicate the on/off state of a particular register bit inside the computer. This allowed the engineers operating the computer to monitor the internal state of the machine, so this panel of lights came to be known as the 'monitor'. As early monitors were only capable of displaying a very limited amount of information, and were very transient, they were rarely considered for programme output. Instead, a line printer was the primary output device, while the monitor was limited to keeping track of the programme's operation.

As technology developed it was realized that the output of a CRT display was more flexible than a panel of light bulbs and eventually, by giving control of what was displayed to the programme itself, the monitor itself became a powerful output device in its own right.

HISTORY

The first computer monitors used cathode ray tubes (CRTs). Prior to the advent of home computers in the late 1970s, it was common for a video display terminal (VDT) using a CRT to be physically integrated with a keyboard and other components of the system in a single large chassis. The display was monochrome and far less sharp and detailed than on a modern flat-panel monitor, necessitating the use of relatively large text and severely limiting the amount of information that could be displayed at one time. High-resolution CRT displays were developed for specialized military, industrial and scientific applications but they were far too costly for general use.

Some of the earliest home computers (such as the TRS-80 and Commodore PET) were limited to monochrome CRT displays, but color display capability was already a standard feature of the pioneering Apple II, introduced in 1977, and the specialty of the more graphically sophisticated Atari 800, introduced in 1979. Either computer could be connected to the antenna terminals of an ordinary color TV set or used with a purpose-made CRT color monitor for optimum resolution and color quality. Lagging several years behind, in 1981 IBM introduced the Color Graphics Adapter, which could display four colors with a resolution of 320 x 200 pixels, or it could produce 640 x 200 pixels with two colors. In 1984 IBM introduced the Enhanced Graphics Adapter which was capable of producing 16 colors and had a resolution of 640 x 350.

By the end of the 1980s color CRT monitors that could clearly display 1024 x 768 pixels were widely available and increasingly affordable. During the following decade maximum display resolutions gradually increased and prices continued to fall. CRT technology remained dominant in the PC monitor market into the new millennium partly because it was cheaper to produce and offered viewing angles close to 180 degrees.[2] CRTs still offer some image quality advantages over LCD displays but improvements to the latter have made them much less obvious. The dynamic range of early LCD panels was very poor, and although text and other motionless graphics were sharper than on a CRT, an LCD characteristic known as pixel lag caused moving graphics to appear noticeably smeared and blurry.

Liquid crystal display


There are multiple technologies that have been used to implement liquid crystal displays (LCD). Throughout the 1990s, the primary use of LCD technology as computer monitors was in laptops where the lower power consumption, lighter weight, and smaller physical size of LCDs justified the higher price versus a CRT. Commonly, the same laptop would be offered with an assortment of display options at increasing price points: (active or passive) monochrome, passive color, or active matrix color (TFT). As volume and manufacturing capability have improved, the monochrome and passive color technologies were dropped from most product lines.

TFT-LCD is a variant of LCD which is now the dominant technology used for computer monitors.

The first standalone LCD displays appeared in the mid-1990s selling for high prices. As prices declined over a period of years they became more popular, and by 1997 were competing with CRT monitors. Among the first desktop LCD computer monitors were the Eizo L66 in the mid-1990s, the Apple Studio Display in 1998, and the Apple Cinema Display in 1999. In 2003, TFT-LCDs outsold CRTs for the first time, becoming the primary technology used for computer monitors. The main advantages of LCDs over CRT displays are that LCDs consume less power, take up much less space, and are considerably lighter. The now common active matrix TFT-LCD technology also has less flickering than CRTs, which reduces eye strain. On the other hand, CRT monitors have superior contrast, have superior response time, are able to use multiple screen resolutions natively, and there is no discernible flicker if the refresh rate is set to a sufficiently high value. LCD monitors now have very high temporal accuracy and can be used for vision research.

Wednesday, 6 July 2016

HARD DISK


A hard disk drive (HDD) is a data storage device used for storing and retrieving digital information using rapidly rotating disks (platters) coated with magnetic material. An HDD retains its data even when powered off. Data is read in a random-access manner, meaning individual blocks of data can be stored or retrieved in any order rather than sequentially. An HDD consists of one or more rigid ("hard") rapidly rotating disks (platters) with magnetic heads arranged on a moving actuator arm to read and write data to the surfaces.

Introduced by IBM in 1956, HDDs became the dominant secondary storage device for general-purpose computers by the early 1960s. Continuously improved, HDDs have maintained this position into the modern era of servers and personal computers. More than 200 companies have produced HDD units, though most current units are manufactured by Seagate, Toshiba and Western Digital. Worldwide disk storage revenues were US $32 billion in 2013, down 3% from 2012.

The primary characteristics of an HDD are its capacity and performance. Capacity is specified in unit prefixes corresponding to powers of 1000: a 1-terabyte (TB) drive has a capacity of 1,000 gigabytes (GB; where 1 gigabyte = 1 billion bytes). Typically, some of an HDD's capacity is unavailable to the user because it is used by the file system and the computer operating system, and possibly inbuilt redundancy for error correction and recovery. Performance is specified by the time required to move the heads to a track or cylinder (average access time) plus the time it takes for the desired sector to move under the head (average latency, which is a function of the physical rotational speed in revolutions per minute), and finally the speed at which the data is transmitted (data rate).
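
As a rough worked example of the latency term (the 7,200 rpm figure is just a typical desktop spindle speed assumed for illustration, not a value from the text above):

public class HddLatency
{
   public static void main(String[] args)
   {
      double rpm = 7200;                               // assumed spindle speed
      double revolutionMs = 60_000.0 / rpm;            // one full revolution: about 8.33 ms
      double averageLatencyMs = revolutionMs / 2;      // on average the desired sector is half a turn away
      System.out.printf("Average rotational latency: %.2f ms%n", averageLatencyMs);   // about 4.17 ms
   }
}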

The two most common form factors for modern HDDs are 3.5-inch in desktop computers and 2.5-inch in laptops. HDDs are connected to systems by standard interface cables such as SATA (Serial ATA), USB or SAS (Serial attached SCSI) cables.

As of 2015, the primary competing technology for secondary storage is flash memory in the form of solid-state drives(SSDs). HDDs are the dominant medium for secondary storage due to advantages in price per unit of storage and recording capacity. However, SSDs are replacing HDDs where speed, power consumption and durability are more important considerations.

CMOS BATTERY (use, working, history, replacement, etc.)

Nonvolatile BIOS memory refers to a small memory on PC motherboards that is used to store BIOS settings. It was traditionally called CMOS RAM because it used a volatile, low-power complementary metal-oxide-semiconductor (CMOS) SRAM (such as the Motorola MC146818 or similar) powered by a small battery when system power was off (called the CMOS battery).
The term remains in wide use but it has grown into a misnomer: nonvolatile storage in contemporary computers is often in EEPROM or flash memory (like the BIOS code itself); the remaining usage for the battery is then to keep the real-time clock (RTC) going. The typical NVRAM capacity is 512 bytes, which is generally sufficient for all BIOS settings. The CMOS RAM and the real-time clock have been integrated as a part of the southbridge chipset and may not be a standalone chip on modern motherboards.

CMOS battery

Type CR2032 button cell, most common CMOS battery.
The memory battery (aka motherboard, CMOS, real-time clock (RTC), or clock battery) is generally a CR2032 lithium coin cell. These cells last two to ten years, depending on the type of motherboard, ambient temperature and the length of time that the system is powered off, while other common cell types can last significantly longer or shorter periods, such as the CR2016, which will generally last about 40% less than the CR2032. Higher temperatures and longer power-off time will shorten cell life. When replacing the cell, the system time and CMOS BIOS settings may revert to default values. This may be avoided by replacing the cell with the power supply master switch on. On ATX motherboards, this will supply 5V power to the motherboard even if it is apparently "switched off", and keep the CMOS memory energized. In general one should not work on a computer that is powered.
Some computer designs have used non-button cell batteries, such as the cylindrical "1/2 AA" used in the Power Mac G4 as well as some older IBM PC compatibles, or a 3-cell NiCd CMOS battery that looks like a "barrel" (common in Amigas and older IBM PC compatibles), which serves the same purpose.
With computers whose cases are not easily accessible, you may need to disconnect cables, remove drives, or remove other parts of the computer to get full access to the CMOS battery.
First-4-screws replacement
First-4-screws CMOS battery replacement means that you only need to open the first four screws of the laptop to replace the CMOS battery. Usually the keyboard does not need to be moved.
Extension cord
A cable terminated with a 2 pin Molex connector plug can be used as an electrical extension cord, for easy access when replacing the CMOS battery (so the battery can be placed somewhere more easily accessible).

Rechargeable CMOS battery or capacitors

Asus Eee PC series Models 1005ha 1005hab 1008ha and others use Varta ML1220 or equivalent Maxell, Sanyo and Panasonic ML1220 lithium coin cell rechargeable batteries, terminated with a 2 pin Molex connector plug.

Capacitors

Rather than using a battery, heavy duty capacitors can be used as an alternative. They would be connected where the original NiCd / NiMH battery goes.

Resetting the CMOS settings

To access the BIOS setup when the machine fails to operate, occasionally a drastic move is required. In older computers with battery-backed RAM, removal of the battery and short circuiting the battery input terminals for a while did the job; in some more modern machines this move only resets the RTC. Some motherboards offer a CMOS-reset jumper or a reset button. In yet other cases, the EEPROM chip has to be desoldered and the data in it manually edited using a programmer. Sometimes it is enough to ground the CLK or DTA line of the I²C bus of the EEPROM at the right moment during boot; this requires some precise soldering on SMD parts. If the machine lets one boot but does not want to let the user into the BIOS setup, one possible recovery is to deliberately "damage" the CMOS checksum by doing direct port writes using DOS debug.exe, corrupting some bytes of the checksum-protected area of the CMOS RAM; at the next boot, the computer typically resets its settings to factory defaults.

How to ROOT android phones

What is rooting?


Rooting is jailbreaking for Androids and allows users to dive deeper into a phone’s sub-system. Essentially, it’ll allow you to access the entire operating system and be able to customize just about anything on your Android. With root access, you can get around any restrictions that your manufacturer or carrier may have applied. You can run more apps, overclock or underclock your processor, and replace the firmware.
The process requires users to back up the current software and then flash (install) a new custom ROM (a modified version of Android).

Why would you root?

One of the most obvious incentives to root your Android device is to rid yourself of the bloatware that’s impossible to uninstall. You’ll be able to set up wireless tethering, even if it has been disabled by default. Additional benefits include the ability to install special apps and flash custom ROMs, each of which can add extra features and streamline your phone or tablet’s performance. A lot of people are tempted by the ability to completely customize the look of their phones. You can also manually accept or deny app permissions.
You won’t find a lot of amazing must-have apps when you root, but there are enough to make it worthwhile. For example, some apps allow you to automatically backup all of your apps and all of their data, completely block advertisements, create secure tunnels to the Internet, overclock your processor, or make your device a wireless hotspot.
Related: How to disable Android apps

Why wouldn’t you root?

There are essentially three potential cons to rooting your Android.
  • Voiding your warranty: Some manufacturers or carriers will use rooting as an excuse to void your warranty. It’s worth keeping in mind that you can always unroot. If you need to send the device back for repair, simply flash the original backup ROM you made and no one will ever know that it was rooted.
  • Bricking your phone: Whenever you tamper too much, you run at least a small risk of bricking your device. The obvious way to avoid it happening is to follow instructions carefully. Make sure that the guide you are following works for your device and that any custom ROM you flash is designed specifically for it. If you do your research and pay attention to feedback from others, bricking should never occur.
  • Security risks: Rooting may introduce some security risks. Depending on what services or apps you use on your device, rooting could create a security vulnerability. For example, Google refuses to support the Google Wallet service for rooted devices.

How to root your Android

Two recent rooting programs that have garnered some attention in the past few months are Towelroot and Kingo Root. Both will root your device in the time it takes to brush your teeth. However, neither rooting program is compatible with every Android device. Here’s Kingo’s list of compatible devices.
If your phone is not compatible with these programs, you’ll have to spend a little time researching ways to root on Android forums. The best place to start is the XDA Developers Forum. Look for a thread on your specific device and you’re sure to find a method that has worked for other people. It’s worth spending some time researching the right method for your device.

Preparation for root

Make sure your device is fully charged before you begin. You’ll also need to turn USB debugging on. On Android 4.2 you’ll enable USB debugging by going to Settings > About Phone > Developer Options and then checking the box next to USB debugging.

Most Android rooting methods require you to install some software on your computer. It’s possible you’ll need to install the Android SDK. You may find other software is required. Make sure you follow the instructions on the XDA developers forum and install all of it before proceeding.

Unlock your bootloader

Before you get started, you will also need to unlock your bootloader. The bootloader is a program that determines which applications run in your phone’s startup process.
Unlocking your bootloader will allow you to customize your device. Manufacturers have responded to a demand for customization. Many of them have provided methods to help you unlock the bootloader on their website, though they are generally provided for developers, and they usually require you to sign up or register an account first.
  • Motorola bootloader unlock program.
  • HTC unlock bootloader page
  • Sony’s unlocking the bootloader instructions.
Some manufacturers and carriers don’t allow bootloader unlocking, but you can often find a way around that with some searching (try the XDA Developers forum).

Using Towelroot

One of the easiest methods of rooting is through Towelroot. This option works on most Android devices (it was designed to root the AT&T Samsung Galaxy S5), but not all - specifically some Motorola and HTC devices. Unlike other rooting programs that require downloading and running a program on your computer, Towelroot will root your device by simply downloading and running the app. No computer needed. However, Towelroot will only work with devices that have a kernel build date earlier than June 3, 2014.

To use Towelroot, you’ll have to enable your device to install apps from unknown sources. This can be accessed by clicking on Settings>Security> Unknown Sources. Now you’ll be able to download apps from outside the Google Play store.
Now go to Towelroot in your phone’s browser and click on the Lambda symbol. For more information check out Gadget Hacks’ youtube video.

Using Kingo Android Root

The Windows-based Kingo Android Root is one of the easiest ways to root your Android device. First, check to see if your device is compatible with Kingo. Their site provides a list of compatible devices. Then, download Kingo Android Root and enable the USB debugging mode on your phone.
Once you’ve enabled USB debugging mode on your phone, run the program on your PC and connect your Android to your PC with a USB cord. The program should detect your device and a message asking if you’d like to root will appear. Select “root” and then hang tight. Kingo will only take a few minutes to grant super user privileges.

Rooting forums

No other mobile operating system parallels the diversity of Android OS. For this reason, there’s no universal way to root your device. If the above two options fail, don’t fret. There is likely a guide on how to root your specific device available somewhere online. Generally you can find a guide to your device on forums such as the XDA Developers forum and Phandroid Forums.
Once you have found the right guide for your phone or tablet, it’s simply a case of working through the listed steps methodically. It can be a complicated procedure and it can take a while. Here’s an example guide for rooting the Samsung Galaxy S4. It can appear intimidating at first glance, but provided you follow it step-by-step, it should be a pain-free process. You can post questions in the XDA Developers forum if you run into trouble.

Download Root Checker

You’ll need to download another app to make sure your device has been successfully rooted. There are several apps available on the Google Play store that, when downloaded, will tell you if you have super-user permission. Root Checker is a popular one. Simply downloading and running the app will tell you if your phone has super-user permissions.

Install a root management app

Rooting will make your phone more vulnerable to security threats. Installing a root management app will give you more peace of mind. Normally, every app that requires rooted privileges will ask for your approval. This is where root management apps, such as SuperSU, come in. SuperSU lets you allow or deny apps’ requests for super-user permission. It will then keep track of the permitted apps and automatically grant permission the next time you use the app. SuperSU will also keep track of how many times an app requests root access.

Unrooting your Android

For all the good that rooting offers, you may want to go back to the way things were. SuperSU allows users to unroot phones by simply going into the app’s settings and selecting the full unroot option.

To root or not to root

Gaining full root access to your Android device can be thrilling, especially if you want to tinker with settings and customize your device. How much it changes your experience depends largely on the device you have. If you have a shuttered device, like a Kindle Fire tablet, then it’s a great way to get the full Android experience. The potential benefits for all Android users include improved battery life, root-only apps, custom ROMs, overclocking, an end to bloatware, improved performance, and the ability to upgrade your phone when you want. If you aren’t excited at the prospect of any of these things, rooting probably isn’t for you.


SPEAKERS

Computer speakers, or multimedia speakers, are speakers external to a computer that disable the lower-fidelity built-in speaker. They often have a low-power internal amplifier. The standard audio connection is a 3.5 mm (approximately 1/8 inch) stereo phone connector, often color-coded lime green (following the PC 99 standard) for computer sound cards. A few use an RCA connector for input. There are also USB speakers which are powered from the 5 volts at 500 milliamps provided by the USB port, allowing about 2.5 watts of output power. Computer speakers were introduced by Altec Lansing in 1990.
Computer speakers range widely in quality and in price. The computer speakers typically packaged with computer systems are small, plastic, and have mediocre sound quality. Some computer speakers have equalization features such as bass and treble controls.
The internal amplifiers require an external power source, usually an AC adapter. More sophisticated computer speakers can have a subwoofer unit, to enhance bass output, and these units usually include the power amplifiers both for the bass speaker, and the small satellite speakers.
Some computer displays have rather basic speakers built-in. Laptops come with integrated speakers. Restricted space available in laptops means these speakers usually produce low-quality sound.
For some users, a lead connecting computer sound output to an existing stereo system is practical. This normally yields much better results than small low-cost computer speakers. Computer speakers can also serve as an economy amplifier for MP3 player use for those who wish to not use headphones, although some models of computer speakers have headphone jacks of their own.

Common features

A common computer icon representing a speaker
Features vary by manufacturer, but may include the following:
  • An LED power indicator.
  • A 3.5 mm headphone jack.
  • Controls for volume, and sometimes bass and treble.
  • A remote volume control or a device that uses the similar function of mouse scrolling for adjusting the volume.

Cost-cutting measures and technical compatibility

In order to cut the cost of computer speakers (unless designed for premium sound performance), speakers designed for computers often lack an AM/FM tuner and other built-in sources of audio. However, the male 3.5 mm plug can be jury-rigged with female 3.5 mm TRS phone connector to female stereo RCA adapters to work with stereo system components such as CD/DVD-Audio/SACD players (although computers have CD-ROM drives of their own with audio CD support), audio cassette players, turntables, etc.
Despite being designed for computers, computer speakers are electrically compatible with the aforementioned stereo components. There are even models of computer speakers that have stereo RCA input jacks. There are more recent stereo systems that include USB ports (for thumbdrives), SD card ports, etc.; however, low-end computer speakers tend to be powered from USB rather than offer USB power and data transfer (for audio) on their own, seeing as a computer can have USB ports and SD card slots to play audio from anyhow.

Tuesday, 5 July 2016

Virtual memory

In computing, virtual memory is a memory management technique that is implemented using both hardware and software. It maps memory addresses used by a program, called virtual addresses, into physical addresses in computer memory. Main storage as seen by a process or task appears as a contiguous address space or collection of contiguous segments. The operating system manages virtual address spaces and the assignment of real memory to virtual memory. Address translation hardware in the CPU, often referred to as a memory management unit or MMU, automatically translates virtual addresses to physical addresses. Software within the operating system may extend these capabilities to provide a virtual address space that can exceed the capacity of real memory and thus reference more memory than is physically present in the computer.
The primary benefits of virtual memory include freeing applications from having to manage a shared memory space, increased security due to memory isolation, and being able to conceptually use more memory than might be physically available, using the technique of paging.

Properties

Virtual memory makes application programming easier by hiding fragmentation of physical memory; by delegating to the kernel the burden of managing the memory hierarchy (eliminating the need for the program to handle overlays explicitly); and, when each process is run in its own dedicated address space, by obviating the need to relocate program code or to access memory with relative addressing.
Memory virtualization can be considered a generalization of the concept of virtual memory.

Usage

Virtual memory is an integral part of a modern computer architecture; implementations require hardware support, typically in the form of a memory management unit built into the CPU. While not necessary, emulators and virtual machines can employ hardware support to increase performance of their virtual memory implementations. Consequently, older operating systems, such as those for the mainframes of the 1960s, and those for personal computers of the early to mid-1980s (e.g. DOS), generally have no virtual memory functionality, though notable exceptions for mainframes of the 1960s include:
  • the Atlas Supervisor for the Atlas
  • MCP for the Burroughs B5000
  • MTS, TSS/360 and CP/CMS for the IBM System/360 Model 67
  • Multics for the GE 645
  • the Time Sharing Operating System for the RCA Spectra 70/46
The Apple Lisa is an example of a personal computer of the 1980s that features virtual memory.
Most modern operating systems that support virtual memory also run each process in its own dedicated address space. Each program thus appears to have sole access to the virtual memory. However, some older operating systems (such as OS/VS1 and OS/VS2 SVS) and even modern ones (such as IBM i) are single address space operating systems that run all processes in a single address space composed of virtualized memory.
Embedded systems and other special-purpose computer systems that require very fast and/or very consistent response times may opt not to use virtual memory due to decreased determinism; virtual memory systems trigger unpredictable traps that may produce unwanted "jitter" during I/O operations. This is because embedded hardware costs are often kept low by implementing all such operations with software (a technique called bit-banging) rather than with dedicated hardware.

History

In the 1940s and 1950s, all larger programs had to contain logic for managing primary and secondary storage, such as overlaying. Virtual memory was therefore introduced not only to extend primary memory, but to make such an extension as easy as possible for programmers to use. To allow for multiprogramming and multitasking, many early systems divided memory between multiple programs without virtual memory, such as early models of the PDP-10 via registers.
The concept of virtual memory was first developed by German physicist Fritz-Rudolf Güntsch at the Technische Universität Berlin in 1956 in his doctoral thesis, Logical Design of a Digital Computer with Multiple Asynchronous Rotating Drums and Automatic High Speed Memory Operation; it described a machine with six 100-word blocks of primary core memory and an address space of 1,000 100-word blocks, with hardware automatically moving blocks between primary memory and secondary drum memory. Paging was first implemented at the University of Manchester as a way to extend the Atlas Computer's working memory by combining its 16 thousand words of primary core memory with an additional 96 thousand words of secondary drum memory. The first Atlas was commissioned in 1962 but working prototypes of paging had been developed by 1959. In 1961, the Burroughs Corporation independently released the first commercial computer with virtual memory, the B5000, with segmentation rather than paging.
Before virtual memory could be implemented in mainstream operating systems, many problems had to be addressed. Dynamic address translation required expensive and difficult-to-build specialized hardware; initial implementations slowed down access to memory slightly. There were worries that new system-wide algorithms utilizing secondary storage would be less effective than previously used application-specific algorithms. By 1969, the debate over virtual memory for commercial computers was over; an IBM research team led by David Sayre showed that their virtual memory overlay system consistently worked better than the best manually controlled systems. The first minicomputer to introduce virtual memory was the Norwegian NORD-1; during the 1970s, other minicomputers implemented virtual memory, notably VAX models running VMS.
Virtual memory was introduced to the x86 architecture with the protected mode of the Intel 80286 processor, but its segment swapping technique scaled poorly to larger segment sizes. The Intel 80386 introduced paging support underneath the existing segmentation layer, enabling the page fault exception to chain with other exceptions without double fault. However, loading segment descriptors was an expensive operation, causing operating system designers to rely strictly on paging rather than a combination of paging and segmentation.

Paged virtual memory


Page tables
Nearly all implementations of virtual memory divide a virtual address space into pages, blocks of contiguous virtual memory addresses. Pages on contemporary systems are usually at least 4 kilobytes in size; systems with large virtual address ranges or amounts of real memory generally use larger page sizes.
Page tables are used to translate the virtual addresses seen by the application into physical addresses used by the hardware to process instructions; such hardware that handles this specific translation is often known as the memory management unit. Each entry in the page table holds a flag indicating whether the corresponding page is in real memory or not. If it is in real memory, the page table entry will contain the real memory address at which the page is stored. When a reference is made to a page by the hardware, if the page table entry for the page indicates that it is not currently in real memory, the hardware raises a page fault exception, invoking the paging supervisor component of the operating system.
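
A toy sketch of the address-splitting arithmetic for 4-kilobyte pages, with a single-level table held in a plain array (real MMUs use multi-level structures, so this is illustrative only):

public class PageTranslation
{
   static final int PAGE_SIZE = 4096;                  // 4 KB pages
   static final int OFFSET_BITS = 12;                  // log2(4096)

   // pageTable[virtualPage] holds the physical frame number, or -1 if the page is not in real memory
   static long translate(long virtualAddress, long[] pageTable)
   {
      long virtualPage = virtualAddress >>> OFFSET_BITS;      // the high bits select the page
      long offset = virtualAddress & (PAGE_SIZE - 1);         // the low 12 bits are unchanged
      long frame = pageTable[(int) virtualPage];
      if (frame == -1)
         throw new IllegalStateException("page fault: page " + virtualPage + " is not in real memory");
      return (frame << OFFSET_BITS) | offset;                 // physical frame plus the same offset
   }
}
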
Systems can have one page table for the whole system, separate page tables for each application and segment, a tree of page tables for large segments or some combination of these. If there is only one page table, different applications running at the same time use different parts of a single range of virtual addresses. If there are multiple page or segment tables, there are multiple virtual address spaces and concurrent applications with separate page tables redirect to different real addresses.

Paging supervisor

This part of the operating system creates and manages page tables. If the hardware raises a page fault exception, the paging supervisor accesses secondary storage, returns the page that has the virtual address that resulted in the page fault, updates the page tables to reflect the physical location of the virtual address and tells the translation mechanism to restart the request.
When all physical memory is already in use, the paging supervisor must free a page in primary storage to hold the swapped-in page. The supervisor uses one of a variety of page replacement algorithms such as least recently used to determine which page to free.

Pinned pages

Operating systems have memory areas that are pinned (never swapped to secondary storage). Other terms used are locked, fixed, or wired pages. For example, interrupt mechanisms rely on an array of pointers to their handlers, such as I/O completion and page fault. If the pages containing these pointers or the code that they invoke were pageable, interrupt-handling would become far more complex and time-consuming, particularly in the case of page fault interruptions. Hence, some part of the page table structures is not pageable.
Some pages may be pinned for short periods of time, others may be pinned for long periods of time, and still others may need to be permanently pinned. For example:
  • The paging supervisor code and drivers for secondary storage devices on which pages reside must be permanently pinned, as otherwise paging wouldn't even work because the necessary code wouldn't be available.
  • Timing-dependent components may be pinned to avoid variable paging delays.
  • Data buffers that are accessed directly by peripheral devices that use direct memory access or I/O channels must reside in pinned pages while the I/O operation is in progress because such devices and the buses to which they are attached expect to find data buffers located at physical memory addresses; regardless of whether the bus has a memory management unit for I/O, transfers cannot be stopped if a page fault occurs and then restarted when the page fault has been processed.
In IBM's operating systems for System/370 and successor systems, the term is "fixed", and such pages may be long-term fixed, or may be short-term fixed, or may be unfixed (i.e., pageable). System control structures are often long-term fixed (measured in wall-clock time, i.e., time measured in seconds, rather than time measured in fractions of one second) whereas I/O buffers are usually short-term fixed (usually measured in significantly less than wall-clock time, possibly for tens of milliseconds). Indeed, the OS has a special facility for "fast fixing" these short-term fixed data buffers (fixing which is performed without resorting to a time-consuming Supervisor Call instruction).
Multics used the term "wired". OpenVMS and Windows refer to pages temporarily made nonpageable (as for I/O buffers) as "locked", and simply "nonpageable" for those that are never pageable.

Virtual-real operation

In OS/VS1 and similar OSes, some parts of systems memory are managed in "virtual-real" mode, called "V=R". In this mode every virtual address corresponds to the same real address. This mode is used for interrupt mechanisms, for the paging supervisor and page tables in older systems, and for application programs using non-standard I/O management. For example, IBM's z/OS has 3 modes (virtual-virtual, virtual-real and virtual-fixed).

Thrashing

When paging and page stealing are used, a problem called "thrashing" can occur, in which the computer spends an unsuitably large amount of time transferring pages to and from a backing store, hence slowing down useful work. A task's working set is the minimum set of pages that should be in memory in order for it to make useful progress. Thrashing occurs when there is insufficient memory available to store the working sets of all active programs. Adding real memory is the simplest response, but improving application design, scheduling, and memory usage can help. Another solution is to reduce the number of active tasks on the system. This reduces demand on real memory by swapping out the entire working set of one or more processes.

Segmented virtual memory

Some systems, such as the Burroughs B5500, use segmentation instead of paging, dividing virtual address spaces into variable-length segments. A virtual address here consists of a segment number and an offset within the segment. The Intel 80286 supports a similar segmentation scheme as an option, but it is rarely used. Segmentation and paging can be used together by dividing each segment into pages; systems with this memory structure, such as Multics and IBM System/38, are usually paging-predominant, with segmentation providing memory protection.
In the Intel 80386 and later IA-32 processors, the segments reside in a 32-bit linear, paged address space. Segments can be moved in and out of that space; pages there can "page" in and out of main memory, providing two levels of virtual memory; few if any operating systems do so, instead using only paging. Early non-hardware-assisted x86 virtualization solutions combined paging and segmentation because x86 paging offers only two protection domains whereas a VMM / guest OS / guest applications stack needs three. The difference between paging and segmentation systems is not only about memory division; segmentation is visible to user processes, as part of memory model semantics. Hence, instead of memory that looks like a single large space, it is structured into multiple spaces.
This difference has important consequences; a segment is not a page with variable length or a simple way to lengthen the address space. Segmentation that can provide a single-level memory model in which there is no differentiation between process memory and file system consists of only a list of segments (files) mapped into the process's potential address space.
This is not the same as the mechanisms provided by calls such as mmap and Win32's MapViewOfFile, because inter-file pointers do not work when mapping files into semi-arbitrary places. In Multics, a file (or a segment from a multi-segment file) is mapped into a segment in the address space, so files are always mapped at a segment boundary. A file's linkage section can contain pointers for which an attempt to load the pointer into a register or make an indirect reference through it causes a trap. The unresolved pointer contains an indication of the name of the segment to which the pointer refers and an offset within the segment; the handler for the trap maps the segment into the address space, puts the segment number into the pointer, changes the tag field in the pointer so that it no longer causes a trap, and returns to the code where the trap occurred, re-executing the instruction that caused the trap. This eliminates the need for a linker completely and works when different processes map the same file into different places in their private address spaces.

Address space swapping

Some operating systems provide for swapping entire address spaces, in addition to whatever facilities they have for paging and segmentation. When this occurs, the OS writes those pages and segments currently in real memory to swap files. In a swap-in, the OS reads back the data from the swap files but does not automatically read back pages that had been paged out at the time of the swap out operation.
IBM's MVS, from OS/VS2 Release 2 through z/OS, provides for marking an address space as unswappable; doing so does not pin any pages in the address space. This can be done for the duration of a job by entering the name of an eligible main program in the Program Properties Table with an unswappable flag. In addition, privileged code can temporarily make an address space unswappable with a SYSEVENT Supervisor Call instruction (SVC); certain changes in the address space properties require that the OS swap it out and then swap it back in, using SYSEVENT TRANSWAP.