Wednesday, October 12

IT Basics (260) - Autumn 2022 - Assignment 1


Q.No.1 Define assembly language. How does assembly language differ from machine language? Write the mnemonics for addition, subtraction, multiplication and division used in assembly language. (20)

Machine language is the lowest-level programming language; it can be represented only by 0s and 1s. In the early days, when programmers had to create a picture or show data on a computer screen, it was very difficult to do so using only binary digits (0s and 1s). For example, to write 120 in the computer system, its representation is 1111000. Machine language is therefore very difficult to learn, and assembly language was invented to overcome this problem.
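To check the example above, here is a minimal Python sketch (Python is used purely for illustration) that converts 120 to binary and back:

```python
# Verify the example from the text: 120 in binary is 1111000.
n = 120
binary = bin(n)           # '0b1111000' -- Python prefixes binary literals with 0b
print(binary[2:])         # 1111000, matching the representation in the text
print(int("1111000", 2))  # 120, converting back from binary to decimal
```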


Assembly language sits above machine language but below high-level languages, so it is an intermediary language. Instead of 0s and 1s, assembly languages use numbers, symbols and abbreviations called mnemonics. For example, the mnemonics for the four basic arithmetic operations are ADD (addition), SUB (subtraction), MUL (multiplication) and DIV (division); a small illustrative code sketch follows the comparison below. The main differences between machine language and assembly language are:

1. Machine language is understood only by computers. Assembly language is written for human beings and must be translated before a computer can execute it.

2. In machine language, data is represented only in binary (0s and 1s), hexadecimal or octal form. In assembly language, data and instructions can be represented with mnemonics such as MOV, ADD, SUB and END.

3. Machine language is very difficult for human beings to understand. Assembly language is easy to understand compared to machine language.

4. Modifications and error fixing are extremely difficult in machine language. In assembly language, they can be done readily.

5. Machine language is very difficult to memorize, which makes it impractical to learn. Assembly language is easy to memorize because alphabetic mnemonics are used.

6. Execution is fast in machine language because the data is already in binary format. Execution of assembly language is slower by comparison, since it must first be translated.

7. Machine language needs no translator, because it is already in machine-understandable form. Assembly language needs an assembler to convert its mnemonics into machine-understandable form.

8. Machine language is hardware dependent. Assembly language is likewise machine dependent and not portable.
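To make the idea of mnemonics concrete, below is a small, hypothetical Python sketch: a toy lookup table that maps the four arithmetic mnemonics named in the question to the operations they stand for. It illustrates the concept only and is not a real assembler; a real assembler translates mnemonics such as ADD, SUB, MUL and DIV into binary machine instructions.

```python
# Toy illustration (not a real assembler): each mnemonic is just a
# human-readable name for an operation the machine carries out.
MNEMONICS = {
    "ADD": lambda a, b: a + b,   # addition
    "SUB": lambda a, b: a - b,   # subtraction
    "MUL": lambda a, b: a * b,   # multiplication
    "DIV": lambda a, b: a // b,  # division (integer, as in most instruction sets)
}

def execute(mnemonic, a, b):
    """Look up a mnemonic and apply the operation it names."""
    return MNEMONICS[mnemonic](a, b)

print(execute("ADD", 6, 2))  # 8
print(execute("SUB", 6, 2))  # 4
print(execute("MUL", 6, 2))  # 12
print(execute("DIV", 6, 2))  # 3
```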

 

Q.No.2 Differentiate between high-level languages and low-level languages. List down at least five high-level languages and also explain which one is best and why.

Both high-level languages and low-level languages are types of programming languages.

The main difference between a high-level language and a low-level language is that a high-level language is easy for programmers to understand, interpret and compile, whereas a low-level language is easy for the machine to understand but hard for human beings.

Examples of high-level languages are C, C++, Java, Python and JavaScript. Of these, Python is often considered the best choice for general use, because its syntax is simple and readable, it is easy to learn, and it has a very large ecosystem of libraries.

Let’s see the differences between high-level and low-level languages:

1. A high-level language is a programmer-friendly language; a low-level language is a machine-friendly language.

2. A high-level language is less memory efficient; a low-level language is highly memory efficient.

3. A high-level language is easy to understand; a low-level language is tough to understand.

4. A high-level language is simple to debug; a low-level language is comparatively complex to debug.

5. A high-level language is simple to maintain; a low-level language is comparatively complex to maintain.

6. A high-level language is portable; a low-level language is non-portable.

7. A high-level language can run on any platform; a low-level language is machine-dependent.

8. A high-level language needs a compiler or interpreter for translation; a low-level language needs an assembler for translation.

9. High-level languages are widely used for programming; low-level languages are not commonly used nowadays.
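One way to see the gap between the two levels for yourself is Python's built-in dis module, which prints the low-level bytecode instructions that a single high-level line is translated into. (Bytecode targets Python's virtual machine rather than a real CPU, but the contrast is the same in spirit.)

```python
import dis

def average(a, b):
    # One readable high-level line...
    return (a + b) / 2

# ...becomes several low-level, machine-oriented instructions.
dis.dis(average)
# Typical output (exact opcodes vary by Python version):
#   LOAD_FAST a, LOAD_FAST b, BINARY_OP (+),
#   LOAD_CONST 2, BINARY_OP (/), RETURN_VALUE
```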

 

Q.No.3 Explain the following: (20)

a. Disk drives

A disk drive is a technology that enables the reading, writing, deleting and modifying of data on a computer storage disk. It is either a built-in or external component of a disk that manages the disk's input/output (I/O) operations.

A disk partition in a hard disk is also known as a disk drive, such as drive C and drive D, etc.

A disk drive is one of the most important computer components that helps users store and retrieve data from a disk. The nature and type of a disk drive vary depending on the underlying disk.

For example, a hard disk, which is known as a hard disk drive (HDD), is generally embedded within the disk itself. For floppy disks, an external component is installed within a computer and performs read/write (R/W) operations when a floppy disk is inserted.

In short, a disk drive is a physical drive in a computer capable of holding and retrieving information. Common types include hard disk drives (HDDs), solid-state drives (SSDs), optical (CD/DVD/Blu-ray) drives and floppy disk drives.
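As a small illustration of how software interacts with a disk drive, Python's standard library can report the capacity of the drive that holds a given path. This is a sketch; the path "/" is an assumption for Unix-like systems, and on Windows a root such as "C:\\" would be used instead.

```python
import shutil

# Query the disk drive (or partition) containing the given path.
# "/" works on Unix-like systems; on Windows use something like "C:\\".
total, used, free = shutil.disk_usage("/")

gib = 1024 ** 3  # bytes per gibibyte
print(f"Total: {total / gib:.1f} GiB")
print(f"Used:  {used / gib:.1f} GiB")
print(f"Free:  {free / gib:.1f} GiB")
```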

b. Scanners

A scanner is a device that captures images from photographic prints, posters, magazine pages and similar sources for computer editing and display.

Scanners work by converting the image on the document into digital information that can be stored on a computer. Optical character recognition (OCR) software can then turn scanned text into machine-editable text.

This process is done by a scanning head, which uses one or more sensors to capture the image as light or electrical charges.

The document scanner moves either the physical document or the scanning head, depending on the type of scanner. Then, the scanner processes the scanned image and produces a digital image that can be stored on a computer.

Scanners usually attach to a computer system and come with scanning software applications that let you resize and otherwise modify a captured image.

If a printer is hooked up to the computer, you could print a second hard copy of the scanned image and store it in digital format.

What types of scanners are available?

Modern scanners come in handheld, feed-in and flatbed types, and scan either in black-and-white only or in color.

Flatbed scanners are the most common type of scanner. They are called "flatbed" because the document is placed on a flat surface for scanning. Flatbed scanners can scan documents of various sizes and are generally more versatile than sheetfed scanners.

Sheetfed scanners are designed to scan documents fed into the scanner one at a time. Scanners with automatic document feeders are smaller and more portable than flatbed scanners and are often used in home offices or small businesses.

Handheld scanners are portable scanners that are smaller than flatbed scanners. They are designed for scanning documents on the go, such as newspaper articles or printed photos.

3D scanners are a bit different than traditional scanners in that they collect distance point measurements from a real-world object and translate them into a virtual 3D object.

However, it's also worth noting that scanners are embedded in other devices such as photocopiers, barcode scanners and fax machines used to make copies of documents and images.

In addition to the intended purpose of a scanner, a buyer will also need to know the type of image resolution they need from the scanner.

[Image: a printer and scanner combination (multifunction device), i.e. a printer with a built-in scanner.]

What is scanner resolution?

Image resolution refers to the number of pixels captured by the scanner sensor and is measured in dots per inch (dpi). The higher the dpi, the greater the scanner's ability to capture detail.

For example, a scanner with a resolution of 1200 dpi can capture 1200 pixels per inch of an image.
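A worked example of what such numbers mean in practice: the sketch below uses an assumed letter-size page (8.5 × 11 inches) and an assumed 300 dpi setting to compute how many pixels a scan produces and the raw size of an uncompressed 24-bit colour scan.

```python
# How many pixels does a scan produce at a given resolution?
width_in, height_in = 8.5, 11.0   # assumed letter-size page, in inches
dpi = 300                         # a common setting for document scanning

width_px = int(width_in * dpi)    # 2550 pixels
height_px = int(height_in * dpi)  # 3300 pixels
total_px = width_px * height_px   # 8,415,000 pixels (about 8.4 megapixels)

# Uncompressed size at 24-bit colour (3 bytes per pixel):
size_mb = total_px * 3 / (1024 ** 2)
print(f"{width_px} x {height_px} = {total_px:,} pixels, ~{size_mb:.1f} MB raw")
```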

Very high-resolution image scanners are used for scanning for high-resolution printing, but lower-resolution scanners are adequate for capturing high-quality images for computer display.

The scanner's resolution is determined by the number of sensors in the scanning head.

Manufacturers of scanners

Some major manufacturers of scanners include Epson, Hewlett Packard, Microtek and Relisys. These companies offer a variety of scanner types and resolutions from which to choose.

For example, Epson offers flatbed and handheld scanners, with resolutions ranging from 300 dpi to 12800 dpi.

Hewlett Packard offers sheetfed and flatbed scanners, with resolutions ranging from 1200 dpi to 4800 dpi.

Microtek manufactures only flatbed scanners, with resolutions ranging from 1200 dpi to 3200 dpi.

Relisys manufactures both sheetfed and flatbed scanners, with resolutions up to 2400 dpi.

When choosing a scanner, consider the type of scanner you need as well as the image resolution you require. Pricing varies depending on the brand, type, resolution and whether it is intended for personal or business use.

Decide on a budget first, then compare the features offered by different manufacturers to find the scanner that best meets your needs.

Q.No.4 Elaborate input and output devices with at least three examples of each. Also tell which type of device a mouse is.

An input device is any device that allows you to enter data into a computer and interact with it. Common input devices include keyboards, computer mice, touchpads and touchscreens. You also learned about the basics of digital cameras, scanners and readers such as radio-frequency identification (RFID), magnetic strip and OCR readers. Other input devices are video and audio input devices such as webcams and microphones, and biometric input devices such as fingerprint scanners. To answer the last part of the question directly: a mouse is an input device, since it is used to enter positional data and commands into the computer.

Output devices take the processed input from a computer and display it in a way that is easy for humans to understand. Screens are the main output devices of any computer. Liquid crystal displays (LCDs) and LED screens are the most popular types. Printers are another common type of output device. There are two main printer types, namely inkjet and laser printers.

Headsets and speakers are designed for audio output, with other output devices being fax machines, multifunction devices (which combine faxing, emailing and printing) and data projectors.

Processing components include hardware such as the:

Motherboard, which connects the components in a computer and houses the ports, such as the universal serial bus (USB), video graphics array (VGA) and high-definition multimedia interface (HDMI) ports to connect input and output devices.

Central processing unit (CPU), which receives and carries out the instructions inputted by the user.

Graphics processing unit (GPU), which makes the calculations and follows the instructions necessary to display images on a screen.

Storage devices are the computer components designed to keep (or store) data. This data can be the information needed to make the computer function, such as the operating system or basic input/output system (BIOS), or data created by the user, such as images, documents, text files and so on.

These components, called storage media or storage devices, are any piece of computing hardware used to keep or store data files. They can hold and store information permanently or temporarily and can be internal or external.

Internal storage media, such as hard drives and RAM, are inside a computer and part of it, while external hard drives and USB drives are outside a computer and can be removed easily and quickly.

2.1   Input devices

As you learned in Chapter 1, a computer works using the information processing cycle. Input devices are the key components of the first stage of the cycle, the input stage. Input devices are what we use to interact with a computer. These can be things such as keyboards and computer mice, touchpads and scanners. The combination of keyboard and mouse used to be the most common input device, but the rise of the smartphone has made the touchscreen the most popular and common input device in the modern age.

There has also been a rise in the use of alternative input devices, such as fingerprint and face recognition to unlock your smartphone, and speech-to-type devices that are used by people with physical challenges.

There are a number of input devices that you can use with computers. Table 2.1 lists these devices, their uses, and their advantages and disadvantages.

WHAT DETERMINES THE QUALITY OF THE IMAGE TAKEN BY A SCANNER OR CAMERA?

There are three main factors that determine the quality of the image taken by a scanner. These are:

1. Colour depth

2. Resolution

3. Dynamic range

Colour depth is also known as bit depth and refers to the number of bits used to indicate the colour of a single pixel. The higher the bit number, the better the colour depth. You can see this in the comparison described below.

The image on the left is in 32-bit colour while the image on the right is in 8-bit colour. In the image on the left, the details in the background are sharper and the colour of the leaf is deeper and more vibrant compared to the image on the right.
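The relationship between bit depth and the number of colours is a simple power of two: n bits can represent 2^n distinct values. A short sketch of the arithmetic:

```python
# Number of distinct values representable at a given bit depth: 2 ** bits
for bits in (1, 8, 16, 24, 32):
    print(f"{bits:2d}-bit colour: {2 ** bits:,} possible values")
# 1-bit gives 2 values (black and white), 8-bit gives 256,
# 24-bit gives 16,777,216 ("true colour"); in 32-bit images the
# extra 8 bits usually store transparency (alpha), not more colours.
```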

Resolution is the amount of detail an image can hold, and it is measured in pixels per inch (ppi) or dots per inch (dpi). These measurements show you how many dots or pixels are in a one-inch square (an inch is about 2.5 cm). The higher the ppi or dpi, the more information there is in the square. This means that the image will be of higher quality.

The final quality factor is the dynamic range. This measures the range of light the scanner can read and use to produce a range of tones and colours.

Camera quality is determined by three factors:

1. Resolution

2. Lens aperture

3. Focal length

Resolution is the amount of detail that a camera can capture. In digital cameras, resolution is measured in megapixels.

The lens aperture is the maximum amount that the lens can open. The wider it opens, the more light it can take in, which means that you need less light to take a good picture.

How much a camera can zoom is determined by its focal length range. The zoom is shown by a number and the times symbol (×). A zoom of 3× means that the longest focal length is three times the shortest focal length.
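To put these two numeric factors together, here is a small sketch; the sensor dimensions and focal lengths are assumed, illustrative values, not figures from any particular camera.

```python
# Megapixels from sensor dimensions (assumed, illustrative values).
width_px, height_px = 4000, 3000
megapixels = width_px * height_px / 1_000_000
print(f"{megapixels:.0f} MP")  # 12 MP

# Optical zoom from focal length: zoom = longest / shortest focal length.
shortest_mm, longest_mm = 24.0, 72.0   # assumed lens range in millimetres
print(f"{longest_mm / shortest_mm:.0f}x zoom")  # 3x, as in the example above
```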

Q.No.5 How many types of memories are there in a computer? Also name and explain which computer memory is static and volatile. (20)

Memory is the electronic holding place for the instructions and data a computer needs to reach quickly. It's where information is stored for immediate use. Memory is one of the basic functions of a computer, because without it, a computer would not be able to function properly. Memory is also used by a computer's operating system, hardware and software.

There are technically two types of computer memory: primary and secondary. The term memory is used as a synonym for primary memory or as an abbreviation for a specific type of primary memory called random access memory (RAM). This type of memory is located on microchips that are physically close to a computer's microprocessor.

If a computer's central processing unit (CPU) had to rely only on a secondary storage device, computers would become much slower. In general, the more memory (primary memory) a computing device has, the less frequently the computer must access instructions and data from slower (secondary) forms of storage.

[Image: how primary, secondary and cache memory relate to each other in terms of size and speed.]

Memory vs. storage

The concept of memory and storage can be easily conflated as the same concept; however, there are some distinct and important differences. Put succinctly, memory is primary memory, while storage is secondary memory. Memory refers to the location of short-term data, while storage refers to the location of data stored on a long-term basis.

Memory is most often referred to as the primary storage on a computer, such as RAM. Memory is also where information is processed. It enables users to access data that is stored for a short time. The data is only stored for a short time because primary memory is volatile, meaning it isn't retained when the computer is turned off.

The term storage refers to secondary memory and is where data in a computer is kept. An example of storage is a hard drive or a hard disk drive (HDD). Storage is nonvolatile, meaning the information is still there after the computer is turned off and then back on. A running program may be in a computer's primary memory when in use -- for fast retrieval of information -- but when that program is closed, it resides in secondary memory or storage.

How much space is available in memory and storage differs as well. In general, a computer will have more storage space than memory. For example, a laptop may have 8 GB of RAM while having 250 GB of storage. The difference in space is there because a computer will not need fast access to all the information stored on it at once, so allocating approximately 8 GB of space to run programs will suffice.

The terms memory and storage can be confusing because their usage today is not always consistent. For example, RAM can be referred to as primary storage -- and types of secondary storage can include flash memory. To avoid confusion, it can be easier to talk about memory in terms of whether it is volatile or nonvolatile -- and storage in terms of whether it is primary or secondary.

How does computer memory work?

When a program is open, it is loaded from secondary memory to primary memory. Because there are different types of memory and storage, an example of this could be a program being moved from a solid-state drive (SSD) to RAM. Because primary storage is accessed faster, the opened program will be able to communicate with the computer's processor at quicker speeds. The primary memory can be accessed immediately from temporary memory slots or other storage locations.

Memory is volatile, which means that data in memory is stored temporarily. Once a computing device is turned off, data stored in volatile memory will automatically be deleted. When a file is saved, it will be sent to secondary memory for storage.

There are multiple types of memory available to a computer. It will operate differently depending on the type of primary memory used, but in general, semiconductor-based memory is most associated with memory. Semiconductor memory will be made of integrated circuits with silicon-based metal-oxide-semiconductor (MOS) transistors.
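From a program's point of view, the volatile/nonvolatile distinction looks like this: a value held in a variable lives in RAM and vanishes when the program or computer stops, while a value written to a file survives on secondary storage. A minimal sketch (the file name is an arbitrary assumption):

```python
# In-memory (volatile): this value lives in RAM and is gone
# as soon as the program ends or the machine powers off.
note = "draft text held in primary memory"

# On disk (nonvolatile): "saving" copies the data to secondary storage,
# where it survives a restart.
with open("note.txt", "w") as f:   # file name is an arbitrary example
    f.write(note)

with open("note.txt") as f:        # still there after a restart
    print(f.read())
```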

Types of computer memory

In general, memory can be divided into primary and secondary memory; moreover, there are numerous types of memory when discussing just primary memory. Some types of primary memory include the following:

Cache memory. This temporary storage area, known as a cache, is more readily available to the processor than the computer's main memory source. It is also called CPU memory because it is typically integrated directly into the CPU chip or placed on a separate chip with a bus interconnect to the CPU. (A small software sketch of the caching idea appears after this list.)

RAM. The term is based on the fact that any storage location can be accessed directly by the processor.

Dynamic RAM. DRAM is a type of semiconductor memory that typically holds the data or program code that a computer processor needs to function.

Static RAM. SRAM retains data bits in its memory for as long as power is supplied to it. Unlike DRAM, which stores bits in cells consisting of a capacitor and a transistor, SRAM does not have to be periodically refreshed. SRAM is the answer to the second part of the question: it is the memory that is both static (it needs no refresh cycles) and volatile (its contents are lost when power is removed).

Double Data Rate SDRAM. DDR SDRAM is SDRAM that can theoretically improve memory clock speed to at least 200 MHz.

Double Data Rate 4 Synchronous Dynamic RAM. DDR4 RAM is a type of DRAM with a high-bandwidth interface and the successor to DDR2 and DDR3. DDR4 RAM has lower voltage requirements and higher module density, coupled with higher data transfer rates, and allows dual in-line memory modules (DIMMs) of up to 64 GB.

Rambus Dynamic RAM. DRDRAM is a memory subsystem that promised to transfer up to 1.6 billion bytes per second. The subsystem consists of RAM, the RAM controller, the bus that connects RAM to the microprocessor and devices in the computer that use it.

Read-only memory. ROM is a type of computer storage containing nonvolatile, permanent data that, normally, can only be read and not written to. ROM contains the programming that enables a computer to start up or regenerate each time it is turned on.

Programmable ROM. PROM is ROM that can be modified once by a user. It enables a user to tailor a microcode program using a special machine called a PROM programmer.

Erasable PROM. EPROM is PROM that can be erased and reused. Erasure is caused by shining an intense ultraviolet light through a window designed into the memory chip.

Electrically erasable PROM. EEPROM is a user-modifiable ROM that can be erased and reprogrammed repeatedly through the application of higher than normal electrical voltage. Unlike EPROM chips, EEPROMs do not need to be removed from the computer to be modified. However, an EEPROM chip must be erased and reprogrammed in its entirety, not selectively.

Virtual memory. A memory management technique where secondary memory can be used as if it were a part of the main memory. Virtual memory uses hardware and software to enable a computer to compensate for physical memory shortages by temporarily transferring data from RAM to disk storage.
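The caching idea described in this list, keeping recently used results somewhere faster than their original source, can be sketched in a few lines of Python. This is a software-level analogy, not CPU hardware: the dictionary plays the role of the small, fast cache in front of a slow lookup.

```python
import time

def slow_lookup(key):
    """Stand-in for fetching from slow main memory or disk."""
    time.sleep(0.5)          # simulate the latency of the slow source
    return key * 2

cache = {}                   # the small, fast "cache" in front of it

def cached_lookup(key):
    if key in cache:         # cache hit: answer comes back immediately
        return cache[key]
    value = slow_lookup(key) # cache miss: pay the slow-path cost once
    cache[key] = value       # keep the result for next time
    return value

cached_lookup(21)   # slow the first time (miss)
cached_lookup(21)   # instant the second time (hit)
```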

Timeline of the history and evolution of computer memory

In the early 1940s, memory was only available up to a few bytes of space. One of the more significant signs of progress during this time was the invention of acoustic delay line memory. This technology enabled delay lines to store bits as sound waves in mercury, and quartz crystals to act as transducers to read and write bits. This process could store a few hundred thousand bits. In the late 1940s, nonvolatile memory began to be researched, and magnetic-core memory -- which enabled the recall of memory after a loss of power -- was created. By the 1950s, this technology had been improved and commercialized and led to the invention of PROM in 1956. Magnetic-core memory became so widespread that it was the main form of memory until the 1960s.

 

The metal-oxide-semiconductor field-effect transistor (MOSFET), the basis of MOS semiconductor memory, was invented in 1959. This enabled the use of MOS transistors as elements for memory cell storage. MOS memory was cheaper and needed less power compared to magnetic-core memory. Bipolar memory, which used bipolar transistors, started being used in the early 1960s.

In 1961, Bob Norman proposed the concept of solid-state memory being used on an integrated circuit (IC) chip. IBM brought memory into the mainstream in 1965. However, users found solid-state memory to be too expensive to use at the time compared to other memory types. Other advancements during the early to mid-1960s were the invention of bipolar SRAM, Toshiba's introduction of DRAM in 1965 and the commercial use of SRAM in 1965. The single-transistor DRAM cell was developed in 1966, followed by a MOS semiconductor device used to create ROM in 1967. From 1968 to the early 1970s, N-type MOS (NMOS) memory also started to become popularized.

By the early 1970s, MOS-based memory started becoming much more widely used as a form of memory. In 1970, Intel had the first commercial DRAM IC chip. One year later, erasable PROM was developed and EEPROM was invented in 1972.
