Thought-provoking notes on the latest happenings in operating systems, distributed operating systems, virtualization, hardware and software, parallel computation, and more...
One of the latest developments in Linux
The really difficult task of scheduling processes is being turned into an artificially intelligent scheduling algorithm. This means that the OS itself will learn which tasks it should run or hold back, instead of having those decisions hard-coded. This makes for a better experience, as the OS tunes itself specially to the user.
All of this is possible because Linux is an open-source OS.
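To make the idea concrete, here is a toy Python sketch (my own illustration, not the actual Linux code): the scheduler "learns" each task's behaviour by predicting its next CPU burst with an exponential average, then runs the task expected to finish soonest, the classic adaptive shortest-job-first technique. The task names and numbers are made up.

# Hypothetical sketch: a scheduler that "learns" which task to run next
# by predicting CPU bursts with an exponential average
# (tau_next = alpha * actual + (1 - alpha) * tau_prev).

class Task:
    def __init__(self, name, predicted=10.0):
        self.name = name
        self.predicted = predicted  # current burst prediction, in ms

    def observe(self, actual, alpha=0.5):
        # Update the prediction from the burst we actually measured.
        self.predicted = alpha * actual + (1 - alpha) * self.predicted

def pick_next(tasks):
    # Run the task expected to finish soonest (adaptive SJF).
    return min(tasks, key=lambda t: t.predicted)

tasks = [Task("editor"), Task("compiler"), Task("indexer")]
tasks[1].observe(40.0)  # the compiler just used a long burst
tasks[0].observe(2.0)   # the editor used a short one
print(pick_next(tasks).name)  # -> "editor"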
ProcessThreadsView is a small utility that displays extensive information about all threads of the process that you choose. The thread information includes the ThreadID, context switch count, priority, creation time, user/kernel time, number of windows, window title, start address, and more. When you select a thread in the upper pane, the lower pane displays the following information: strings found in the stack, stack module addresses, the call stack, and processor registers. ProcessThreadsView also allows you to suspend and resume one or more threads.
16BCE2047
SYED AFRAN AHMMAD
A1 Slot
Parallax is a distributed OS.
Parallax, a new operating system, implements scalable, distributed, and parallel computing to take advantage of the new generation of 64-bit multi-core processors. It uses the Distributed Intelligent Managed Element (DIME) network architecture, which incorporates a signaling network overlay and allows parallelism in resource configuration, monitoring, analysis, and on-the-fly reconfiguration based on workload variations, business priorities, and the latency constraints of the distributed software components. A workflow is implemented as a set of tasks organized in a directed acyclic graph (DAG) and executed by a managed network of DIMEs. These tasks, depending on user requirements, are programmed and executed as loadable modules in each DIME.
Parallax is implemented in assembly language at the lowest level for efficiency, and it provides a C/C++ API for higher-level programming.
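To illustrate the workflow model (a toy sketch in Python, not Parallax's real C/C++ API; the task names are made up), here is a DAG of tasks executed in dependency order, the way a managed network of DIMEs would dispatch them:

# Toy illustration of a DAG workflow: run each task only after all of
# its dependencies have completed (topological order). Python 3.9+.
from graphlib import TopologicalSorter

# Mapping: task -> set of tasks it depends on (hypothetical names).
dag = {
    "ingest":    set(),
    "transform": {"ingest"},
    "analyze":   {"transform"},
    "monitor":   {"ingest"},
    "report":    {"analyze", "monitor"},
}

for task in TopologicalSorter(dag).static_order():
    print("executing", task)  # a DIME would load and run this module here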
Tips to save your mobile's battery life!
Set your various messaging apps to "manual" for the polling or refresh frequency, just as a test, and you'll instantly extend your device's battery life by a significant amount. Once you see what a difference that makes, try re-enabling just the most important ones, and possibly reducing their polling frequency in the process.
Akschat Arya 16BCE0426
PC-MOS/386 has become the latest obsolete operating system to be released as open source.
PC-MOS/386 was first announced by The Software Link in 1986 and was released in early 1987. It was capable of working on any x86 computer (though the Intel 80386 was its target market). However, some later chips became incompatible because they didn't have the necessary memory management unit.
It had a dedicated following but also contained a couple of design flaws that made it slow and/or expensive to run. Add to that the fact that it had a Y2K-style date bug that manifested on 31 July 2012, after which any files created wouldn't work.
It's also very slow, even on modern machines. It is being reassembled for ARM so that it can be tested on a Raspberry Pi.
TEAM MEMBERS:
NIKITA BANGA 16BCE0099
SUMEDHA P UPADHYAY 16BCE0518
Our project focused on the emerging technology of fog computing. Tired of the cloud? It is becoming slower in terms of latency and response time. Fog computing reduces these problems by providing processing and storage near the device itself, which makes the whole system faster. This could be used in areas such as healthcare or traffic management, where a fast network is required.
HARSH GROVER
16BCE0924
SLOT-A1
DIFFERENCE BETWEEN CLOUD COMPUTING AND DISTRIBUTED COMPUTING
CLOUD COMPUTING:
The key to cloud computing, in my mind, is that I can treat computing resources (CPU, memory, etc.) as a commodity rather than as capital.
Traditionally, if I want to add some computing power to my organization, I need to go out and buy more computers, set them up, and maintain them. Cloud computing lets me grab extra computing power exactly when I need it, and then release it when I don't. Our cloud environment lets me add resources in seconds and release them just as quickly when I no longer need them. The solution we offer on top of that cloud computing infrastructure is Software-as-a-Service (SaaS). Things like Gmail are SaaS, not cloud computing, in my view.
DISTRIBUTED COMPUTING
Distributed computing just means I break up a problem so that I can have a whole bunch of computers work on it at the same time. UC Berkeley's BOINC project is an excellent example of this (and please consider signing up for it). The computers involved in BOINC and other distributed projects can be people's laptops, desktops, or servers. They can be installed in my office, be virtual servers leased from an ISP, or be virtual servers that are part of a "cloud". It matters not one bit where the computers come from. If I can install the distributed computing software on a computer, it can be part of the distributed solution.
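A tiny sketch of that "break up a problem" idea in Python, using local processes as stand-ins for the volunteer machines in a BOINC-style project (the work itself is a made-up computation):

# Minimal work-unit illustration: split a big job into chunks and farm
# them out to parallel workers (local processes standing in for the
# volunteer machines in a BOINC-style project).
from multiprocessing import Pool

def work_unit(chunk):
    lo, hi = chunk
    return sum(i * i for i in range(lo, hi))  # pretend this is real science

if __name__ == "__main__":
    # Split the range [0, 1_000_000) into four independent work units.
    chunks = [(i, i + 250_000) for i in range(0, 1_000_000, 250_000)]
    with Pool(4) as pool:
        partials = pool.map(work_unit, chunks)  # each worker gets one unit
    print(sum(partials))  # combine the partial results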
Registration number: 16BCE0191
Name: Sarvesh Samvedi
Slot: A1
It is well known that data transmission between two devices or processes is accomplished through packets, according to the rules of the TCP/IP protocol suite. This transmission involves all layers of the TCP/IP protocol suite, and the processes involved follow the same rules whether they are on the same device or on different devices. In our project, my partner and I put forward an algorithm through which data is transferred from one process to another, via a shared file, when both processes are on the same device. Our algorithm uses only the top two layers of the TCP/IP protocol suite: Application and Transport.
If the source and destination are on the same device, then using all the layers is a waste of time and resources. The optimal path is a direct connection between the transport layer of the source process and the transport layer of the destination process. In our project, we present an algorithm that transfers packets directly from the source's transport layer to the destination's transport layer when both processes are on the same device.
Thus we learnt that we can omit some layers of the TCP/IP protocol suite and still transmit data effectively when the source and destination are on the same device.
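For illustration only (this is not our algorithm, but a standard shortcut in the same spirit): Unix domain sockets already let two processes on one machine keep the familiar socket API while the kernel skips the IP and lower layers entirely. A minimal Python sketch, assuming a Unix-like system (the socket path is hypothetical):

# Illustration: Unix domain sockets let two processes on one machine
# talk through the kernel without touching the IP layer or any network
# interface at all.
import os, socket, threading, time

PATH = "/tmp/samehost_demo.sock"  # hypothetical socket path

def server():
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as srv:
        srv.bind(PATH)
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            print("server got:", conn.recv(1024).decode())

if os.path.exists(PATH):
    os.remove(PATH)
t = threading.Thread(target=server)
t.start()
time.sleep(0.2)  # crude: give the server a moment to start listening
with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as cli:
    cli.connect(PATH)
    cli.sendall(b"hello from the same device")
t.join()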
Padma Ma’am suggested that we use two instances of the same OS (Kali Linux, a flavour of the popular Debian distribution of Linux) running in different virtual machines on the same device and have them ping one another. We succeeded, and in the process learnt how transmission between two OS instances on the same device occurs.
Cloud Computing vs Virtualization
The easiest way to explain the distinction between virtualization and cloud computing is to say that the former is a technology, while the latter is a service whose foundation is formed by that technology. Virtualization can exist without the cloud, but cloud computing cannot exist without virtualization, at least not in its present form. The term cloud computing is then best used to refer to a situation in which "shared computing resources, software, or data are delivered as a service, on demand, through the Internet."
It’s easy to see where the confusion lies in telling the difference between cloud and virtualization technology. The fact that “the cloud” may well be the most overused buzzword since “web 2.0” notwithstanding, the two are remarkably similar in both form and function. What’s more, since they so often work together, it’s quite common for people to see clouds where there are none.
Saurabh Bhole
16BCE0937
As the sector of cloud computing prospers, we could create operating systems that run from a cloud server and stay simultaneously in sync with the local OS.
There are providers who offer an online OS.
For example: virtuallabs for VIT.
Going further, you could use an OS that stores all its functional files in the cloud, with those files set up for a global environment.
So you could use macOS or Linux in sync with your Windows.
This would make development, gaming, graphic design, etc. easy for users who prefer a specialized OS for professional purposes.
The challenge would be the synchronization of processes between multiple operating systems.
This is just an idea.
Feedback will be appreciated!
16BCE0787 Shruti Gupta
A revolutionary new idea in operating system kernel development, KARL (Kernel Address Randomized Link), developed by Theo de Raadt and deployed in OpenBSD, generates a new kernel binary every time the system is booted. The main concept is to divide the kernel binary into various chunks, save them as a 'template' (a relocatable binary), and, using a randomly generated number, place these chunks at varying locations, thus generating random links between the different pieces of code. This provides a great security advantage: an attacker who has found a vulnerability in such an open-source kernel still can't exploit it reliably, because they cannot determine where any specific piece of kernel code is located without first getting hold of part of the running kernel. Linux kernel 4.12 enabled the similar KASLR by default, and comparable technology exists for Windows 10.
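A loose sketch of the randomized-link idea in Python (my own illustration, not OpenBSD's actual build scripts; the object file names are made up): shuffle the kernel's relocatable chunks with a fresh random seed and link them in that order, so every build gets a unique layout.

# Conceptual sketch of KARL-style randomized linking: relink the
# kernel's object files in a random order so each resulting binary
# has a unique internal layout.
import random

objects = ["sched.o", "vm.o", "net.o", "fs.o", "dev.o"]  # hypothetical chunks

def randomized_link_order(objs, seed):
    rng = random.Random(seed)  # a fresh seed per build/boot
    shuffled = objs[:]
    rng.shuffle(shuffled)
    return shuffled

# Print the link command a build script might run for this boot's binary.
print("ld -o kernel " + " ".join(randomized_link_order(objects, seed=0xC0FFEE)))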
16BCE2074 Abhinav
Probably the most innovative and impactful technological advancement in the field of virtualization came in the form of Intel VT (and later VT-x) technologies. They introduced, in a way, CPU modes with higher privileges than the existing kernel mode (Ring 0). In x86, beyond Ring 0 lie more privileged realms of execution, where code is invisible to antivirus software, has unfettered access to hardware, and can trivially preempt and modify the OS. These modes, the hypervisor and System Management Mode, are often termed Ring -1 and Ring -2 because they are more powerful than the regular rings 0 through 3.
New instruction sets for virtualization made it possible to preempt and schedule entire operating systems, so a single CPU can run multiple OSs while each OS sees what appears to be its own dedicated hardware. This level of virtualization proved groundbreaking and paved the way for faster, cheaper cloud computing solutions and servers that can be deployed on the go, in bulk.
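As a small practical aside (a Linux-only Python sketch, my own illustration): the CPU advertises VT-x through the "vmx" CPUID feature flag (AMD's equivalent is "svm"), and the kernel exposes those flags in /proc/cpuinfo.

# Quick Linux-only check for hardware virtualization support: the kernel
# lists the CPUID feature flags in /proc/cpuinfo ("vmx" for Intel VT-x,
# "svm" for AMD-V).
def has_hw_virt(cpuinfo_path="/proc/cpuinfo"):
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                flags = line.split(":", 1)[1].split()
                return "vmx" in flags or "svm" in flags
    return False

print("hardware virtualization available:", has_hw_virt())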