Distributed Real-time Managed System

The emerging realm of mobile and embedded cloud computing builds on advances in computing and communication for mobile devices and sensors, providing a means of executing distributed, real-time and embedded (DRE) systems. These mobile devices serve as computing resources in space missions: satellite clusters provide an active environment for launching and controlling distributed mission applications, e.g. NASA’s Edison Demonstration of SmallSat Networks. Consider a cluster of satellites that runs software applications distributed across the satellites. The Cluster Flight Application (CFA) governs a satellite’s flight and must respond to emergency commands. Alongside the CFA run Image Processing Applications (IPAs), which use the satellites’ sensors and CPU resources; their security privileges differ, and their access to sensor data is controlled. Sensitive data should not be shared with the IPAs; rather, it should be compartmentalized unless sharing is explicitly permitted. These applications should be isolated from each other to prevent faults caused by lifecycle changes, yet when an application is dormant, CPU resources should not be wasted because of that isolation.

Temporal and spatial partitioning of processes is a technique for implementing strict application isolation. Spatial partitioning provides a hardware-supported, physically separated memory address space for each process, while temporal partitioning gives each partition a fixed interval of CPU time within a cyclic repetition. Such a partitioning scheme is usually shaped by a static schedule, and altering that schedule would require a system reboot.
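A fixed schedule of this kind is commonly expressed as a table of partition windows inside a repeating major frame. The following C sketch is purely illustrative; the structure, field names, and numbers are assumptions, not part of any specific standard or of DREAMS:

    /* Minimal sketch of a static, cyclically repeated partition schedule:
     * the major frame repeats every major_frame_us microseconds and each
     * window grants one partition exclusive CPU time for a fixed slice. */
    struct partition_window {
        int      partition_id;   /* partition that owns this slice          */
        unsigned offset_us;      /* start offset from the major frame start */
        unsigned duration_us;    /* length of the slice                     */
    };

    static const unsigned major_frame_us = 250000;  /* 250 ms major frame */
    static const struct partition_window schedule[] = {
        { .partition_id = 0, .offset_us = 0,      .duration_us = 100000 },
        { .partition_id = 1, .offset_us = 100000, .duration_us = 100000 },
        { .partition_id = 2, .offset_us = 200000, .duration_us = 50000  },
    };

Because the table is fixed at configuration time, changing any window normally means rebuilding the whole schedule, which is exactly the rigidity DREAMS tries to avoid.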

The Distributed Real-Time Managed System (DREAMS) architecture was developed to address these needs.

Mixed-criticality systems and partitioning operating systems are the two domains that inspired this approach. A mixed-criticality computing system hosts multiple criticality levels on a single shared hardware platform; the distinct levels are motivated by safety as well as security concerns. Vestal argued that criticality levels directly impact task parameters, especially the worst-case execution time (WCET): in his framework, every task has a maximum criticality level and a WCET estimate for each level up to that maximum, and a task is excluded from the analyzed task set at levels greater than its maximum. Increasing criticality levels result in a more conservative verification process. Vestal extended the response-time analysis of fixed-priority scheduling to mixed-criticality task sets; these results were later improved by Baruah et al., who proposed fixed-priority single-processor scheduling of mixed-criticality tasks with optimal priority assignment and response-time analysis. Partitioning operating systems have been applied in the avionics, automotive, and cross-industry domains. They give applications shared access to critical system resources on an integrated computing platform. These applications are owned by different security domains and have divergent safety-critical influences on the system. Partitioning at the system level prevents unwanted interference between applications and provides genuine protection in both the spatial and the temporal domain: spatial partitioning ensures privacy among applications on memory devices, and temporal partitioning guarantees each application access to CPU resources.
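To make Vestal’s model concrete, the C sketch below (with hypothetical names, not taken from his paper or from DREAMS) records one WCET estimate per criticality level up to a task’s own level, the higher-level estimate being the more conservative one:

    /* Illustrative Vestal-style mixed-criticality task record: one WCET
     * estimate per assurance level, with the high-assurance estimate at
     * least as large as the low-assurance one. */
    enum assurance { LO = 0, HI = 1, NUM_LEVELS = 2 };

    struct mc_task {
        enum assurance level;          /* maximum criticality of the task        */
        unsigned period_us;            /* release period                         */
        unsigned wcet_us[NUM_LEVELS];  /* wcet_us[LO] <= wcet_us[HI]             */
        unsigned priority;             /* fixed priority for response-time tests */
    };

    /* Example: a HI-criticality task analysed with a 2 ms WCET at LO
     * assurance and a 5 ms WCET at HI assurance. */
    static const struct mc_task sensor_task = {
        .level = HI, .period_us = 20000, .wcet_us = { 2000, 5000 }, .priority = 10,
    };

A response-time analysis at the LO level would use the 2 ms estimate for this task, while the HI-level analysis would use the 5 ms estimate and simply drop any task whose maximum level is LO.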

DREAMS Architecture:

DREAMS [9], [2], [11] is a distributed system architecture that consists of one or more computing nodes grouped into a cluster. It is conceptually similar to the recent Fog Computing architecture [12].

A) Partitioning Support:

The system guarantees isolation between actors by (a) providing a separate address space for each actor; (b) enforcing that an I/O device can be accessed by only one actor at a time; and (c) having the scheduler enforce temporal isolation between processes. Spatial isolation is implemented by the Memory Management Unit of the CPU, while temporal isolation is provided via ARINC-653 style temporal partitions implemented in the OS scheduler.
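For point (b), one way such exclusive device access could be enforced is an ownership table consulted on every acquisition attempt. This is a hypothetical sketch of the idea, not the actual DREAMS mechanism:

    /* Hypothetical sketch of exclusive I/O device ownership: a device is
     * bound to at most one actor at a time; acquisition by another fails. */
    #include <stdbool.h>

    #define NO_OWNER (-1)

    struct device_entry {
        const char *name;    /* e.g. a device node such as "/dev/camera0" */
        int owner_actor;     /* id of the owning actor, or NO_OWNER       */
    };

    static bool device_acquire(struct device_entry *dev, int actor_id)
    {
        if (dev->owner_actor != NO_OWNER && dev->owner_actor != actor_id)
            return false;    /* already held by a different actor */
        dev->owner_actor = actor_id;
        return true;
    }

    static void device_release(struct device_entry *dev, int actor_id)
    {
        if (dev->owner_actor == actor_id)
            dev->owner_actor = NO_OWNER;
    }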

B) Criticality Levels Supported by the DREAMS OS Scheduler:

The DREAMS OS scheduler can manage CPU time for tasks at three different criticality levels: Critical, Application, and Best Effort. Critical tasks provide kernel-level services and system-management services; they are scheduled based on their priority whenever they are ready.
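The three levels can be thought of as strictly ordered scheduling classes. In the hypothetical C sketch below (names are illustrative, not the real kernel identifiers), a numerically lower class always preempts a higher one:

    /* Illustrative ordering of the three DREAMS OS criticality levels:
     * Critical preempts Application, which preempts Best Effort. */
    #include <stdbool.h>

    enum drems_class {
        DREMS_CLASS_CRITICAL    = 0,  /* kernel-level and system management tasks */
        DREMS_CLASS_APPLICATION = 1,  /* mission application tasks, partitioned   */
        DREMS_CLASS_BEST_EFFORT = 2,  /* background work, runs only in idle time  */
    };

    /* A task of class a may preempt a task of class b iff a is more critical. */
    static inline bool class_preempts(enum drems_class a, enum drems_class b)
    {
        return a < b;
    }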

C) Multiple Partitions:

To support the different levels of criticality, we extend the run queue data structure of the Linux kernel. A run queue maintains a list of tasks eligible for scheduling. In a multicore system, this structure is replicated per CPU. In a fully preemptive mode, the scheduling decision is made by evaluating which task should be executed next on a CPU when an interrupt handler exits, when a system call returns, or when the scheduler function is explicitly invoked to preempt the current process.
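One plausible shape for such an extension, sketched here with hypothetical names rather than the actual kernel sources, is a per-CPU run queue that keeps one priority-indexed task list per criticality level:

    /* Hypothetical sketch of a per-CPU run queue extended with one
     * priority-indexed task list per criticality level. */
    #define DREMS_NUM_CLASSES 3    /* Critical, Application, Best Effort */
    #define DREMS_NUM_PRIOS   140  /* priority slots per class           */
    #define DREMS_NR_CPUS     4    /* illustrative core count            */

    struct drems_task;             /* task control block (details omitted) */

    struct task_list {
        struct drems_task *head;   /* first ready task at this priority slot */
    };

    struct drems_rq {
        struct task_list levels[DREMS_NUM_CLASSES][DREMS_NUM_PRIOS];
        unsigned current_partition;  /* temporal partition currently active */
    };

    /* Replicated per CPU, mirroring the stock Linux per-CPU run queues. */
    static struct drems_rq per_cpu_rq[DREMS_NR_CPUS];

The scheduling decision described above then amounts to consulting this structure for the CPU in question whenever an interrupt handler exits, a system call returns, or the scheduler is invoked explicitly.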

D) CPU Cap and Work-Conserving Behavior:

The schedulability of the Application-level tasks is constrained by the current load coming from the Critical tasks and by the temporal partitioning used on the Application level. Should the load of the Critical tasks exceed a threshold, the system will not be able to schedule tasks on the Application level. A formal analysis of the response time of the Application-level tasks is not provided in this paper; however, we describe the method we will use to address it, which builds on available results from prior work. The submitted load function determines the maximum load submitted to a partition by the task itself after its release, together with all higher-priority tasks belonging to the same partition.

In DREAMS OS, the CPU cap can be applied to tasks on the Critical and Application levels to provide scheduling fairness within a partition or hyperperiod. The CPU cap is enforced in a work-conserving manner, i.e., if a task has reached its CPU cap but there are no other available tasks, the scheduler will continue scheduling the task past its ceiling. In the case of Critical tasks, when the CPU cap is reached the task is not marked ready for execution unless (a) there is no other ready task in the system, or (b) the CPU cap accounting is reset. This behavior ensures that kernel tasks, such as those belonging to network communication, do not overload the system, for example during a denial-of-service attack. For tasks on the Application level, the CPU cap is specified as a percentage of the product of the partition duration, the number of major frames, and the number of available CPU cores. When an Application task reaches its CPU cap, it is not eligible to be scheduled again unless either (a) there are no Critical tasks to schedule and there are no other ready tasks in the partition, or (b) the CPU cap accounting has been reset.
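A minimal sketch of that work-conserving cap check, with hypothetical names and a simplified budget model rather than the real DREAMS OS accounting, might look like this:

    /* Hypothetical sketch of a work-conserving CPU-cap check. A task that
     * has exhausted its budget may still run if nothing else is ready. */
    #include <stdbool.h>

    struct capped_task {
        unsigned long long used_ns;   /* CPU time consumed since last reset     */
        unsigned long long cap_ns;    /* budget derived from the cap percentage */
        bool cap_enabled;
    };

    /* Budget for an Application task: cap% of (partition duration x number
     * of major frames x number of CPU cores), as described above. */
    static unsigned long long app_cap_ns(unsigned cap_percent,
                                         unsigned long long partition_ns,
                                         unsigned major_frames, unsigned cpus)
    {
        return (partition_ns * major_frames * cpus * cap_percent) / 100;
    }

    /* Work-conserving eligibility: an over-budget task runs only when the
     * CPU would otherwise be idle, or after its accounting has been reset. */
    static bool task_eligible(const struct capped_task *t, bool other_ready_tasks)
    {
        if (!t->cap_enabled || t->used_ns < t->cap_ns)
            return true;
        return !other_ready_tasks;    /* past the cap: only if nothing else */
    }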

E) Dynamic Major Frame Configuration:

During the configuration process, which can be repeated at any time without rebooting the node, the kernel receives the major frame structure; it contains a list of minor frames as well as the length of the hyperperiod, the partition periodicity, and the partition duration. Note that major frame reconfiguration can only be performed by an actor with suitable capabilities. More details on the DREAMS capability model can be found in [9]. Before the frames are set up, the process configuring the frame has to ensure that the following three constraints are met:

(C0) The hyperperiod must be the least common multiple of the partition periods.
(C1) The offset between the major frame start and the first minor frame of a partition must be less than or equal to the partition period: $(\forall p \in P)(O^{p}_{1} \le \varphi(p))$.
(C2) The time between any two consecutive minor frames of a partition must be equal to the partition period: $(\forall p \in P)(\forall k \in [1, N(p)-1])(O^{p}_{k+1} = O^{p}_{k} + \varphi(p))$.

Here P is the set of all partitions, N(p) is the number of minor frames assigned to partition p, $O^{p}_{k}$ is the offset of the k-th minor frame of partition p from the major frame start, $\varphi(p)$ is the period of partition p, and $\delta(p)$ is the duration of partition p.
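As an illustration of those checks, a configuration routine might validate an incoming major frame against C1 and C2 before installing it. The structures and names below are a hypothetical sketch, not the actual DREAMS kernel interface:

    /* Hypothetical sketch: validate a partition's minor frames against C1
     * (first offset within one period) and C2 (equal spacing of frames). */
    #include <stdbool.h>
    #include <stddef.h>

    struct minor_frame {
        unsigned long offset_us;          /* offset from the major frame start */
        unsigned long duration_us;        /* delta(p): duration of the slice   */
    };

    struct partition_schedule {
        unsigned long period_us;          /* phi(p): period of the partition   */
        size_t n_frames;                  /* N(p): number of minor frames      */
        const struct minor_frame *frames; /* ordered by offset                 */
    };

    static bool partition_frames_valid(const struct partition_schedule *p)
    {
        /* C1: the first minor frame must start within one partition period. */
        if (p->n_frames == 0 || p->frames[0].offset_us > p->period_us)
            return false;

        /* C2: consecutive minor frames must be exactly one period apart. */
        for (size_t k = 0; k + 1 < p->n_frames; k++) {
            if (p->frames[k + 1].offset_us != p->frames[k].offset_us + p->period_us)
                return false;
        }
        return true;
    }

A corresponding C0 check would verify that the configured hyperperiod equals the least common multiple of all partition periods before the new major frame is accepted.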

F) Main Scheduling Loop:

A periodic tick running at 250 Hz is used to ensure that a scheduling decision is triggered at least every 4 ms. This tick runs with the base clock of CPU0 and executes a procedure called Global tick in the interrupt context only on CPU0. After the global tick handles the partition switching, the function to get the next runnable task is invoked. This function combines the mixed-criticality scheduling with the temporal partition scheduling. For mixed-criticality scheduling, the Critical system tasks should preempt the Application tasks, which themselves should preempt the Best Effort tasks. This policy is implemented by the Pick_Next_Task subroutine, which is called first for the system partition. Only if there are no runnable Critical system tasks and the scheduler state is not inactive, i.e. the application partitions are allowed to run, will Pick_Next_Task be called for the Application tasks. Thus, the scheduler does not schedule any Application tasks during a major frame reconfiguration. Similarly, Pick_Next_Task will only be called for the Best Effort tasks if there are both no runnable Critical tasks and no runnable Application tasks.
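A compressed sketch of that cascade, reusing the illustrative class names from the earlier sketches and a hypothetical pick_next_task helper, could look as follows:

    /* Hypothetical sketch of the cascade performed after the global tick:
     * Critical tasks first, then Application tasks (only while partitions
     * are active, i.e. no major frame reconfiguration is in progress),
     * then Best Effort tasks. */
    #include <stdbool.h>
    #include <stddef.h>

    struct drems_rq;
    struct drems_task;
    enum drems_class { DREMS_CLASS_CRITICAL, DREMS_CLASS_APPLICATION, DREMS_CLASS_BEST_EFFORT };

    /* Returns the highest-priority runnable task of the given class, or NULL. */
    struct drems_task *pick_next_task(struct drems_rq *rq, enum drems_class cls);

    struct drems_task *schedule_next(struct drems_rq *rq, bool partitions_active)
    {
        struct drems_task *next = pick_next_task(rq, DREMS_CLASS_CRITICAL);
        if (next)
            return next;                   /* Critical tasks preempt everything */

        if (partitions_active) {           /* skipped during reconfiguration */
            next = pick_next_task(rq, DREMS_CLASS_APPLICATION);
            if (next)
                return next;
        }

        return pick_next_task(rq, DREMS_CLASS_BEST_EFFORT);
    }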

G) Pick Next Task and CPU Cap:

The Pick_Next_Task function returns either the highest-priority task from the current temporal partition (or from the system partition, as appropriate) or an empty list if there are no runnable tasks. If the CPU cap is disabled, the Pick_Next_Task algorithm returns the first task from the specified run queue. For the Best Effort class, the default algorithm of the Completely Fair Scheduler policy in the Linux kernel is used. If the CPU cap is enabled, the Pick_Next_Task algorithm iterates through the task list at the highest priority index of the run queue because, unlike in the stock Linux scheduler, tasks may have had their disabled bit set by the scheduler when it enforced their CPU cap.
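The iteration in the capped case can be sketched as follows; the structures and the helper name are hypothetical, chosen only to mirror the description above:

    /* Hypothetical sketch of Pick_Next_Task with the CPU cap enabled: scan
     * the priority slots of the current partition from highest to lowest
     * and skip tasks whose "disabled" bit was set when the cap was hit. */
    #include <stdbool.h>
    #include <stddef.h>

    struct drems_task {
        struct drems_task *next;  /* next task in the same priority slot          */
        bool disabled;            /* set by the scheduler when the CPU cap was hit */
    };

    struct prio_slot {
        struct drems_task *head;  /* ready tasks at this priority; slot 0 is highest */
    };

    struct drems_task *pick_next_capped(struct prio_slot *slots, size_t n_slots)
    {
        for (size_t prio = 0; prio < n_slots; prio++) {
            for (struct drems_task *t = slots[prio].head; t != NULL; t = t->next) {
                if (!t->disabled)
                    return t;     /* highest-priority task that still has budget */
            }
        }
        return NULL;              /* no eligible task in this partition */
    }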

Experiment: A 3-Node Satellite Cluster:

To demonstrate the DREMS platform, a multi-node experiment was created on a cluster of fanless computing nodes, each with a 1.6 GHz Intel Atom N270 processor and 1 GB of RAM. On these nodes, a cluster of three satellites was emulated, and each satellite ran the example applications described in Section I. Because the performance of the cluster flight control application is of interest, we explain the interactions between its actors below. The mission-critical cluster flight application (CFA) (Figure 5) consists of four actors: OrbitalMaintenance, TrajectoryPlanning, CommandProxy, and ModuleProxy.

[Figure: Cluster emergency response latency for satellites 1, 2, and 3 under three scenarios. Scenario 1 (hyperperiod 250 ms, application code utilization < 100%): mean latencies of about 37.2, 34.6, and 33.9 ms. Scenario 2 (hyperperiod 250 ms, utilization = 100%): about 39.1, 37.9, and 37.4 ms. Scenario 3 (hyperperiod 100 ms, utilization = 100%): about 36.3, 36.5, and 36.5 ms. (a) This is the time between reception of the scatter command by satellite 1 and the activation of the thrusters on each satellite, corresponding to the interactions from CommandProxy to ModuleProxy.]

This paper propounds the notion of managed distributed real-time and embedded (DRE) systems that are deployed in mobile computing environments. To that end, we described the design and implementation of a distributed operating system called DREAMS OS, focusing on a key mechanism: the scheduler. We verified the behavioral properties of the OS scheduler, focusing on temporal and spatial process isolation, safe operation with mixed criticality, precise control of process CPU utilization, and dynamic partition-schedule reconfiguration.
