Distributed Shared Memory Systems for Multi-threaded Networks

Table of contents

  1. Introduction
  2. A Deep Focus on DSM Systems
  3. Findings and Literature Review
  4. Conclusion and future work

In distributed systems, data sharing across networks of processes is supported by Distributed Shared Memory (DSM) systems. This paper concentrates on multi-threaded DSM systems, which support data communication and computational synchronization among threads. The distributed shared memory setting also forces a reconsideration of the issues that arise in designing a general thread system, such as load balancing, computational synchronization, and thread management. A multi-threaded DSM groups threads into bundles, distributes them across several computers, and executes them in parallel. The shared data are adjusted globally and allocated to the different thread bundles according to their access patterns. Multi-threading yields a better performance outcome and is more effective than single-threaded networks because network latency is masked by overlapping communication with computation.

Introduction

With the advancements in Distributed Shared Memory systems, they have become an alternative platform for parallel computing, and multi-threading has emerged as a promising and effective means of tolerating memory latency. Implementations are broadly divided into two approaches for sharing memory among processors: the hardware approach, which uses shared-memory machines, and the software approach, which provides the illusion of a virtual shared memory through a middleware layer. Shared memory is regarded as a simple yet efficient parallel programming model and is widely accepted for building parallel applications. Its main advantage is that it gives the programmer a convenient communication paradigm; however, years of research have shown that it is challenging to deliver the shared-memory illusion on large-scale systems. Although the hardware approach based on cache coherence has proved efficient, it does not scale well in terms of cost. Shared virtual memory, on the other hand, is a cost-effective way to provide the shared-memory abstraction on a network of computers with minor processing overhead.

For the most part, Distributed Shared Memory (DSM) systems and their memory coherence protocols have been employed to support multi-process computing, where the processes do not share a virtual address space and are assigned to different computers. Several new problems appear when this model is extended to the multi-threaded case. First, multi-threaded programs assume a shared virtual address space by default. On physically separate machines the address space and code segments are duplicated, but the global variables in the data segments must be shared both locally and remotely. Because Virtual Memory Management (VMM) in operating systems deals with pages rather than individual data items, unfavorable access patterns and variable placement may incur high communication frequency and volume. Second, multi-threaded programs use mutex locks to protect critical sections; in a distributed system these locks are no longer shared by the threads, so the traditional lock mechanism does not work. Third, most consistency protocols in existing shared virtual memory systems, such as TreadMarks, Home-based LRC, and Overlapped Home-based LRC, require the relationship between locks and data to be known in advance. Such information can be hard for compilers to obtain, particularly when the datum being accessed is reached through a pointer, so programmers generally have to handle these cases manually. This paper makes the following contribution: locality-based data distribution. The memory blocks holding global variables are restructured, and the data segments are replicated and dispatched to different locations according to the access pattern between threads and data; homing data on the host of the thread bundle that uses it most reduces communication frequency and volume.
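
As a rough illustration of locality-based data distribution (a hand-written sketch, not code from the system described; the names and the access-count matrix are assumptions), each block of global variables can be homed on the host whose thread bundle accesses it most often:

    #include <stddef.h>

    #define NUM_HOSTS  4
    #define NUM_BLOCKS 16

    /* access_count[h][b]: how often threads on host h touch global block b,
     * assumed to be gathered by a profiling pass or the preprocessor. */
    static unsigned access_count[NUM_HOSTS][NUM_BLOCKS];

    /* home[b]: the host chosen as the home of block b. */
    static int home[NUM_BLOCKS];

    /* Locality-based placement: each block of global variables is homed on
     * the host whose threads access it most, so most reads and writes stay
     * local and remote page traffic is reduced. */
    static void assign_homes(void)
    {
        for (int b = 0; b < NUM_BLOCKS; b++) {
            int best_host = 0;
            for (int h = 1; h < NUM_HOSTS; h++) {
                if (access_count[h][b] > access_count[best_host][b])
                    best_host = h;
            }
            home[b] = best_host;
        }
    }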

A Deep Focus on DSM Systems

A DSM system lets processes adopt a globally shared virtual memory. The DSM software provides the abstraction of globally shared memory, which permits any processor to access any data item without the programmer having to worry about where the data reside or how to obtain them. For programs with sophisticated parallelization strategies and complex data structures, this is a substantial relief: with a DSM system the programmer can concentrate on algorithm development rather than on communication handling and data management. Beyond ease of programming, a software DSM offers much the same programming environment as hardware distributed shared-memory multiprocessors, so programs developed for such multiprocessors can be adapted quickly. A program ported from a hardware shared-memory multiprocessor to a software DSM system may still need some changes, and the higher latency of a software DSM makes the locality of memory accesses more critical. In both environments, however, programmers can use the same algorithm design.
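
To make the programming model concrete, a minimal sketch of what such an interface might look like is given below. The function names (dsm_init, dsm_malloc, dsm_lock, dsm_barrier) are hypothetical and illustrative only, not the API of the system described; from the programmer's view, shared data are reached through ordinary pointers while the runtime decides where pages live:

    #include <stddef.h>

    /* Illustrative prototypes only; a real DSM runtime would supply these. */
    void  dsm_init(int argc, char **argv);   /* join the DSM runtime            */
    void *dsm_malloc(size_t size);           /* allocate globally shared memory */
    void  dsm_lock(int lock_id);             /* acquire a global mutex          */
    void  dsm_unlock(int lock_id);           /* release a global mutex          */
    void  dsm_barrier(int barrier_id);       /* synchronize all threads         */

    void example(void)
    {
        double *grid = dsm_malloc(1024 * sizeof *grid);

        dsm_lock(0);
        grid[0] += 1.0;      /* update shared data inside a critical section */
        dsm_unlock(0);

        dsm_barrier(0);      /* wait for all threads before reading results */
    }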

DSM systems offer a virtual shared memory on top of physically separate memory units. Most such systems choose to replicate data, because replication gives the best performance over a broad range of application parameters. Keeping the replicated copies consistent is the core task of a DSM system: the DSM software must control replication so that it presents a single image of shared memory. Traditional DSMs handle this adequately for multi-process systems. In the multi-threaded case the "multiple-writer" problem must also be handled, because different threads may need to modify different data items on the same page at the same time. Most DSM systems, like hardware cache coherence, use single-writer protocols: several readers may access a given page concurrently, but only one writer has exclusive access to a page, which acts as a critical section for modifications. The multiple-writer problem can be resolved by a home-based DSM if the relationship between the data and the computations is well defined.
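
One common way home-based systems such as TreadMarks-style LRC tolerate multiple writers on the same page is twinning and diffing: at the first write the runtime saves a copy (twin) of the page, and at synchronization time it compares the modified page with the twin and ships only the changed words to the page's home, where diffs from several writers are merged. The following C sketch of the diff step is illustrative and not taken from the paper:

    #include <stddef.h>
    #include <stdint.h>

    #define PAGE_SIZE      4096
    #define WORDS_PER_PAGE (PAGE_SIZE / sizeof(uint32_t))

    /* Compare a modified page against its twin (the copy saved at the first
     * write) and record each changed word as an (offset, value) pair.  A real
     * system would encode these pairs into a diff message and send it to the
     * page's home node. */
    static size_t make_diff(const uint32_t *twin, const uint32_t *page,
                            uint32_t *offsets, uint32_t *values)
    {
        size_t n = 0;
        for (size_t i = 0; i < WORDS_PER_PAGE; i++) {
            if (page[i] != twin[i]) {
                offsets[n] = (uint32_t)i;
                values[n]  = page[i];
                n++;
            }
        }
        return n;   /* number of modified words in the page */
    }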

Findings and Literature Review

For Thread Operations: The main focus of a conventional DSM is to handle communication between distributed processes running on different computers. When the hosts instead hold threads, data sharing becomes harder and thread synchronization becomes complicated, especially when these threads are split off from one large thread group in the original program. The threads should therefore be grouped into bundles according to the data items they access and their synchronization pattern.
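
A simple way to picture such bundling (a hypothetical greedy scheme, not the algorithm of the paper) is to represent each thread's page-access set as a bitmask and place threads with overlapping sets in the same bundle, so that their shared pages and locks stay on one host:

    #include <stdint.h>

    #define NUM_THREADS 8

    /* pages[t]: bitmask of shared pages touched by thread t, assumed to be
     * collected by profiling or by the preprocessor. */
    static uint64_t pages[NUM_THREADS];

    /* bundle[t]: the bundle each thread is assigned to. */
    static int bundle[NUM_THREADS];

    /* Greedy bundling: a thread joins the bundle of the first earlier thread
     * whose access set overlaps its own; otherwise it starts a new bundle. */
    static int make_bundles(void)
    {
        int next_bundle = 0;
        for (int t = 0; t < NUM_THREADS; t++) {
            bundle[t] = -1;
            for (int u = 0; u < t; u++) {
                if (pages[t] & pages[u]) {   /* access sets overlap */
                    bundle[t] = bundle[u];
                    break;
                }
            }
            if (bundle[t] < 0)
                bundle[t] = next_bundle++;
        }
        return next_bundle;   /* number of bundles created */
    }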

Threads provide a high degree of concurrency within a process and allow multiprocessor designs to be used with noticeable scale and effectiveness. When a thread migrates, the data linked to it moves with it, so most accesses remain local and the communication overhead is reduced; in this way the multi-threaded DSM resolves this and several related issues experienced by such distributed systems.

For Data Sharing and Distribution: To manage the globally shared data easily and effectively, the adopted approach is MigThread, which combines the shared variables into a single defined structure. A precompiler (preprocessor) performs this task. MigThread is an application-level checkpointing package that supports both process and thread computation. The preprocessor transforms the source code, defines rules for calculating the sizes of the structure and its access patterns, and inserts sprintf() calls to glue partial results together.
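
The effect of such a preprocessing pass might look roughly like the following. This is a hand-written illustration, not MigThread's actual output: scattered global variables are gathered into one structure so the runtime can place, replicate, and ship the whole block as a unit and compute its size and layout with fixed rules.

    /* Before preprocessing: globals scattered through the data segment. */
    int    counter;
    double weights[256];
    char   status;

    /* After preprocessing (illustrative only): the shared globals are
     * collected into one structure that the DSM runtime treats as a single
     * relocatable block. */
    struct shared_globals {
        int    counter;
        double weights[256];
        char   status;
    };
    static struct shared_globals g_shared;

    /* Accesses are rewritten to go through the structure, e.g.
     * counter++  becomes  g_shared.counter++  */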

For Lock Acquisition: When the local host is the home of a mutex lock, lock acquisition works exactly as in an ordinary multi-threaded program and involves no communication. If the lock's home is on a remote machine, however, acquiring it incurs a page fault: a SIGSEGV signal is generated, and the SIGSEGV handler examines the data tag to determine the lock's home and sends a request to the remote home node for the page containing the lock. When the requested page arrives, mprotect() is called to grant local access from then on. If the lock's home is on a remote machine but the containing page has already been fetched, i.e. the lock can be accessed locally, no SIGSEGV signal is generated; the local working thread still has to apply for the remote mutex lock explicitly, and its corresponding stub on the remote machine issues the lock request there.
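
A stripped-down sketch of the fault-driven page fetch described above is shown below. The helper fetch_page_from_home() is hypothetical and error handling is omitted; only the SIGSEGV/mprotect() mechanism itself follows the description in the text.

    #define _GNU_SOURCE
    #include <signal.h>
    #include <stdint.h>
    #include <sys/mman.h>
    #include <unistd.h>

    /* Hypothetical helper supplied by the DSM runtime: asks the remote home
     * node for the page containing 'page_addr' and copies it into local
     * memory. */
    extern void fetch_page_from_home(void *page_addr);

    static long page_size;

    /* SIGSEGV handler: the faulting address identifies the missing shared
     * page (for example, the page holding a remote lock).  The page is
     * fetched from its home node, then mprotect() grants local access so
     * later accesses to the same page proceed without faulting. */
    static void dsm_fault_handler(int sig, siginfo_t *info, void *ctx)
    {
        (void)sig; (void)ctx;
        uintptr_t addr = (uintptr_t)info->si_addr;
        void *page = (void *)(addr & ~((uintptr_t)page_size - 1));

        fetch_page_from_home(page);
        mprotect(page, (size_t)page_size, PROT_READ | PROT_WRITE);
    }

    static void install_dsm_handler(void)
    {
        struct sigaction sa = {0};
        page_size = sysconf(_SC_PAGESIZE);
        sa.sa_sigaction = dsm_fault_handler;
        sa.sa_flags = SA_SIGINFO;
        sigemptyset(&sa.sa_mask);
        sigaction(SIGSEGV, &sa, NULL);
    }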

Conclusion and future work

This paper proposes global shared-memory access using multi-threaded operation. Threads move together with the data linked to them, so most accesses remain local, communication overhead is reduced, and the multi-threaded DSM thereby resolves this class of issues experienced by distributed systems. Globally assigned variables are allocated to different computers according to the data items they hold, which further reduces communication cost. The findings demonstrate the effectiveness of the approach and show that multi-threaded applications perform better than single-threaded ones because network latency is masked by overlapping communication with computation. Although the findings show that the system works effectively, its performance still has room for improvement: because communication channels are set up whenever they are needed, runtime execution may slow down. Future work should overcome this issue to achieve smoother runtime execution and should also conduct experiments on real-time applications.
