Distributed systems and parallel computing
No matter how powerful individual computers become, there are still reasons to harness the power of multiple computational units, often spread across large geographic areas. Sometimes this is motivated by the need to collect data from widely dispersed locations (e.g., web pages from servers, or sensors for weather or traffic). Other times it is motivated by the need to perform enormous computations that simply cannot be done by a single CPU.
From our company’s beginning, Google has had to deal with both issues in our pursuit of organizing the world’s information and making it universally accessible and useful. We continue to face many exciting distributed systems and parallel computing challenges in areas such as concurrency control, fault tolerance, algorithmic efficiency, and communication. Some of our research answers fundamental theoretical questions, while other researchers and engineers build systems that operate at the largest possible scale, thanks to our hybrid research model.

Recent publications
While profile-guided optimizations (PGO) and link-time optimizations (LTO) have been widely adopted, post-link optimizations (PLO) languished until recently, when researchers demonstrated that late injection of profiles can yield significant improvements. However, the disassembly-driven, monolithic design of post-link optimizers faces scaling challenges with large binaries and is at odds...
David Li , Han Shen , Krzysztof Pszeniczny , Rahman Lavaee , Snehasish Kumar , Sriraman Madapusi Tallam
We describe our experience with PLB, a host-based load balancing design for modern networks. PLB randomly changes the paths of connections that experience congestion, preferring idle periods to minimize transport interactions. It does so by changing the IPv6 FlowLabel on the packets of a connection, which switches include as part of the ECMP flow hash. Across many hosts, this action drives down...
Abdul Kabbani , David J. Wetherall , Gautam Kumar , Junhua Yan , Kira Yin , Masoud Moshref , Mubashir Adnan Qureshi , Qiaobin Fu , Van Jacobson , Yuchung Cheng
SIGCOMM (2022)
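The abstract above describes repathing by rewriting the IPv6 flow label that switches feed into their ECMP hash. As a rough illustration only (not Google's implementation), here is a toy sketch of that decision logic in Python; the round-based interface, the congestion threshold, and the idle test are all assumptions made for this example:

```python
import random

class PlbRepather:
    """Toy model of PLB-style repathing: pick a new IPv6 flow label (an ECMP
    hash input) when a connection keeps seeing congestion, preferring to do
    so during an idle period to minimize interaction with the transport."""

    def __init__(self, congested_rounds_threshold=3, idle_gap_s=0.25):
        self.flow_label = random.getrandbits(20)     # IPv6 flow label is 20 bits
        self.congested_rounds = 0
        self.threshold = congested_rounds_threshold  # hypothetical knob
        self.idle_gap_s = idle_gap_s                 # hypothetical idle cutoff
        self.last_send_time = 0.0

    def on_round(self, now, saw_congestion):
        """Call once per RTT (or ACK batch) with a congestion indication."""
        self.congested_rounds = self.congested_rounds + 1 if saw_congestion else 0
        idle = (now - self.last_send_time) >= self.idle_gap_s
        if self.congested_rounds >= self.threshold and idle:
            # A new label changes the switches' ECMP hash, moving the whole
            # connection onto a (likely) different path.
            self.flow_label = random.getrandbits(20)
            self.congested_rounds = 0

    def on_send(self, now):
        """Record the send time and return the label to stamp on outgoing packets."""
        self.last_send_time = now
        return self.flow_label
```

In a real host stack this logic would sit next to the transport's congestion control, and the returned label would be written into the IPv6 header of the connection's packets.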
Numerical simulation often resorts to iterative in-place stencils such as the Gauss-Seidel or Successive Overrelaxation (SOR) methods. Writing high performance implementations of such stencils requires significant effort and time; it also involves non-local transformations beyond the stencil kernel itself. While automated code generation is a mature technology for image processing stencils,...
Mohammed Essadki , Bertrand Michel , Bruno Maugars , Oleksandr Zinenko , Nicolas Vasilache , Albert Cohen
CGO , IEEE (2023)
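For readers unfamiliar with these kernels, here is a minimal, unoptimized in-place SOR sweep for the 2-D Laplace equation (plain Python/NumPy; the grid size, boundary values, relaxation factor, and sweep count are illustrative, not taken from the paper). The in-place update, where each point immediately uses already-updated neighbors, is what makes such stencils awkward for automated code generators:

```python
import numpy as np

def sor_sweep(u, omega=1.5):
    """One in-place Successive Over-Relaxation sweep for the 2-D Laplace
    equation with fixed boundary values. Each interior point blends its old
    value with the Gauss-Seidel average of its (partly updated) neighbors."""
    n, m = u.shape
    for i in range(1, n - 1):
        for j in range(1, m - 1):
            gs = 0.25 * (u[i - 1, j] + u[i + 1, j] + u[i, j - 1] + u[i, j + 1])
            u[i, j] = (1.0 - omega) * u[i, j] + omega * gs
    return u

# Illustrative use: 64x64 grid, top edge held at 1.0, relax for 500 sweeps.
u = np.zeros((64, 64))
u[0, :] = 1.0
for _ in range(500):
    sor_sweep(u)
```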
Receiving the 2020 ACM-IEEE Eckert-Mauchly Award this past June was among the most rewarding experiences of my career. I am grateful to IEEE Micro for giving me the opportunity to share here the story behind the work that led to this award, a short version of my professional journey so far, as well as a few things I learned along the way.
Luiz André Barroso
IEEE Micro , vol. 41(02) (2021) , pp. 78-83
This book describes warehouse-scale computers (WSCs), the computing platforms that power cloud computing and all the great web services we use every day. It discusses how these new systems treat the datacenter itself as one massive computer designed at warehouse scale, with hardware and software working in concert to deliver good levels of internet service performance. The book details the...
Luiz André Barroso , Urs Hölzle , Parthasarathy Ranganathan
Morgan & Claypool Publishers (2018)
Some of our teams
AI fundamentals and applications; algorithms and optimization; graph mining; network infrastructure; system performance. Our researchers work across the world.
Together, our research teams tackle tough problems.
CS 261: Research Topics in Operating Systems (2021)
Some links to papers are links to the ACM’s site. You may need to use the Harvard VPN to get access to the papers via those links. Alternate links will be provided.
Meeting 1 (1/26): Overview
Operating system architectures
Meeting 2 (1/28): Multics and Unix
“Multics—The first seven years” , Corbató FJ, Saltzer JH, and Clingen CT (1972)
“Protection in an information processing utility” , Graham RM (1968)
“The evolution of the Unix time-sharing system” , Ritchie DM (1984)
Additional resources
The Multicians web site for additional information on Multics, including extensive stories and Multics source code.
Technical: The Multics input/output system , Feiertag RJ and Organick EI, for a description of Multics I/O to contrast with Unix I/O.
Unix and Multics , Tom Van Vleck.
… I remarked to Dennis that easily half the code I was writing in Multics was error recovery code. He said, "We left all that stuff out. If there's an error, we have this routine called panic() , and when it is called, the machine crashes, and you holler down the hall, 'Hey, reboot it.'"
The Louisiana State Trooper Story
The IBM 7094 and CTSS
This describes the history of the system that preceded Multics, CTSS (the Compatible Time Sharing System). It also contains one of my favorite stories about the early computing days: “IBM had been very generous to MIT in the fifties and sixties, donating or discounting its biggest scientific computers. When a new top of the line 36-bit scientific machine came out, MIT expected to get one. In the early sixties, the deal was that MIT got one 8-hour shift, all the other New England colleges and universities got a shift, and the third shift was available to IBM for its own use. One use IBM made of its share was yacht handicapping: the President of IBM raced big yachts on Long Island Sound, and these boats were assigned handicap points by a complicated formula. There was a special job deck kept at the MIT Computation Center, and if a request came in to run it, operators were to stop whatever was running on the machine and do the yacht handicapping job immediately.”
Using Ring 5 , Randy Saunders.
"All Multics User functions work in Ring 5." I have that EMail (from Dave Bergum) framed on my wall to this date. … All the documentation clearly states that system software has ring brackets of [1,5,5] so that it runs equally in both rings 4 and 5. However, the PL/I compiler creates segments with ring brackets of [4,4,4] by default. … I found each and every place CNO had fixed a program without resetting the ring brackets correctly. It started out 5 a day, and in 3 months it was down to one a week.”
Bell Systems Technical Journal 57(6) Part 2: Unix Time-sharing System (July–August 1978)
This volume contains some of the first broadly-accessible descriptions of Unix. Individual articles are available on archive.org . As of late January 2021, you can buy a physical copy on Amazon for $2,996. Interesting articles include Thompson on Unix implementation, Ritchie’s retrospective, and several articles on actual applications, especially document preparation.
Meeting 3 (2/2): Microkernels
“The nucleus of a multiprogramming system” , Brinch Hansen P (1970).
“Toward real microkernels” , Liedtke J (1996).
“Are virtual machine monitors microkernels done right?” , Hand S, Warfield A, Fraser K, Kotsovinos E, Magenheimer DJ (2005).
Supplemental reading
“Improving IPC by kernel design” , Liedtke J (1993). Article introducing the first microbenchmark-performant microkernel.
“Are virtual machine monitors microkernels done right?” , Heiser G, Uhlig V, LeVasseur J (2006).
“From L3 to seL4: What have we learnt in 20 years of L4 microkernels?” , Elphinstone K, Heiser G (2013).
- Retained: Minimality as key design principle.
- Replaced: Synchronous IPC augmented with (seL4, NOVA, Fiasco.OC) or replaced by (OKL4) asynchronous notification.
- Replaced: Physical by virtual message registers.
- Abandoned: Long IPC.
- Replaced: Thread IDs by port-like IPC endpoints as message destinations.
- Abandoned: IPC timeouts in seL4, OKL4.
- Abandoned: Clans and chiefs.
- Retained: User-level drivers as a core feature.
- Abandoned: Hierarchical process management.
- Multiple approaches: Some L4 kernels retain the model of recursive address-space construction, while seL4 and OKL4 originate mappings from frames.
- Added: User-level control over kernel memory in seL4, kernel memory quota in Fiasco.OC.
- Unresolved: Principled, policy-free control of CPU time.
- Unresolved: Handling of multicore processors in the age of verification.
- Replaced: Process kernel by event kernel in seL4, OKL4 and NOVA.
- Abandoned: Virtual TCB addressing.
- … Abandoned: C++ for seL4 and OKL4.
Meeting 4 (2/4): Exokernels
“Exterminate all operating system abstractions”, Engler DE, Kaashoek MF (1995).
“Exokernel: an operating system architecture for application-level resource management” , Engler DE, Kaashoek MF, O’Toole J (1995).
“The nonkernel: a kernel designed for the cloud” , Ben-Yehuda M, Peleg O, Ben-Yehuda OA, Smolyar I, Tsafrir D (2013).
“Application performance and flexibility on exokernel systems” , Kaashoek MF, Engler DR, Ganger GR, Briceño HM, Hunt R, Mazières D, Pinckney T, Grimm R, Jannotti J, Mackenzie K (1997).
Particularly worth reading is section 4, Multiplexing Stable Storage, which contains one of the most overcomplicated designs for stable storage imaginable. It’s instructive: if your principles end up here, might there be something wrong with your principles?
“Fast and flexible application-level networking on exokernel systems” , Ganger GR, Engler DE, Kaashoek MF, Briceño HM, Hunt R, Pinckney T (2002).
Particularly worth reading is section 8, Discussion: “The construction and revision of the Xok/ExOS networking support came with several lessons and controversial design decisions.”
Meeting 5 (2/9): Security
“EROS: A fast capability system” , Shapiro JS, Smith JM, Farber DJ (1999).
“Labels and event processes in the Asbestos operating system” , Vandebogart S, Efstathopoulos P, Kohler E, Krohn M, Frey C, Ziegler D, Kaashoek MF, Morris R, Mazières D (2007).
This paper covers too much ground. On the first read, skip sections 4–6.
Meeting 6 (2/11): I/O
“Arrakis: The operating system is the control plane” (PDF) , Peter S, Li J, Zhang I, Ports DRK, Woos D, Krishnamurthy A, Anderson T, Roscoe T (2014)
“The IX Operating System: Combining Low Latency, High Throughput, and Efficiency in a Protected Dataplane” , Belay A, Prekas G, Primorac M, Klimovic A, Grossman S, Kozyrakis C, Bugnion E (2016) — read Sections 1–4 first (return to the rest if you have time)
“I'm Not Dead Yet!: The Role of the Operating System in a Kernel-Bypass Era” , Zhang I, Liu J, Austin A, Roberts ML, Badam A (2019)
- “The multikernel: A new OS architecture for scalable multicore systems”, Baumann A, Barham P, Dagand PE, Harris T, Isaacs R, Peter S, Roscoe T, Schüpbach A, Singhania A (2009); this describes the Barrelfish system on which Arrakis is based
Meeting 7 (2/16): Speculative designs
From least to most speculative:
“Unified high-performance I/O: One Stack to Rule Them All” (PDF) , Trivedi A, Stuedi P, Metzler B, Pletka R, Fitch BG, Gross TR (2013)
“The Case for Less Predictable Operating System Behavior” (PDF) , Sun R, Porter DE, Oliveira D, Bishop M (2015)
“Quantum operating systems” , Corrigan-Gibbs H, Wu DJ, Boneh D (2017)
“Pursue robust indefinite scalability” , Ackley DH, Cannon DC (2013)
Meeting 8 (2/18): Log-structured file system
“The Design and Implementation of a Log-Structured File System” , Rosenblum M, Ousterhout J (1992)
“Logging versus Clustering: A Performance Evaluation”
- Read the abstract of the paper ; scan further if you’d like
- Then poke around the linked critiques
Meeting 9 (2/23): Consistency
“Generalized file system dependencies” , Frost C, Mammarella M, Kohler E, de los Reyes A, Hovsepian S, Matsuoka A, Zhang L (2007)
“Application crash consistency and performance with CCFS” , Sankaranarayana Pillai T, Alagappan R, Lu L, Chidambaram V, Arpaci-Dusseau AC, Arpaci-Dusseau RH (2017)
Meeting 10 (2/25): Transactions and speculation
“Rethink the sync”, Nightingale EB, Veeraraghavan K, Chen PM, Flinn J (2006)
“Operating system transactions” , Porter DE, Hofmann OS, Rossbach CJ, Benn E, Witchel E (2009)
Meeting 11 (3/2): Speculative designs
“Can We Store the Whole World's Data in DNA Storage?”
“A tale of two abstractions: The case for object space”
“File systems as processes”
“Preserving hidden data with an ever-changing disk”
More, if you’re hungry for it
- “Breaking Apart the VFS for Managing File Systems”
Virtualization
Meeting 14 (3/11): Virtual machines and containers
“Xen and the Art of Virtualization” , Barham P, Dragovic B, Fraser K, Hand S, Harris T, Ho A, Neugebauer R, Pratt I, Warfield A (2003)
“Blending containers and virtual machines: A study of Firecracker and gVisor”, Anjali, Caraza-Harter T, Swift MM (2020)
Meeting 15 (3/18): Virtual memory and virtual devices
“Memory resource management in VMware ESX Server” , Waldspurger CA (2002)
“Opportunistic flooding to improve TCP transmit performance in virtualized clouds” , Gamage S, Kangarlou A, Kompella RR, Xu D (2011)
Meeting 16 (3/23): Speculative designs
“The Best of Both Worlds with On-Demand Virtualization” , Kooburat T, Swift M (2011)
“The NIC is the Hypervisor: Bare-Metal Guests in IaaS Clouds” , Mogul JC, Mudigonda J, Santos JR, Turner Y (2013)
“vPipe: One Pipe to Connect Them All!” , Gamage S, Kompella R, Xu D (2013)
“Scalable Cloud Security via Asynchronous Virtual Machine Introspection” , Rajasekaran S, Ni Z, Chawla HS, Shah N, Wood T (2016)
Distributed systems
Meeting 17 (3/25): Distributed systems history
“Grapevine: an exercise in distributed computing” , Birrell AD, Levin R, Schroeder MD, Needham RM (1982)
“Implementing remote procedure calls” , Birrell AD, Nelson BJ (1984)
Skim : “Time, clocks, and the ordering of events in a distributed system” , Lamport L (1978)
Meeting 18 (3/30): Paxos
“Paxos made simple” , Lamport L (2001)
“Paxos made live: an engineering perspective”, Chandra T, Griesemer R, Redstone J (2007)
“In search of an understandable consensus algorithm” , Ongaro D, Ousterhout J (2014)
- Adrian Colyer’s consensus series links to ten papers, especially:
- “Raft Refloated: Do we have consensus?” , Howard H, Schwarzkopf M, Madhavapeddy A, Crowcroft J (2015)
- A later update from overlapping authors: “Paxos vs. Raft: Have we reached consensus on distributed consensus?” , Howard H, Mortier R (2020)
- “Understanding Paxos” , notes by Paul Krzyzanowski (2018); includes some failure examples
- One-slide Paxos pseudocode , Robert Morris (2014)
Meeting 19 (4/1): Review of replication results
Meeting 20 (4/6): Project discussion
Meeting 21 (4/8): Industrial consistency
“Scaling Memcache at Facebook” , Nishtala R, Fugal H, Grimm S, Kwiatkowski M, Lee H, Li HC, McElroy R, Paleczny M, Peek D, Saab P, Stafford D, Tung T, Venkataramani V (2013)
“Millions of Tiny Databases” , Brooker M, Chen T, Ping F (2020)
Meeting 22 (4/13): Short papers and speculative designs
“Scalability! But at what COST?” , McSherry F, Isard M, Murray DG (2015)
“What bugs cause production cloud incidents?” , Liu H, Lu S, Musuvathi M, Nath S (2019)
“Escape Capsule: Explicit State Is Robust and Scalable” , Rajagopalan S, Williams D, Jamjoom H, Warfield A (2013)
“Music-defined networking” , Hogan M, Esposito F (2018)
- Too networking-centric for us, but fun: “Delay is Not an Option: Low Latency Routing in Space” , Handley M (2018)
- A useful taxonomy: “When Should The Network Be The Computer?” , Ports DRK, Nelson J (2019)
Meeting 23 (4/20): The M Group
“All File Systems Are Not Created Equal: On the Complexity of Crafting Crash-Consistent Applications” , Pillai TS, Chidambaram V, Alagappan R, Al-Kiswany S, Arpaci-Dusseau AC, Arpaci-Dusseau RH (2014)
“Crash Consistency Validation Made Easy” , Jiang Y, Chen H, Qin F, Xu C, Ma X, Lu J (2016)
Meeting 24 (4/22): NVM and Juice
“Persistent Memcached: Bringing Legacy Code to Byte-Addressable Persistent Memory” , Marathe VJ, Seltzer M, Byan S, Harris T
“NVMcached: An NVM-based Key-Value Cache” , Wu X, Ni F, Zhang L, Wang Y, Ren Y, Hack M, Shao Z, Jiang S (2016)
“Cloudburst: stateful functions-as-a-service” , Sreekanti V, Wu C, Lin XC, Schleier-Smith J, Gonzalez JE, Hellerstein JM, Tumanov A (2020)
- Adrian Colyer’s take
Meeting 25 (4/27): Scheduling
- “The Linux Scheduler: A Decade of Wasted Cores” , Lozi JP, Lepers B, Funston J, Gaud F, Quéma V, Fedorova A (2016)
Design and control of distributed computing systems (operating systems and database systems). Topics include principles of naming and location, atomicity, resource sharing, concurrency control and other synchronization, deadlock detection and avoidance, security, distributed data access and control, integration of operating systems and computer networks, distributed systems design, consistency control, and fault tolerance.
Note: Will not be offered through CEE due to low enrollment.
A more detailed course description prepared for the CEE program is available, as is a course preview briefing containing more detailed information on requirements and expectations. The course outline is given below.
To provide additional support to the CEE program, Professor Clifton will be available during office hours through H.323/T.120 desktop videoconferencing (e.g., SunForum, Microsoft NetMeeting). Please send email if you wish to make use of this, or you might try opening an H.323 connection to blitz.cs.purdue.edu.
More course information may be available in WebCT ( direct link ).
Please add yourself to the course mailing list. Send mail to [email protected] containing the line:
add your email to cs603
Feel free to send things to the course mailing list if you feel it is appropriate. An example might be a pointer to a particularly helpful on-line manual describing an API used in one of the projects.
Course Methodology
The course will be taught through lectures, with class participation expected and encouraged. There will be frequent reading assignments to supplement the lectures.
For now, Professor Clifton will not have regular office hours. Feel free to drop by anytime, or send email with some suggested times to schedule an appointment. You can also try H.323/T.120 desktop videoconferencing (e.g., SunForum , Microsoft NetMeeting .) You can try opening an H.323 connection to blitz.cs.purdue.edu - send email if there is no response.
Prerequisites
The official requirement is CS 503 (Operating systems), with CS 542 (Distributed Database systems) recommended. The practical requirement is a solid undergraduate background in computer science including some database and operating systems theory, and substantial programming experience. If you don't have 503, but feel you have sufficient background, please send me an explanation of why you feel you are prepared, along with a number/times for me to call and discuss approving your registration.
The following is recommended (it will be a useful reference for much of the lab work in the course):
Internetworking with TCP/IP Vol.III: Client-Server Programming and Applications, D. E. Comer and D. Stevens, Prentice Hall, (choose appropriate version for your favorite platform), 0-13-032071-4
The following have been recommended in the past, and may provide useful background reading. However, none are required.
- Distributed Systems, Sape Mullender, Prentice Hall, 1993, 0-201-62427-3
- Distributed Algorithms, Nancy Lynch, Morgan Kaufmann, 1997, 1-55860-348-4
- Distributed Operating Systems, Tanenbaum, Prentice Hall, 1995, 0-13-219908-4
Evaluation/Grading:
Evaluation will be a subjective process; however, it will be based primarily on your understanding of the material as evidenced in:
- Midterm Exam (25%)
- Final Exam (35%)
- Projects (4-5) (40%)
Exams will be open note / open book. To avoid a disparity between resources available to different students, electronic aids are not permitted. (If everyone has a notebook with wireless connection and all agree they want to use them in the exams, I could relax this.)
I will evaluate projects on a five-point scale.
A substantial portion of your education in this course will come through performing programming projects: building components of a distributed system. Some examples of what projects might involve are:
- Building a server capable of handling multiple simultaneous TCP/IP connections using the Socket API. The server itself would be trivial (e.g., calculate the square of the input and return the result after a five-second delay); the key effort would be the API. (A minimal sketch follows this list.)
- Implement an application that connects to a (provided) CORBA server.
- Implement a clock synchronization protocol.
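For the first project example above, a minimal sketch of such a multi-connection server is shown below. It is written in Python for brevity, while the actual course projects may well target C and the BSD socket API directly; the port number and the line-oriented request/reply protocol are illustrative assumptions, not part of any assignment.

```python
import socket
import threading
import time

def handle(conn):
    """Serve one client: read an integer per line, reply with its square
    after a five-second delay (the computation is deliberately trivial)."""
    with conn, conn.makefile("rwb") as stream:
        for line in stream:
            try:
                n = int(line.strip())
            except ValueError:
                continue
            time.sleep(5)                     # simulated slow service
            stream.write(f"{n * n}\n".encode())
            stream.flush()

def serve(host="0.0.0.0", port=6003):         # port chosen arbitrarily
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen()
        while True:
            conn, _addr = srv.accept()
            # One thread per connection keeps concurrent clients independent.
            threading.Thread(target=handle, args=(conn,), daemon=True).start()

if __name__ == "__main__":
    serve()
```

A thread per connection is the simplest way to keep slow clients from blocking one another; a select()/poll() event loop is the classic single-threaded alternative.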
My current expectation is that all projects will be done individually, as it is probable that some of the CEE students will not be collocated with other students in the course.
Note on Network Access : If you will be doing your project work for the course at a site that is behind a firewall, let me know as soon as possible. Some of the projects will involve connecting to an on-campus server, and if that will involve a firewall on your end I need to know so I can ensure that the ports used are not blocked.
Policy on Intellectual Honesty
Please read the above link to the policy written by Professor Spafford . This will be followed unless I provide written documentation of exceptions.
Late work will be penalized except in case of documented emergency (e.g., medical emergency), or by prior arrangement if doing the work in advance is impossible due to fault of the instructor (e.g., you are going to a conference and ask to start the project early, but I don't have it ready yet.)
The penalty for late work is 1 point (of the possible 5) if turned in after the deadline, and one additional point for each week late.
Syllabus (numbers correspond to week):
Project start/due dates are tentative!
- Course overview , Components of a distributed system
- Message Passing
- Stream-oriented communications
- Remote Procedure Call
- Remote Method Invocation
- DCE RPC ( reading )
- Java RMI ( reading )
- SOAP (Reading: SOAP 1.1 spec , XML Protocol Working Group , Apache SOAP )
- Active Directory ( reading )
- What is clock synchronization? Leslie Lamport, " Time, clocks, and the ordering of events in a distributed system ", Communications of the ACM 21(7) (July 1978).
- Possibility and impossibility. Lundelius, J. and Lynch, N., "An Upper and Lower Bound for Clock Synchronization," Information and Control, Vol. 62, Nos. 2/3, pp. 190-204, 1984. Danny Dolev, Joe Halpern, and H. Raymond Strong, "On the possibility and impossibility of achieving clock synchronization", Journal of Computer and System Sciences 32(3), 230-250, April 1986. Michael J. Fischer, Nancy A. Lynch, and Michael Merritt, "Easy impossibility proofs for distributed consensus problems", Proceedings of the Fourth Annual ACM Symposium on Principles of Distributed Computing, 1985, Minaki, Ontario, Canada.
- Practical solution: NTP ( Reading ); a minimal offset-estimation sketch appears below
Other reading: Leslie Lamport and P. M. Melliar-Smith, "Synchronizing clocks in the presence of faults", Journal of the ACM 32(1) (January 1985). Jennifer Lundelius and Nancy Lynch, "A new fault-tolerant algorithm for clock synchronization", Proceedings of the Third Annual ACM Symposium on Principles of Distributed Computing, 1984, Vancouver, British Columbia, Canada.
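The core of the NTP-style approach in the readings above is a simple offset estimate from one timestamped request/reply exchange, under the assumption that the one-way network delays are roughly symmetric. A minimal sketch (the timestamps in the example are made up):

```python
def ntp_offset_and_delay(t1, t2, t3, t4):
    """Classic offset/delay estimate from one client-server exchange:
       t1 = client send time     (client clock)
       t2 = server receive time  (server clock)
       t3 = server send time     (server clock)
       t4 = client receive time  (client clock)
    Assumes the one-way delays are roughly symmetric."""
    offset = ((t2 - t1) + (t3 - t4)) / 2.0  # how far the client clock lags the server
    delay = (t4 - t1) - (t3 - t2)           # round trip minus server processing time
    return offset, delay

# Example: send at 100.0, server stamps 100.6 and 100.7, reply arrives at 100.3.
# offset = (0.6 + 0.4) / 2 = 0.5 s, delay = 0.3 - 0.1 = 0.2 s.
print(ntp_offset_and_delay(100.0, 100.6, 100.7, 100.3))
```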
- Overview: Global State, Mutual Exclusion. Leslie Lamport, "The Mutual Exclusion Problem", Journal of the ACM 33(2) (April 1986); read Part II, Section 2 - the rest is optional. Leslie Lamport, "1983 Invited address: Solved problems, unsolved problems and non-problems in concurrency", Proceedings of the Third Annual ACM Symposium on Principles of Distributed Computing, 1984, Vancouver, British Columbia, Canada. Optional - Global State: K. Mani Chandy and Leslie Lamport, "Distributed Snapshots: Determining Global States of Distributed Systems", ACM Transactions on Computer Systems 3(1) (February 1985), 63-75.
- Fault Tolerant Solutions. Michael J. Fischer, Nancy A. Lynch, James E. Burns, and Allan Borodin, "Distributed FIFO allocation of identical resources using small shared space", ACM Transactions on Programming Languages and Systems 11(1) (1989), pp. 90-114.
- Multiple resources. Requirements (please don't check these out - others may want to read them): Dijkstra, E., "Hierarchical Ordering of Sequential Processes", Acta Informatica 1 (1971), 115-138. M. Rabin and D. Lehmann, "On the Advantages of Free Choice: A Symmetric and Fully Distributed Solution to the Dining Philosophers Problem", Proceedings of the 8th Symposium on Principles of Programming Languages (1981), pp. 133-138.
- 2-Phase Commit
- Formal Models for failure and recovery
- 3-Phase Commit
- Basics Reading: Philip A. Bernstein, Vassos Hadzilacos, Nathan Goodman, Concurrency Control and Recovery in Database Systems , Chapter 8: Replicated Data , Addison Wesley, 1987.
- Example: Replication in Oracle
- Advanced Techniques: Quasi-Copies. Reading: Rafael Alonso, Daniel Barbará, and Hector Garcia-Molina, "Data caching issues in an information retrieval system", ACM Transactions on Database Systems (TODS) 15(3), September 1990.
- Mid-Semester Review: March 8, in class: midterm on material from weeks 1-7. Please advise if this is a problem.
- Threads vs. Processes, Code migration basics
- Mobile Agents
- Mobile Agents example: D'Agents Reading: D'Agents web site , position paper.
- Distributed Object systems: CORBA ( OMG ) Reading: CORBA Overview from The Common Object Request Broker: Architecture and Specification , OMG group , 2001. CORBA Security Service ( reading ). Third project due April 3 , fourth project starts.
- DCOM Reading: DCOM vs. .NET
- Distributed Coordination: Jini . Further reading: Jan Newmarch's Guide to JINI Technologies .
- Failure models . Reading: Dr. Flaviu Cristian , Understanding Fault-Tolerant Distributed Systems , Communications of the ACM 34(2) February 1991.
- Fault Tolerance Reading: Felix C. Gärtner, Fundamentals of Fault-Tolerant Distributed Computing in Asynchronous Environments ACM Computing Surveys 31(1), March 1999.
- Reliable communication
- Recovery Optional reading: Richard Golding and Elizabeth Borowsky, Fault-Tolerant Replication Management in Large-Scale Distributed Storage Systems , in Proceedings of the 18th IEEE Symposium on Reliable Distributed Systems 18-21 October, 1999, Lausanne, Switzerland. Hector Garcia-Molina, Christos A. Polyzois and Robert B. Hagmann, Two Epoch Algorithms for Disaster Recovery , in Proceedings of the 1990 conference on Very Large Data Bases , Brisbane, Australia, August 13-16 1990.
Final exam Thursday, May 2, 2002 from 1:00pm to 3:00pm in RHPH 164.
CS595: Hot Topics in Distributed Systems: Data-Intensive Computing
Quarter: Fall 2010. Lecture time: Monday/Wednesday, 1:50PM-3:15PM. Lecture location: Stuart Building 106. Office hours: Wednesday, 3:15PM-4:15PM, Stuart Building 237D. Professor: Dr. Ioan Raicu ([email protected])
Support for data-intensive computing is critical to advancing modern science, as storage systems have seen the gap between their capacity and their bandwidth widen more than 10-fold over the last decade. There is an emerging need for advanced techniques to manipulate, visualize and interpret large datasets. Building large scale distributed systems that support data-intensive computing involves challenges at multiple levels, from the network (e.g., transport, routing) to the algorithmic (e.g., data distribution, resource management) and even the social (e.g., incentives). This course is a tour through various research topics in distributed systems, covering topics in cluster computing, grid computing, supercomputing, and cloud computing. We will explore solutions and learn design principles for building large network-based computational systems to support data intensive computing. Our readings and discussions will help us identify research problems and understand methods and general approaches to design, implement, and evaluate distributed systems to support data intensive computing. Topics include resource management (e.g. discovery, allocation, compute models, data models, data locality, virtualization, monitoring, provenance), programming models, application models, and system characterization. Our discussions will often be grounded in the context of deployed distributed systems, such as the TeraGrid, Amazon EC2 and S3, various top supercomputers (e.g. IBM BlueGene/P, Sun Constellation, Cray XT5), and various software/programming platforms (e.g. Google's MapReduce, Hadoop, Dryad, Sphere/Sector, Swift/Falkon, and Parrot/Chirp). The course involves lectures, outside invited speakers, discussions of research papers, and a major project (including both a written report and an oral presentation).
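To make the programming-model part of the description concrete, here is a minimal single-machine sketch of the MapReduce pattern that platforms such as Hadoop distribute across a cluster (pure Python; the word-count example and all function names are illustrative, not any platform's actual API):

```python
from collections import defaultdict
from itertools import chain

def map_phase(document):
    """Map: emit (key, value) pairs; here, ('word', 1) for every word."""
    return [(word, 1) for word in document.split()]

def shuffle(pairs):
    """Shuffle: group values by key (the framework does this across machines)."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    """Reduce: combine all values for one key; here, sum the counts."""
    return key, sum(values)

documents = ["the quick brown fox", "the lazy dog", "the quick dog"]
pairs = chain.from_iterable(map_phase(d) for d in documents)
counts = dict(reduce_phase(k, v) for k, v in shuffle(pairs).items())
print(counts)  # e.g. {'the': 3, 'quick': 2, 'dog': 2, ...}
```

In a real deployment the map and reduce calls run on many machines and the shuffle step moves intermediate pairs over the network; the programming model stays this small.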
Lecture topics:
Brief Biography
Xiaohui (Helen) Gu is a full professor in the Department of Computer Science at North Carolina State University. She received her PhD degree in 2004 and MS degree in 2001 from the Department of Computer Science, University of Illinois at Urbana-Champaign. She received her BS degree in computer science from Peking University, Beijing, China in 1999. She was a research staff member at IBM T. J. Watson Research Center, Hawthorne, New York, between 2004 and 2007. Dr. Gu received the ILLIAC fellowship, the David J. Kuck Best Master Thesis Award, and the Saburo Muroga Fellowship from the University of Illinois at Urbana-Champaign. She also received the IBM Invention Achievement Awards in 2004, 2006, and 2007. She has filed 9 patents, and has published more than 80 research papers in international journals and major peer-reviewed conference proceedings. Dr. Gu is a recipient of the NSF CAREER Award, four IBM Faculty Awards (2008, 2009, 2010, 2011), two Google Research Awards (2009, 2011), best paper awards from ICDCS 2012 and CNSM 2010, and the NCSU Faculty Research and Professional Development Award. She served as program co-chair for IEEE/ACM IWQoS 2013 and USENIX ICAC 2014. She is an associate editor for IEEE Transactions on Parallel and Distributed Systems (TPDS). She is a Senior Member of IEEE and a member of ACM. She was on sabbatical at Google as a visiting scientist in 2015. She also founded InsightFinder, an NCSU startup company commercializing cloud management technologies invented by her research group. One of its unsupervised machine learning based anomaly detection technologies has been licensed to Google.
Students: I am looking for self-motivated PhD students with strong system-building skills to join my research group. Several RA positions are available. Please send me an email with your CV and TOEFL/GRE scores.
- General research interests: Systems and Networking
- Current research focus: Autonomic Computing, Cloud Computing, Accountable Distributed Systems
- Publications ( Full List , DBLP , Google Scholar )
- Research Group: DANCE (Distributed system research on Autonomy, resilieNce, Collaboration, and Energy)
- Research Sponsors: NSF, ARO, NSA, IBM, Google, Credit Suisse, CACC, SOSI/ARO, NCSU
- All Funded Projects
- Spring 2021, CSC 724: Advanced Distributed Systems
- Spring 2021, CSC 246: Concepts of Operating Systems
- Spring 2020, CSC 724: Advanced Distributed Systems
- Spring 2020, CSC 501: Operating System Principles
- Spring 2020, CSC 801: System Group Seminar
- Spring 2019, CSC 724: Advanced Distributed Systems
- Spring 2019, CSC 501-002: Operating System Principles
- Spring 2018, CSC 724: Advanced Distributed Systems.
- Spring 2018, CSC 501-001: Operating System Principles.
- Spring 2018, CSC 501-601: Operating System Principles (Distance Learning)
- Spring 2017, CSC 724: Advanced Distributed Systems.
- Spring 2017, CSC 501: Operating System Principles.
- Spring 2017, CSC 801: Systems Group Seminar.
- Spring 2014, CSC 724: Advanced Distributed Systems.
- Spring 2014, CSC 501: Operating System Principles.
- Spring 2013, CSC 724: Advanced Distributed Systems.
- Spring 2013, CSC 501: Operating Systems Principles.
- Spring 2013, CSC 801: Systems Group Seminar.
- Spring 2012, CSC 724: Advanced Distributed Systems.
- Fall 2011, CSC 246: Concepts of Operating Systems.
- Spring 2011, CSC 724: Advanced Distributed Systems.
- Fall 2010, CSC 501: Operating Systems Principles.
- Spring 2010, CSC 724: Advanced Distributed Systems.
- Spring 2010, CSC 801: Seminar in Computer Science.
- Fall 2009, CSC 246: Concepts of Operating Systems.
- Spring 2009, CSC 724: Advanced Distributed Systems.
- Fall 2008, CSC 246: Concepts of Operating Systems.
- Fall 2008, CSC 801: Seminar in Computer Science.
- Spring 2008, CSC591D-006: Special Topics on Distributed Systems.
Selected Professional Service
- Faculty advisor for Women in Computer Science (WiCS) at NCSU, 2010-present.
- Associate Editor, IEEE Transactions on Parallel and Distributed Systems (TPDS), 2014 - present.
- Panelist for NSF and NIH.
- Program Co-Chair, USENIX International Conference on Autonomic Computing (ICAC) , Philadelphia, PA, 2014. (part of USENIX Federated Conference Week)
- Workshop Chair, IEEE International Conference on Cloud Engineering (IC2E), Boston, MA, 2014.
- Program Co-Chair, IEEE International Symposium on Quality-of-Service (IWQoS) , Montreal, Quebec, Canada, 2013.
- Proceedings Co-Chair, IEEE International Conference on Cluster Computing (Cluster), Crete, Greece, 2010.
- Program Co-Chair (work in progress track), IEEE International Conference on Pervasive Computing and Communications(PerCom), New York , 2007.

Top 10 Research Topics in Parallel and Distributed Computing
The growing load on internet services, together with the concurrent growth in the availability of big data and in the number of users, makes it necessary to carry out computing tasks in parallel. Parallel and distributed computing appears in several research areas such as networks, software engineering, computer science, computer architecture, operating systems, and algorithms. At present, our research experts provide complete research support and guidance for all the research topics in parallel and distributed computing. The essential building blocks of parallel and distributed computing are highlighted below: shared memory models, mutual exclusion, concurrency, message passing, memory manipulation, etc.
Parallel computing is deployed to provide high-speed processing power where it is required; supercomputers are the best example of parallel computing. Distributed computing, in turn, is used when the participating computers are in different geographic locations.
- Software-defined fog node in blockchain architecture & cloud computing
- Multi clustering approach in mobile edge computing
- Distributed computing & smart city services
- Geo distributed fog computing
- Service attacks in software-defined network with cloud computing
- Distributed trust protocol for IaaS cloud computing
- Large scale convolutional neural networks
- Parallel vertex-centric algorithms
- Partitioning algorithms in mobile environments
- Configuration tuning for hierarchical cloud schedulers
- Distributed computing with delay tolerant network
We support the research work with the implementation of research algorithms and methodologies, shaping research projects through proper execution and appropriate code implementation.

Parallel Computing
Parallel computing performs work simultaneously and is used to save both money and time. In general, the memory in a parallel system can be organized in two ways: distributed or shared. The processors in parallel computing perform numerous tasks that are assigned to them concurrently.
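As a minimal illustration of that idea, the sketch below uses Python's standard multiprocessing pool to hand tasks to several processors at once; the workload function and the pool size are placeholders chosen for the example.

```python
from multiprocessing import Pool

def simulate(task_id):
    """Placeholder for a compute-heavy kernel assigned to one processor."""
    return task_id + sum(i * i for i in range(100_000))

if __name__ == "__main__":
    with Pool(processes=4) as pool:              # four workers run concurrently
        results = pool.map(simulate, range(16))  # 16 tasks split across them
    print(len(results), "tasks completed in parallel")
```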
Distributed Computing
Distributed computing is quite different from parallel computing because, in distributed computing, a task is divided among several computers. In addition, the computers pass messages among themselves, and shared memory is not used. Several autonomous computers appear to the users as one computer.
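A minimal sketch of that message-passing style is shown below: two autonomous processes cooperate over a socket rather than through shared memory (Python; the address, port, and the trivial request/reply protocol are assumptions made for the example).

```python
import socket
import time
from multiprocessing import Process

HOST, PORT = "127.0.0.1", 9099  # illustrative address only

def worker():
    """Server process: receive a number as text, reply with its double."""
    with socket.socket() as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            n = int(conn.recv(64).decode())
            conn.sendall(str(2 * n).encode())

def client():
    """Client process: send a request and print the worker's reply."""
    with socket.socket() as c:
        c.connect((HOST, PORT))
        c.sendall(b"21")
        print("reply from worker:", c.recv(64).decode())  # prints 42

if __name__ == "__main__":
    p = Process(target=worker)
    p.start()
    time.sleep(0.2)  # crude wait so the worker is listening before we connect
    client()
    p.join()
```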
What are the Characteristics of Parallel and Distributed Computing?
- Communication between nodes and access to resources located on other nodes
- Detecting failures and recovering the system as quickly as possible
- Computation and processing are aggregated across several machines
- Similar tasks run on several machines at the same time
- Openness of the software structure and its enhancement
- Distribution of hardware, software, and data
Parallel Computing Versus Distributed Computing
- Parallel and distributed computing are different from each other. In distributed computing, several computers appear to users as a single system and perform a single task by passing messages among themselves. In parallel computing, a single task is split into several subtasks and allocated to various processors
- Distributed computing is preferred where high scalability is required; parallel computing is preferred where high speed is the priority
- A common master clock is used for synchronization in parallel computing, whereas synchronization algorithms are used in distributed computing
- In distributed computing, each computer has its own memory and processors; in parallel computing, one memory is shared by all the processors
- Parallel computing has limited scalability, whereas distributed systems can grow without such limits by adding machines to the network
- Tasks in distributed computing have little dependency on one another; in parallel computing they are closely dependent on each other, because the output of one task is the input of the next
- Parallel computing involves a single system with multiple processors; distributed computing involves many separate systems
Distributed Parallel Computing
A distributed parallel computing system is deployed so that several computers in a single network can carry out their allocated tasks. In general, many applications are based on distributed and parallel computing systems, such as
- Grid Computing
- Cloud computing
- Distributed supercomputers
- Travel reservation
- Electronic banking
- Cloud storage system
- Internet, intranet & email system
- Peer to peer network
Below, our research experts have listed the pioneering research topics in parallel and distributed computing, a significant research area that links computers across various geographic locations. The main research fields in parallel and distributed computing are as follows
Recent Research Areas of Parallel and Distributed Computing
- Heterogeneous computing
- Biological & molecular computing
- Supercomputing
- Computational intelligence
- Quality development using HPC
- Distributed data storage & cloud architecture
- Federated ML & shared memory
- Fault tolerance software system
- ML & AI
- Distributed grid computing
- Web technologies
- Distribution & management system in multimedia
- Mobile crowdsensing
- IoT & multi-tier computing
At present, issues in parallel and distributed computing arise from many different sources. Our research experts provide research solutions for all such challenges, some of which are mentioned below. Let us now discuss the significant research challenges in parallel and distributed computing.
Latest Research Issues of Parallel and Distributed Computing
- Additional functions such as logging, intelligence, load balancing, and monitoring are needed; all of these functions are used to provide visibility into the system
- Inappropriate message communication (messages delivered wrongly to other nodes) can break down coordination
- Coordinating the sequence of changes to data is a complex issue in distributed computing, especially when nodes can fail, stop, and restart
- Designing an outline for multi-purpose stream processing
- Failures can bring down the functions of individual nodes
- Structural designs must handle linearly growing workloads with a rational quantity of resources
- Serving as a warehouse of data in significant corporations
By solving all such research challenges in parallel and distributed computing, our technical experts have also identified some significant requirements of the field. This helps research scholars become familiar with the most substantial real-time requirements behind current research topics in parallel and distributed computing.
Future Research Directions of Parallel and Distributed Computing Projects
- Distributed memory parallel computing
- Distinctive purpose & hybrid structural design
- Accelerators & multicore functions
- Cloud Computing
- High performance & shared memory computing
- Developing domain applications
- Structure of supercomputing & applications
Research scholars can get the best guidance on handling parallel and distributed computing tools from our research and development experts. In this regard, some of the most important distributed computing tools are listed below
Development Tools for Parallel and Distributed Computing Projects
- DAGH & CUDA
- ARCH & MPICH
- PPGP & PADE
- Zabbix & Nimrod
- SPRNG & Apache Hadoop
- Paralib & simGrid
- Alchemi & distributed folding GUI
To this end, we believe you now have a top-to-bottom path for selecting research topics in parallel and distributed computing. The above information should help you proceed with your research in this area. If you want to become an expert, you will need a good tutor; we have several research experts available for scholars' research assistance, and we are ready to help and clear up your difficulties at any stage so that you can enrich your skills.
PhD Research Topics in Parallel and Distributed Systems
“Distributed systems have the advantages of high speed, fault tolerance, and scalability.” Because it offers these features, the distributed system is often combined with the parallel system, and this blend has brought success in many research works.
Our motto is to satisfy our clients and make them feel proud.
PhD Research Topics in Parallel and Distributed Systems will teach you how to select your research work. First, we want you to be clear about what you need from your work; your research is the key that opens your future.
To that end, we have put forth the research titles below for your consideration.

TWO VIBRANT AREAS WITH THEIR KEY CONCEPTS
Parallel Systems
- Bio-inspired Parallel Architecture
- Multi-Core and Multi-Processor
- Crowdsourcing in Mobile Computing
- And also in Hadoop Files
Distributed Systems
- Streaming Computations
- Mobile Edge Computing
- Backscatter and also Low-Power Networks
- Digital Virtual Environment
- And also Wireless Urban Computing
PhD Research Topics in Parallel and Distributed Systems will work hard and smart on your research. To put it another way, this field is an answer to the question of how to exploit the latent potential of the hardware.
We can train the hardware to execute your research work much faster.
PhD Research Topics in Parallel and Distributed Systems will also help you find your way and make choices. After that, it will give you 100% support to build that choice into good research. In particular, you can also stay in touch with your research throughout.
PROS OF PARALLEL AND DISTRIBUTED SYSTEMS
- Flexibility
- Scalability
- High Performance
- And also Reliability
We have well-trained experts to conduct deep research in your area. They will also turn your research idea into a masterpiece of parallel and distributed computing using Python programming.
You will find a sea change in your research; when you work with us, you will feel the difference!
In summary, let us also check out some of the newest topics in this field:
A new process for Real-Time Parallel Computing Plan intended at Implementation of Point/Small Target Detection Algorithm in Visible/Infra-Red Video
A new source used for Relaxation-Based on Network Decomposition Algorithm intended for Parallel Transient Stability Simulation with Improved Convergence scheme
An innovative method for MPI Scaling Up designed for Powerlist Based on Parallel Programs
The novel method for Adaptive Barrier Algorithm in MPI Based on Analytical Evaluations for Communication Time in the LogP Model of Parallel Computation
An effective performance for Parallelizing Machine Learning Optimization Algorithms on Distributed Data-Parallel Platforms with Parameter Server
An inventive scheme for Massive Hypergraph Partitioning with Neighborhood Expansion
An ingenious method for Transform Blockchain into Distributed Parallel Computing Architecture for Precision Medicine scheme
An efficient performance for Hardware Cost and Energy Consumption Optimization used for Safety-Critical Applications on HDESs
The fresh function of Novel Control Approaches intended for Demand Response with Real-Time Pricing by Parallel and Distributed Consensus-Based ADMM
An effectual performance for Development of Advanced Parallel MVMO-SH used for Voltage Control in Distribution Systems
A new process for Migratory Heterogeneity-Aware Data Layout Scheme aimed at Parallel File Systems
An inventive process for Parallel Model Checking based on Pushdown Systems
On the use of Fault Tree Analysis in Cloud-Based Decision Support System for Self-Healing in Distributed Automation Systems
An effectual process for Distributed coordination control designed for suppressing circulating current in parallel inverters of islanded Microgrid practice
An efficient method for Distributed Louvain Algorithm meant for Graph Community Detection scheme
A new source for Design function based on Visual Front-End for Parallel Signal Processing on Underwater Search Drone system
The novel process for Optimistic Modeling and Simulation of Complex Hardware Platforms and Embedded Systems based on Many-Core HPC Clusters
An inventive performance for Exploiting Task-Based on Parallelism for Parallel Discrete Event Simulation
An innovative mechanism for Cyber-Physical-Social System with Parallel Learning for Distributed Energy Management of a Microgrid
A new-fangled mechanism for Flattened Metadata Service designed for Distributed File Systems
An effective mechanism for Distributed Dispatch Approach intended for Bulk AC/DC Hybrid Systems with High Wind Power Penetration


Special Issue "Distributed Computing Systems and Applications"
A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section " Computing and Artificial Intelligence ".
Deadline for manuscript submissions: closed (30 September 2022) | Viewed by 4459


Dear Colleagues,
Over the last few decades, trends in the computing industry have been towards distributed, low-cost, and high-volume units. Therefore, this Special Issue is dedicated to distributed systems, whose components are located on different networked computers and which communicate and coordinate their actions by passing messages to one another. Currently, there is a wide spectrum of types of distributed systems varying from SOA-based systems to massively multiplayer online games and peer-to-peer applications.
The control of distributed systems is a well-known challenge which requires complex computational software, referred to as distributed computing. Therefore, authors should demonstrate new methods that increase distributed system performance, for instance by rebalancing resource loads and thereby avoiding networking failures caused by node overstrain.
Particularly welcome will be works that validate, at the experimental level, improved networking performance by managing resource loads and hence preventing system failures. Since such systems are generally required to operate across the Internet and different administrative domains, new algorithms fulfilling these scalability requirements without loss of performance will be a valuable contribution to the Special Issue.
We invite authors interested in the proposed topics to contribute to this Special Issue by publishing their results of research related, but not limited, to the following topics: multiprocessing, multicomputing, cybersecurity for distributed systems applications, programming paradigms for distributed systems, and load balancing algorithms.
Prof. Dr. Volodymyr Mosorov and Dr. Jacek Kucharski, Guest Editors
Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website . Once you are registered, click here to go to the submission form . Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.
Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.
Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2300 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.
A Distributed Systems Reading List
Introduction
I often argue that the toughest thing about distributed systems is changing the way you think. The below is a collection of material I've found useful for motivating these changes.
Thought Provokers
Ramblings that make you think about the way you design. Not everything can be solved with big servers, databases and transactions.
- Harvest, Yield and Scalable Tolerant Systems - Real world applications of CAP from Brewer et al
- On Designing and Deploying Internet Scale Services - James Hamilton
- The Perils of Good Abstractions - Building the perfect API/interface is difficult
- Chaotic Perspectives - Large scale systems are everything developers dislike - unpredictable, unordered and parallel
- Data on the Outside versus Data on the Inside - Pat Helland
- Memories, Guesses and Apologies - Pat Helland
- SOA and Newton's Universe - Pat Helland
- Building on Quicksand - Pat Helland
- Why Distributed Computing? - Jim Waldo
- A Note on Distributed Computing - Waldo, Wollrath et al
- Stevey's Google Platforms Rant - Yegge's SOA platform experience
- Latency Exists, Cope! - Commentary on coping with latency and its architectural impacts
- Latency - the new web performance bottleneck - not at all new (see Patterson), but noteworthy
- The Tail At Scale - the challenges inherent in dealing with latency in large-scale systems
Amazon
Somewhat about the technology, but more interesting is the culture and organization they've created to work with it.
- A Conversation with Werner Vogels - Coverage of Amazon's transition to a service-based architecture
- Discipline and Focus - Additional coverage of Amazon's transition to a service-based architecture
- Vogels on Scalability
- SOA creates order out of chaos @ Amazon
Current "rocket science" in distributed systems.
- Chubby Lock Manager
- Google File System
- Data Management for Internet-Scale Single-Sign-On
- Dremel: Interactive Analysis of Web-Scale Datasets
- Large-scale Incremental Processing Using Distributed Transactions and Notifications
- Megastore: Providing Scalable, Highly Available Storage for Interactive Services - Smart design for low latency Paxos implementation across datacentres.
- Spanner - Google's scalable, multi-version, globally-distributed, and synchronously-replicated database.
- Photon - Fault-tolerant and Scalable Joining of Continuous Data Streams. Joins are tough especially with time-skew, high availability and distribution.
- Mesa: Geo-Replicated, Near Real-Time, Scalable Data Warehousing - Data warehousing system that stores critical measurement data related to Google's Internet advertising business.
Consistency Models
Key to building systems that suit their environments is finding the right tradeoff between consistency and availability.
- CAP Conjecture - Consistency, Availability, Partition Tolerance cannot all be satisfied at once
- Consistency, Availability, and Convergence - Proves the upper bound for consistency possible in a typical system
- CAP Twelve Years Later: How the "Rules" Have Changed - Eric Brewer expands on the original tradeoff description
- Consistency and Availability - Vogels
- Eventual Consistency - Vogels (the N/R/W quorum rule he discusses is sketched after this list)
- Avoiding Two-Phase Commit - Two phase commit avoidance approaches
- 2PC or not 2PC, Wherefore Art Thou XA? - Two phase commit isn't a silver bullet
- Life Beyond Distributed Transactions - Helland
- If you have too much data, then 'good enough' is good enough - NoSQL, Future of data theory - Pat Helland
- Starbucks doesn't do two phase commit - Asynchronous mechanisms at work
- You Can't Sacrifice Partition Tolerance - Additional CAP commentary
- Optimistic Replication - Relaxed consistency approaches for data replication
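As a small worked example of the consistency/availability tradeoff covered in the Vogels entries above: with N replicas, a write acknowledged by W of them, and a read consulting R of them, every read overlaps the latest committed write exactly when R + W > N. The sketch below simply checks that inequality for two illustrative configurations; the values are examples, not drawn from any of the papers.

```python
# Quorum-overlap check: reads see the latest write iff read and write quorums intersect.
def quorums_overlap(n: int, r: int, w: int) -> bool:
    return r + w > n

print(quorums_overlap(n=3, r=2, w=2))   # True  -> every read intersects the last write
print(quorums_overlap(n=3, r=1, w=1))   # False -> stale reads are possible
```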
Theory
Papers that describe various important elements of distributed systems design.
- Distributed Computing Economics - Jim Gray
- Rules of Thumb in Data Engineering - Jim Gray and Prashant Shenoy
- Fallacies of Distributed Computing - Peter Deutsch
- Impossibility of distributed consensus with one faulty process - also known as FLP [access requires account and/or payment, a free version can be found here]
- Unreliable Failure Detectors for Reliable Distributed Systems - A method for handling the challenges of FLP
- Lamport Clocks - How do you establish a global view of time when each computer's clock is independent? (A minimal logical-clock sketch follows this list.)
- The Byzantine Generals Problem
- Lazy Replication: Exploiting the Semantics of Distributed Services
- Scalable Agreement - Towards Ordering as a Service
- Scalable Eventually Consistent Counters over Unreliable Networks - Scalable counting is tough in an unreliable world
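To make the Lamport Clocks question above concrete, here is a minimal sketch of a logical clock, using a hypothetical Process class rather than anything taken from the paper: increment the counter on local events and sends, and on receive jump to max(local, received) + 1 so that causally related events are consistently ordered across machines.

```python
# Minimal Lamport logical clock sketch (illustrative Process class).
class Process:
    def __init__(self, name):
        self.name = name
        self.clock = 0

    def local_event(self):
        self.clock += 1
        return self.clock

    def send(self):
        self.clock += 1
        return self.clock              # timestamp carried by the message

    def receive(self, msg_timestamp):
        self.clock = max(self.clock, msg_timestamp) + 1
        return self.clock

# If a sends to b, every event b performs afterwards is ordered after the send,
# even though the two machines never shared a physical clock.
a, b = Process("a"), Process("b")
a.local_event()                        # a: 1
ts = a.send()                          # a: 2, message carries 2
b.receive(ts)                          # b: max(0, 2) + 1 = 3
assert b.clock > ts
```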
Languages and Tools
Issues of distributed systems construction with specific technologies.
- Programming Distributed Erlang Applications: Pitfalls and Recipes - Building reliable distributed applications isn't as simple as merely choosing Erlang and OTP.
Infrastructure
- Principles of Robust Timing over the Internet - Managing clocks is essential for even basics such as debugging
- Consistent Hashing and Random Trees - see the sketch after this list
- Amazon's Dynamo Storage Service
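Referenced from the Consistent Hashing entry above, the following is a minimal, illustrative sketch of a hash ring: nodes and keys are hashed onto the same circular space, a key is owned by the first node clockwise from it, and adding a node only remaps the keys in one arc. Production rings usually add virtual nodes for smoother balance, which this sketch omits; all class and node names are assumptions.

```python
# Minimal consistent-hashing ring sketch (no virtual nodes, illustrative only).
import bisect
import hashlib

def _hash(value: str) -> int:
    return int(hashlib.md5(value.encode()).hexdigest(), 16)

class HashRing:
    def __init__(self, nodes=()):
        self._ring = sorted((_hash(n), n) for n in nodes)

    def add(self, node: str) -> None:
        bisect.insort(self._ring, (_hash(node), node))

    def lookup(self, key: str) -> str:
        points = [h for h, _ in self._ring]
        i = bisect.bisect(points, _hash(key)) % len(self._ring)  # wrap around the ring
        return self._ring[i][1]

ring = HashRing(["node-a", "node-b", "node-c"])
before = {k: ring.lookup(k) for k in ("alpha", "beta", "gamma", "delta")}
ring.add("node-d")                     # only the keys in one arc should move
after = {k: ring.lookup(k) for k in before}
moved = [k for k in before if before[k] != after[k]]
print(moved)                           # typically a small subset, not all keys
```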
Paxos Consensus
Understanding this algorithm is the challenge. I would suggest reading "Paxos Made Simple" before the other papers and again afterward. A toy single-decree sketch follows the list below.
- The Part-Time Parliament - Leslie Lamport
- Paxos Made Simple - Leslie Lamport
- Paxos Made Live - An Engineering Perspective - Chandra et al
- Revisiting the Paxos Algorithm - Lynch et al
- How to build a highly available system with consensus - Butler Lampson
- Reconfiguring a State Machine - Lamport et al - changing cluster membership
- Implementing Fault-Tolerant Services Using the State Machine Approach: a Tutorial - Fred Schneider
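As promised above, here is a toy, in-memory sketch of single-decree Paxos, intended only to make the prepare/promise and accept/accepted phases from "Paxos Made Simple" concrete. Real implementations need durable acceptor state, distinct proposal numbers per proposer, retries, and failure handling, all of which are omitted here; the class and function names are assumptions.

```python
# Toy single-decree Paxos: one proposer round over in-memory acceptors.
class Acceptor:
    def __init__(self):
        self.promised = -1       # highest proposal number promised
        self.accepted_n = -1     # highest proposal number accepted
        self.accepted_v = None   # value accepted with accepted_n

    def prepare(self, n):
        if n > self.promised:
            self.promised = n
            return True, self.accepted_n, self.accepted_v
        return False, None, None

    def accept(self, n, value):
        if n >= self.promised:
            self.promised = n
            self.accepted_n = n
            self.accepted_v = value
            return True
        return False

def propose(acceptors, n, value):
    """Run one proposal round; return the chosen value or None if it failed."""
    majority = len(acceptors) // 2 + 1

    # Phase 1: prepare / promise
    promises = [a.prepare(n) for a in acceptors]
    granted = [(an, av) for ok, an, av in promises if ok]
    if len(granted) < majority:
        return None

    # If any acceptor already accepted a value, we must propose that value instead.
    prior_n, prior_v = max(granted, key=lambda p: p[0])
    if prior_n >= 0:
        value = prior_v

    # Phase 2: accept / accepted
    accepted = sum(a.accept(n, value) for a in acceptors)
    return value if accepted >= majority else None

acceptors = [Acceptor() for _ in range(3)]
print(propose(acceptors, n=1, value="blue"))    # "blue" is chosen
print(propose(acceptors, n=2, value="green"))   # still "blue": later rounds preserve it
```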
Other Consensus Papers
- Mencius: Building Efficient Replicated State Machines for WANs - consensus algorithm for wide-area network
- In Search of an Understandable Consensus Algorithm - The extended version of the Raft paper, an alternative to Paxos.
Gossip Protocols (Epidemic Behaviours)
- How robust are gossip-based communication protocols?
- Astrolabe: A Robust and Scalable Technology For Distributed Systems Monitoring, Management, and Data Mining
- Epidemic Computing at Cornell
- Fighting Fire With Fire: Using Randomized Gossip To Combat Stochastic Scalability Limits
- Bi-Modal Multicast
- ACM SIGOPS Operating Systems Review - Gossip-based computer networking
- SWIM: Scalable Weakly-consistent Infection-style Process Group Membership Protocol
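The papers above share a simple core loop. The following is a minimal, generic sketch of a push-style anti-entropy round rather than any one protocol from the list: each node periodically picks a random peer, and the peer keeps the newer version of every key it learns about. The class names and version scheme are illustrative assumptions.

```python
# Generic push-based anti-entropy (gossip) round, illustrative only.
import random

class GossipNode:
    def __init__(self, name):
        self.name = name
        self.state = {}                  # key -> (version, value)

    def update(self, key, value):
        version = self.state.get(key, (0, None))[0] + 1
        self.state[key] = (version, value)

    def merge(self, other_state):
        for key, (version, value) in other_state.items():
            if version > self.state.get(key, (0, None))[0]:
                self.state[key] = (version, value)

def gossip_round(nodes):
    for node in nodes:
        peer = random.choice([n for n in nodes if n is not node])
        peer.merge(node.state)           # push our state to one random peer

nodes = [GossipNode(f"n{i}") for i in range(8)]
nodes[0].update("config", "v2")
for _ in range(5):                       # a few rounds spread the update with high probability
    gossip_round(nodes)
print(sum("config" in n.state for n in nodes), "of", len(nodes), "nodes have the update")
```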
Peer-to-Peer
- Chord : A Scalable Peer-to-peer Lookup Protocol for Internet Applications
- Kademlia : A Peer-to-peer Information System Based on the XOR Metric
- Pastry : Scalable, decentralized object location and routing for large-scale peer-to-peer systems
- PAST : A large-scale, persistent peer-to-peer storage utility - storage system atop Pastry
- SCRIBE : A large-scale and decentralised application-level multicast infrastructure - wide area messaging atop Pastry
Advanced Distributed Systems
Research Seminar at Columbia University