
CiSE Case Studies in Translational Computer Science

Call for department articles.

CiSE's newest department explores how findings from fundamental research in computer, computational, and data science translate into technologies, solutions, or practice for the benefit of science, engineering, and society. Specifically, each department article will highlight impactful examples of translational research in which work has successfully moved from the laboratory to the field and into the community. The goal is to improve understanding of the underlying approaches, to explore challenges and lessons learned, and, ultimately, to formulate translational research processes that are broadly applicable.

Computing and data are increasingly essential to the research process across all areas of science and engineering and are key catalysts for impactful advances and breakthroughs. Consequently, translating fundamental advances in computer, computational, and data science helps to ensure that these emerging insights, discoveries, and innovations are realized.

Translational research in computer and computational sciences [1][2] refers to bridging foundational and use-inspired (applied) research with the delivery and deployment of its outcomes to the target community; it supports a bi-directional benefit in which the delivery and deployment process informs the research.

Call for Department Contributions: We seek short papers that align with our recommended structure and detail the following aspects of the described research:

  • Overview: A description of the research: what problem it addresses, who the target user community is, and what its key innovations and attributes are.
  • Translation Process: What was the process used to move the research from the laboratory to the application? How were outcomes fed back into the research, and over what time period did this occur? How was the translation supported? 
  • Impact: What is the impact of the translated research, both on the computer, computational, and data science (CCDS) research itself and on the target domain(s)?
  • Lessons Learned: What are the lessons learned in terms of both the research and the translation process? What were the challenges faced?
  • Conclusion: Based on your experience, do you have suggestions for processes or support structures that would have made the translation more effective?

CiSE department articles are typically up to 3,000 words (including abstract, references, author biographies, and tables/figures [which count as 250 words each]), and are reviewed only by the department editors.

To pitch or submit a department article, please contact the editors directly by emailing:

  • Manish Parashar  
  • David Abramson  

Additional information for authors can be found here.

  • D. Abramson and M. Parashar, “Translational Research in Computer Science,” Computer, vol. 52, no. 9, pp. 16–23, Sept. 2019, doi: 10.1109/MC.2019.2925650.
  • D. Abramson, M. Parashar, and P. Arzberger, “Translational computer science: Overview of the special issue,” J. Computational Sci., 2020, ISSN 1877-7503, https://www.sciencedirect.com/journal/journal-of-computational-science/special-issue/10P6T48JS7B.


National Academies Press: OpenBook

Global Dimensions of Intellectual Property Rights in Science and Technology (1993)

Chapter 12: A Case Study on Computer Programs

PAMELA SAMUELSON

HISTORICAL OVERVIEW

Phase 1: The 1950s and Early 1960s

When computer programs were first being developed, proprietary rights issues were not of much concern. Software was often developed in academic or other research settings. Much progress in the programming field occurred as a result of informal exchanges of software among academics and other researchers. In the course of such exchanges, a program developed by one person might be extended or improved by a number of colleagues who would send back (or on to others) their revised versions of the software. Computer manufacturers in this period often provided software to customers of their machines to make their major product (i.e., computers) more commercially attractive (which caused the software to be characterized as "bundled" with the hardware).

To the extent that computer programs were distributed in this period by firms for whom proprietary rights in software were important, programs tended to be developed and distributed through restrictive trade secret licensing agreements. In general, these were individually negotiated with customers. The licensing tradition of the early days of the software industry has framed some of the industry expectations about proprietary rights issues, with implications for issues still being litigated today.

In the mid-1960s, as programs began to become more diverse and complex, as more firms began to invest in the development of programs, and as some began to envision a wider market for software products, a public dialogue began to develop about what kinds of proprietary rights were or should be available for computer programs. The industry had trade secrecy and licensing protection, but some thought more legal protection might be needed.

Phase 2: Mid-1960s and 1970s

Copyright law was one existing intellectual property system into which some in the mid-1960s thought computer programs might potentially fit. Copyright had a number of potential advantages for software: it could provide a relatively long term of protection against unauthorized copying based on a minimal showing of creativity and a simple, inexpensive registration process. 1 Copyright would protect the work's "expression," but not the "ideas" it contained. Others would be free to use the same ideas in other software, or to develop independently the same or a similar work. All that would be forbidden was the copying of expression from the first author's work.

In 1964, the U.S. Copyright Office considered whether to begin accepting registration of computer programs as copyrightable writings. It decided to do so, but only under its "rule of doubt" and then only on condition that a full text of the program be deposited with the office, which would be available for public review. 2

The Copyright Office's doubt about the copyrightability of programs arose from a 1908 Supreme Court decision that had held that a piano roll was not an infringing "copy" of copyrighted music, but rather part of a mechanical device. 3 Mechanical devices (and processes) have traditionally been excluded from the copyright domain. 4 Although the office was aware that in machine-readable form, computer programs had a mechanical character, they also had a textual character, which was why the Copyright Office decided to accept them for registration.

The requirement that the full text of the source code of a program be deposited in order for a copyright in the program to be registered was consistent with a long-standing practice of the Copyright Office, 5 as well as with what has long been perceived to be the constitutional purpose of copyright, namely, promoting the creation and dissemination of knowledge. 6

Relatively few programs, however, were registered with the Copyright Office under this policy during the 1960s and 1970s. 7 Several factors may have contributed to this. Some firms may have been deterred by the requirement that the full text of the source code be deposited with the office and made available for public inspection, because this would have dispelled its trade secret status. Some may have thought a registration certificate issued under the rule of doubt might not be worth much. However, the main reason for the low number of copyright registrations was probably that a mass market in software still lay in the future. Copyright is useful mainly to protect mass-marketed products, and trade secrecy is quite adequate for programs with a small number of distributed copies.

Shortly after the Copyright Office issued its policy on the registrability of computer programs, the U.S. Patent Office issued a policy statement concerning its views on the patentability of computer programs. It rejected the idea that computer programs, or the intellectual processes that might be embodied in them, were patentable subject matter. 8 Only if a program was claimed as part of a traditionally patentable industrial process (i.e., those involving the transformation of matter from one physical state to another) did the Patent Office intend to issue patents for program-related innovations. 9

Patents are typically available for inventive advances in machine designs or other technological products or processes on completion of a rigorous examination procedure conducted by a government agency, based on a detailed specification of what the claimed invention is, how it differs from the prior art, and how the invention can be made. Although patent rights are considerably shorter in duration than copyrights, patent rights are considered stronger because no one may make, use, or sell the claimed invention without the patent owner's permission during the life of the patent. (Patents give rights not just against someone who copies the protected innovation, but even against those who develop it independently.) Also, much of what copyright law would consider to be unprotectable functional content ("ideas") if described in a book can be protected by patent law.

The Patent Office's policy denying the patentability of program innovations was consistent with the recommendations of a presidential commission convened to make suggestions about how the office could more effectively cope with an "age of exploding technology." The commission also recommended that patent protection not be available for computer program innovations. 10

Although there were some appellate decisions in the late 1960s and early 1970s overturning Patent Office rejections of computer program-related applications, few software developers looked to the patent system for protection after two U.S. Supreme Court decisions in the 1970s ruled that patent protection was not available for algorithms. 11 These decisions were generally regarded as calling into question the patentability of all software innovations, although some continued to pursue patents for their software innovations notwithstanding these decisions. 12

As the 1970s drew to a close, despite the seeming availability of copyright protection for computer programs, the software industry was still relying principally on trade secrecy and licensing agreements. Patents seemed largely, if not totally, unavailable for program innovations. Occasional suggestions were made that a new form of legal protection for computer programs should be devised, but the practice of the day was trade secrecy and licensing, and the discourse about additional protection was focused overwhelmingly on copyright.

During the 1960s and 1970s the computer science research community grew substantially in size. Although more software was being distributed under restrictive licensing agreements, much software, as well as innovative ideas about how to develop software, continued to be exchanged among researchers in this field. The results of much of this research were published and discussed openly at research conferences. Toward the end of this period, a number of important research ideas began to make their way into commercial projects, but this was not seen as an impediment to research by computer scientists because the commercial ventures tended to arise after the research had been published. Researchers during this period did not, for the most part, seek proprietary rights in their software or software ideas, although other rewards (such as tenure or recognition in the field) were available to those whose innovative research was published.

Phase 3: The 1980s

Four significant developments in the 1980s changed the landscape of the software industry and the intellectual property rights concerns of those who developed software. Two were developments in the computing field; two were legal developments.

The first significant computing development was the introduction to the market of the personal computer (PC), a machine made possible by improvements in the design of semiconductor chips, both as memory storage devices and as processing units. A second was the visible commercial success of some early PC applications software—most notably, Visicalc, and then Lotus 1-2-3—which significantly contributed to the demand for PCs as well as making other software developers aware that fortunes could be made by selling software. With these developments, the base for a large mass market in software was finally in place.

During this period, computer manufacturers began to realize that it was to their advantage to encourage others to develop application programs that could be executed on their brand of computers. One form of encouragement involved making available to software developers whatever interface information would be necessary for development of application programs that could interact with the operating system software provided with the vendor's computers (information that might otherwise have been maintained as a trade secret). Another form of encouragement was pioneered by Apple Computer, which recognized the potential value to consumers (and ultimately to Apple) of having a relatively consistent "look and feel" to the applications programs developed to run on Apple computers. Apple developed detailed guidelines for applications developers to aid in the construction of this consistent look and feel.

The first important legal development—one which was in place when the first successful mass-marketed software applications were introduced into the market—was passage of amendments to the copyright statute in 1980 to resolve the lingering doubt about whether copyright protection was available for computer programs. 13 These amendments were adopted on the recommendation of the National Commission on New Technological Uses of Copyrighted Works (CONTU), which Congress had established to study a number of "new technology" issues affecting copyrighted works. The CONTU report emphasized the written nature of program texts, which made them seem so much like written texts that had long been protected by copyright law. The CONTU report noted the successful expansion of the boundaries of copyright over the years to take in other new technology products, such as photographs, motion pictures, and sound recordings. It predicted that computer programs could also be accommodated in the copyright regime. 14

Copyright law was perceived by CONTU as the best alternative for protection of computer programs under existing intellectual property regimes. Trade secrecy, CONTU noted, was inherently unsuited for mass-marketed products because the first sale of the product on the open market would dispel the secret. CONTU observed that Supreme Court rulings had cast doubts on the availability of patent protection for software. CONTU's confidence in copyright protection for computer programs was also partly based on an economic study it had commissioned. This economic study regarded copyright as suitable for protecting software against unauthorized copying after sale of the first copy of it in the marketplace, while fostering the development of independently created programs. The CONTU majority expressed confidence that judges would be able to draw lines between protected expression and unprotected ideas embodied in computer programs, just as they did routinely with other kinds of copyrighted works.

A strong dissenting view was expressed by the novelist John Hersey, one of the members of the CONTU commission, who regarded programs as too mechanical to be protected by copyright law. Hersey warned that the software industry had no intention to cease the use of trade secrecy for software. Dual assertion of trade secrecy and copyright seemed to him incompatible with copyright's historical function of promoting the dissemination of knowledge.

Another development during this period was that the Copyright Office dropped its earlier requirement that the full text of source code be deposited with it. Now only the first and last 25 pages of source code had to be deposited to register a program. The office also decided it had no objection if the copyright owner blacked out some portions of the deposited source code so as not to reveal trade secrets. This new policy was said to be consistent with the new copyright statute that protected both published and unpublished works alike, in contrast to the prior statutes that had protected mainly published works. 15

With the enactment of the software copyright amendments, software developers had a legal remedy in the event that someone began to mass-market exact or near-exact copies of the developers' programs in competition with the owner of the copyright in the program. Unsurprisingly, the first software copyright cases involved exact copying of the whole or substantial portions of program code, and in them, the courts found copyright infringement. Copyright litigation in the mid- and late 1980s began to grapple with questions about what, besides program code, copyright protects about computer programs. Because the "second-generation" litigation affects the current legal framework for the protection of computer programs, the issues raised by these cases will be dealt with in the next section.

As CONTU Commissioner Hersey anticipated, software developers did not give up their claims to the valuable trade secrets embodied in their programs after enactment of the 1980 amendments to the copyright statute. To protect those secrets, developers began distributing their products in machine-readable form, often relying on "shrink-wrap" licensing agreements to limit consumer rights in the software. 16 Serious questions exist about the enforceability of shrink-wrap licenses, some because of their dubious contractual character 17 and some because of provisions that aim to deprive consumers of rights conferred by the copyright statute. 18 That has not led, however, to their disuse.

One common trade secret-related provision of shrink-wrap licenses, as well as of many negotiated licenses, is a prohibition against decompilation or disassembly of the program code. Such provisions are relied on as the basis of software developer assertions that notwithstanding the mass distribution of a program, the program should be treated as unpublished copyrighted works as to which virtually no fair use defenses can be raised. 19

Those who seek to prevent decompilation of programs tend to assert that since decompilation involves making an unauthorized copy of the program, it constitutes an improper means of obtaining trade secrets in the program. Under this theory, decompilation of program code results in three unlawful acts: copyright infringement (because of the unauthorized copy made during the decompilation process), trade secret misappropriation (because the secret has been obtained by improper means, i.e., by copyright infringement), and a breach of the licensing agreement (which prohibits decompilation).

Under this theory, copyright law would become the legal instrument by which trade secrecy could be maintained in a mass-marketed product, rather than a law that promotes the dissemination of knowledge. Others regard decompilation as a fair use of a mass-marketed program and, shrink-wrap restrictions to the contrary, as unenforceable. This issue has been litigated in the United States, but has not yet been resolved definitively. 20 The issue remains controversial both within the United States and abroad.

A second important legal development in the early 1980s—although one that took some time to become apparent—was a substantial shift in the U.S. Patent and Trademark Office (PTO) policy concerning the patentability of computer program-related inventions. This change occurred after the 1981 decision by the U.S. Supreme Court in Diamond v. Diehr, which ruled that a rubber curing process, one element of which was a computer program, was a patentable process. On its face, the Diehr decision seemed consistent with the 1966 Patent Office policy and seemed, therefore, not likely to lead to a significant change in patent policy regarding software innovations. 21 By the mid-1980s, however, the PTO had come to construe the Court's ruling broadly and started issuing a wide variety of computer program-related patents. Only "mathematical algorithms in the abstract" were now thought unpatentable. Word of the PTO's new receptivity to software patent applications spread within the patent bar and gradually to software developers.

During the early and mid-1980s, both the computer science field and the software industry grew very significantly. Innovative ideas in computer science and related research fields were widely published and disseminated. Software was still exchanged by researchers, but a new sensitivity to intellectual property rights began to arise, with general recognition that unauthorized copying of software might infringe copyrights, especially if done with a commercial purpose. This was not perceived as presenting a serious obstacle to research, for it was generally understood that a reimplementation of the program (writing one's own code) would be noninfringing. 22 Also, much of the exchange of software (and ideas about software) among researchers during the early and mid-1980s occurred outside the commercial marketplace. Increasingly, the exchanges took place with the aid of government-subsidized networks of computers.

Software firms often benefited from the plentiful availability of research about software, as well as from the availability of highly trained researchers who could be recruited as employees. Software developers began investing more heavily in research and development work. Some of the results of this research were published and/or exchanged at technical conferences, but much was kept as a trade secret and incorporated in new products.

By the late 1980s, concerns began arising in the computer science and related fields, as well as in the software industry and the legal community, about the degree of intellectual property protection needed to promote a continuation of the high level of innovation in the software industry. 23 Although most software development firms, researchers, and manufacturers of computers designed to be compatible with the leading firms' machines seemed to think that copyright (complemented by trade secrecy) was adequate to their needs, the changing self-perception of several major computer manufacturers led them to push for more and "stronger" protection. (This concern has been shared by some successful software firms whose most popular programs were being "cloned" by competitors.) Having come to realize that software was where the principal money of the future would be made, these computer firms began reconceiving themselves as software developers. As they did so, their perspective on software protection issues changed as well. If they were going to invest in software development, they wanted "strong" protection for it. They have, as a consequence, become among the most vocal advocates of strong copyright, as well as of patent protection for computer programs. 24

CURRENT LEGAL APPROACHES IN THE UNITED STATES

Software developers in the United States are currently protecting software products through one or more of the following legal protection mechanisms: copyright, trade secret, and/or patent law. Licensing agreements often supplement these forms of protection. Some software licensing agreements are negotiated with individual customers; others are printed forms found under the plastic shrink-wrap of a mass-marketed package. 25 Few developers rely on only one form of legal protection. Developers seem to differ somewhat on the mix of legal protection mechanisms they employ as well as on the degree of protection they expect from each legal device.

Although the availability of intellectual property protection has unquestionably contributed to the growth and prosperity of the U.S. software industry, some in the industry and in the research community are concerned that innovation and competition in this industry will be impeded rather than enhanced if existing intellectual property rights are construed very broadly. 26 Others, however, worry that courts may not construe intellectual property rights broadly enough to protect what is most valuable about software, and if too little protection is available, there may be insufficient incentives to invest in software development; hence innovation and competition may be retarded through underprotection. 27 Still others (mainly lawyers) are confident that the software industry will continue to prosper and grow under the existing intellectual property regimes as the courts "fill out" the details of software protection on a case-by-case basis as they have been doing for the past several years. 28

What's Not Controversial

Although the main purpose of the discussion of current approaches is to give an overview of the principal intellectual property issues about which there is controversy in the technical and legal communities, it may be wise to begin with a recognition of a number of intellectual property issues as to which there is today no significant controversy. Describing only the aspects of the legal environment as to which controversies exist would risk creating a misimpression about the satisfaction many software developers and lawyers have with some aspects of intellectual property rights they now use to protect their and their clients' products.

One uncontroversial aspect of the current legal environment is the use of copyright to protect against exact or near-exact copying of program code. Another is the use of copyright to protect certain aspects of user interfaces, such as videogame graphics, that are easily identifiable as "expressive" in a traditional copyright sense. Also relatively uncontroversial is the use of copyright protection for low-level structural details of programs, such as the instruction-by-instruction sequence of the code. 29

The use of trade secret protection for the source code of programs and other internally held documents concerning program design and the like is similarly uncontroversial. So too is the use of licensing agreements negotiated with individual customers under which trade secret software is made available to licensees when the number of licensees is relatively small and when there is a reasonable prospect of ensuring that licensees will take adequate measures to protect the secrecy of the software. Patent protection for industrial processes that have computer program elements, such as the rubber curing process in the Diehr case, is also uncontroversial.

Substantial controversies exist, however, about the application of copyright law to protect other aspects of software, about patent protection for other kinds of software innovations, about the enforceability of shrink-wrap licensing agreements, and about the manner in which the various forms of legal protection seemingly available to software developers interrelate in the protection of program elements (e.g., the extent to which copyright and trade secret protection can coexist in mass-marketed software).

Controversies Arising From Whelan v. Jaslow

Because quite a number of the most contentious copyright issues arise from the Whelan v. Jaslow decision, this subsection focuses on that case. In the summer of 1986, the Third Circuit Court of Appeals affirmed a trial court decision in favor of Whelan Associates in its software copyright lawsuit against Jaslow Dental Laboratories. 30 Jaslow's program for managing dental lab business functions used some of the same data and file structures as Whelan's program (to which Jaslow had access), and five subroutines of Jaslow's program functioned very similarly to Whelan's. The trial court inferred that there were substantial similarities in the underlying structure of the two programs based largely on a comparison of similarities in the user interfaces of the two programs, even though user interface similarities were not the basis for the infringement claim. Jaslow's principal defense was that Whelan's copyright protected only against exact copying of program code, and since there were no literal similarities between the programs, no copyright infringement had occurred.

In its opinion on this appeal, the Third Circuit stated that copyright protection was available for the "structure, sequence, and organization" (SSO) of a program, not just the program code. (The court did not distinguish between high- and low-level structural features of a program.) The court analogized copyright protection for program SSO to the copyright protection available for such things as detailed plot sequences in novels. The court also emphasized that the coding of a program was a minor part of the cost of development of a program. The court expressed fear that if copyright protection was not accorded to SSO, there would be insufficient incentives to invest in the development of software.

The Third Circuit's Whelan decision also quoted with approval from that part of the trial court opinion stating that similarities in the manner in which programs functioned could serve as a basis for a finding of copyright infringement. Although recognizing that user interface similarities did not necessarily mean that two programs had similar underlying structures (thereby correcting an error the trial judge had made), the appellate court thought that user interface similarities might still be some evidence of underlying structural similarities. In conjunction with other evidence in the case, the Third Circuit decided that infringement had properly been found.

Although a number of controversies have arisen out of the Whelan opinion, the aspect of the opinion that has received the greatest attention is the test the court used for determining copyright infringement in computer program cases. The "Whelan test" regards the general purpose or function of a program as its unprotectable "idea." All else about the program is, under the Whelan test, protectable "expression" unless there is only one or a very small number of ways to achieve the function (in which case idea and expression are said to be "merged," and what would otherwise be expression is treated as an idea). The sole defense this test contemplates for one who has copied anything more detailed than the general function of another program is that copying that detail was "necessary" to perform that program function. If there is in the marketplace another program that does the function differently, courts applying the Whelan test have generally been persuaded that the copying was unjustified and that what was taken must have been "expressive."

Although the Whelan test has been used in a number of subsequent cases, including the well-publicized Lotus v. Paperback case, 31 some judges have rejected it as inconsistent with copyright law and tradition, or have found ways to distinguish the Whelan case when employing its test would have resulted in a finding of infringement. 32

Many commentators assert that the Whelan test interprets copyright protection too expansively. 33 Although the court in Whelan did not seem to realize it, the Whelan test would give much broader copyright protection to computer programs than has traditionally been given to novels and plays, which are among the artistic and fanciful works generally accorded a broader scope of protection than functional kinds of writings (of which programs would seem to be an example). 34 The Whelan test would forbid reuse of many things people in the field tend to regard as ideas. 35 Some commentators have suggested that because innovation in software tends to be of a more incremental character than in some other fields, and especially given the long duration of copyright protection, the Whelan interpretation of the scope of copyright is likely to substantially overprotect software. 36

One lawyer-economist, Professor Peter Menell, has observed that the model of innovation used by the economists who did the study of software for CONTU is now considered to be an outmoded approach. 37 Those economists focused on a model that considered what incentives would be needed for development of individual programs in isolation. Today, economists would consider what protection would be needed to foster innovation of a more cumulative and incremental kind, such as has largely typified the software field. In addition, the economists on whose work CONTU relied did not anticipate the networking potential of software and consequently did not study what provisions the law should make in response to this phenomenon. Menell has suggested that with the aid of their now more refined model of innovation, economists today might make somewhat different recommendations on software protection than they did in the late 1970s for CONTU. 38

As a matter of copyright law, the principal problem with the Whelan test is its incompatibility with the copyright statute, the case law properly interpreting it, and traditional principles of copyright law. The copyright statute provides that not only ideas, but also processes, procedures, systems, and methods of operation, are unprotectable elements of copyrighted works. 39 This provision codifies some long-standing principles derived from U.S. copyright case law, such as the Supreme Court's century-old Baker v. Selden decision, which ruled that a second author did not infringe a first author's copyright when he put into his own book substantially similar ledger sheets to those in the first author's book. The reason the Court gave for its ruling was that Selden's copyright did not give him exclusive rights to the bookkeeping system, but only to his explanation or description of it. 40 The ordering and arrangement of columns and headings on the ledger sheets were part of the system; to get exclusive rights in this, the Court said that Selden would have to get a patent.

The statutory exclusion from copyright protection for methods, processes, and the like was added to the copyright statute in part to ensure that the scope of copyright in computer programs would not be construed too broadly. Yet, in cases in which the Whelan test has been employed, the courts have tended to find the presence of protectable "expression" when they perceive there to be more than a couple of ways to perform some function, seeming not to realize that there may be more than one "method" or "system" or "process" for doing something, none of which is properly protected by copyright law. The Whelan test does not attempt to exclude methods or processes from the scope of copyright protection, and its recognition of functionality as a limitation on the scope of copyright is triggered only when there are no alternative ways to perform program functions.

Whelan has been invoked by plaintiffs not only in cases involving similarities in the internal structural design features of programs, but also in many other kinds of cases. SSO can be construed to include internal interface specifications of a program, the layout of elements in a user interface, and the sequence of screen displays when program functions are executed, among other things. Even the manner in which a program functions can be said to be protectable by copyright law under Whelan. The case law on these issues and other software issues is in conflict, and resolution of these controversies cannot be expected very soon.

Traditionalist Versus Strong Protectionist View of What Copyright Law Does and Does Not Protect in Computer Programs

Traditional principles of copyright law, when applied to computer programs, would tend to yield only a "thin" scope of protection for them. Unquestionably, copyright protection would exist for the code of the program and the kinds of expressive displays generated when program instructions are executed, such as explanatory text and fanciful graphics, which are readily perceptible as traditional subject matters of copyright law. A traditionalist would regard copyright protection as not extending to functional elements of a program, whether at a high or low level of abstraction, or to the functional behavior that programs exhibit. Nor would copyright protection be available for the applied know-how embodied in programs, including program logic. 41 Copyright protection would also not be available for algorithms or other structural abstractions in software that are constituent elements of a process, method, or system embodied in a program.

Efficient ways of implementing a function would also not be protectable by copyright law under the traditionalist view, nor would aspects of software design that make the software easier to use (because this bears on program functionality). The traditionalist would also not regard making a limited number of copies of a program to study it and extract interface information or other ideas from the program as infringing conduct, because computer programs are a kind of work for which it is necessary to make a copy to "read" the text of the work. 42 Developing a program that incorporates interface information derived from decompilation would also, in the traditionalist view, be noninfringing conduct.

If decompilation and the use of interface information derived from the study of decompiled code were to be infringing acts, the traditionalist would regard copyright as having been turned inside out, for instead of promoting the dissemination of knowledge as has been its traditional purpose, copyright law would become the principal means by which trade secrets would be maintained in widely distributed copyrighted works. Instead of protecting only expressive elements of programs, copyright would become like a patent: a means by which to get exclusive rights to the configuration of a machine—without meeting stringent patent standards or following the strict procedures required to obtain patent protection. This too would seem to turn copyright inside out.

Because interfaces, algorithms, logic, and functionalities of programs are aspects of programs that make them valuable, it is understandable that some of those who seek to maximize their financial returns on software investments have argued that "strong" copyright protection is or should be available for all valuable features of programs, either as part of program SSO or under the Whelan "there's-another-way-to-do-it" test. 43 Congress seems to have intended for copyright law to be interpreted as to programs on a case-by-case basis, and if courts determine that valuable features should be considered "expressive," the strong protectionists would applaud this common law evolution. If traditional concepts of copyright law and its purposes do not provide an adequate degree of protection for software innovation, they see it as natural that copyright should grow to provide it. Strong protectionists tend to regard traditionalists as sentimental Luddites who do not appreciate that what matters is for software to get the degree of protection it needs from the law so that the industry will thrive.

Although some cases, most notably the Whelan and Lotus decisions, have adopted the strong protectionist view, traditionalists will tend to regard these decisions as flawed and unlikely to be affirmed in the long run because they are inconsistent with the expressed legislative intent to have traditional principles of copyright law applied to software. Some copyright traditionalists favor patent protection for software innovations on the ground that the valuable functional elements of programs do need protection to create proper incentives for investing in software innovations, but that this protection should come from patent law, not from copyright law.

Controversy Over "Software Patents"

Although some perceive patents as a way to protect valuable aspects of programs that cannot be protected by copyright law, those who argue for patents for software innovations do not rely on the "gap-filling" concern alone. As a legal matter, proponents of software patents point out that the patent statute makes new, nonobvious, and useful "processes" patentable. Programs themselves are processes; they also embody processes. 44 Computer hardware is clearly patentable, and it is a commonplace in the computing field that any task for which a program can be written can also be implemented in hardware. This too would seem to support the patentability of software.

Proponents also argue that protecting program innovations by patent law is consistent with the constitutional purpose of patent law, which is to promote progress in the "useful arts." Computer program innovations are technological in nature, which is said to make them part of the useful arts to which the Constitution refers. Proponents insist that patent law has the same potential for promoting progress in the software field as it has had for promoting progress in other technological fields. They regard attacks on patents for software innovations as reflective of the passing of the frontier in the software industry, a painful transition period for some, but one necessary if the industry is to have sufficient incentives to invest in software development.

Some within the software industry and the technical community, however, oppose patents for software innovations. 45 Opponents tend to make two kinds of arguments against software patents, often without distinguishing between them. One set of arguments questions the ability of the PTO to deal well with software patent applications. Another set raises more fundamental questions about software patents. Even assuming that the PTO could begin to do a good job at issuing software patents, some question whether innovation in the software field will be properly promoted if patents become widely available for software innovations. The main points of both sets of arguments are developed below.

Much of the discussion in the technical community has focused on "bad" software patents that have been issued by the PTO. Some patents are considered bad because the innovation was, unbeknownst to the PTO, already in the state of the art prior to the date of invention claimed in the patent. Others are considered bad because critics assert that the innovations they embody are too obvious to be deserving of patent protection. Still others are said to be bad because they are tantamount to a claim for performing a particular function by computer or to a claim for a law of nature, neither of which is regarded as patentable subject matter. Complaints abound that the PTO, after decades of not keeping up with developments in this field, is so far out of touch with what has been and is happening in the field as to be unable to make appropriate judgments on novelty and nonobviousness issues. Other complaints relate to the office's inadequate classification scheme for software and lack of examiners with suitable education and experience in computer science and related fields to make appropriate judgments on software patent issues. 46

A somewhat different point is made by those who assert that the software industry has grown to its current size and prosperity without the aid of patents, which causes them to question the need for patents to promote innovation in this industry. 47 The highly exclusionary nature of patents (any use of the innovation without the patentee's permission is infringing) contrasts sharply with the tradition of independent reinvention in this field. The high expense associated with obtaining and enforcing patents raises concerns about the increased barriers to entry that may be created by the patenting of software innovations. Since much of the innovation in this industry has come from small firms, policies that inhibit entry by small firms may not promote innovation in this field in the long run. Similar questions arise as to whether patents will promote a proper degree of innovation in an incremental industry such as the software industry. It would be possible to undertake an economic study of conditions that have promoted and are promoting progress in the software industry to serve as a basis for a policy decision on software patents, but this has not been done to date.

Some computer scientists and mathematicians are also concerned about patents that have been issuing for algorithms, 48 which they regard as discoveries of fundamental truths that should not be owned by anyone. Because any use of a patented algorithm within the scope of the claims—whether by an academic or a commercial programmer, whether one knew of the patent or not—may be an infringement, some worry that research on algorithms will be slowed down by the issuance of algorithm patents. One mathematical society has recently issued a report opposing the patenting of algorithms. 49 Others, including Richard Stallman, have formed a League for Programming Freedom.

There is substantial case law to support the software patent opponent position, notwithstanding the PTO change in policy. 50 Three U.S. Supreme Court decisions have stated that computer program algorithms are unpatentable subject matter. Other case law affirms the unpatentability of processes that involve the manipulation of information rather than the transformation of matter from one physical state to another.

One other concern worth mentioning if both patents and copyrights are used to protect computer program innovations is whether a meaningful boundary line can be drawn between the patent and copyright domains as regards software. 51 A joint report of the U.S. PTO and the Copyright Office optimistically concludes that no significant problems will arise from the coexistence of these two forms of protection for software because copyright law will only protect program "expression" whereas patent law will only protect program "processes." 52

Notwithstanding this report, I continue to be concerned with the patent/copyright interface because of the expansive interpretations some cases, particularly Whelan, have given to the scope of copyright protection for programs. This prefigures a significant overlap of copyright and patent law as to software innovations. This overlap would undermine important economic and public policy goals of the patent system, which generally leaves in the public domain those innovations not novel or nonobvious enough to be patented. Mere "originality" in a copyright sense is not enough to make an innovation in the useful arts protectable under U.S. law. 53

A concrete example may help illustrate this concern. Some patent lawyers report getting patents on data structures for computer programs.

The Whelan decision relied in part on similarities in data structures to prove copyright infringement. Are data structures "expressive" or "useful"? When one wants to protect a data structure of a program by copyright, does one merely call it part of the SSO of the program, whereas if one wants to patent it, does one call it a method (i.e., a process) of organizing data for accomplishing certain results? What if anything does copyright's exclusion from protection of processes embodied in copyrighted works mean as applied to data structures? No clear answer to these questions emerges from the case law.

Nature of Computer Programs and Exploration of a Modified Copyright Approach

It may be that the deeper problem is that computer programs, by their very nature, challenge or contradict some fundamental assumptions of the existing intellectual property regimes. Underlying the existing regimes of copyright and patent law are some deeply embedded assumptions about the very different nature of two kinds of innovations that are thought to need very different kinds of protection owing to some important differences in the economic consequences of their protection. 54

In the United States, these assumptions derive largely from the U.S. Constitution, which specifically empowers Congress "to promote the progress of science [i.e., knowledge] and useful arts [i.e., technology], by securing for limited times to authors and inventors the exclusive right to their respective writings and discoveries." 55 This clause has historically been parsed as two separate clauses packaged together for convenience: one giving Congress power to enact laws aimed at promoting the progress of knowledge by giving authors exclusive rights in their writings, and the other giving Congress power to promote technological progress by giving inventors exclusive rights in their technological discoveries. Copyright law implements the first power, and patent law the second.

Owing partly to the distinctions between writings and machines, which the constitutional clause itself set up, copyright law has excluded machines and other technological subject matters from its domain. 56 Even when described in a copyrighted book, an innovation in the useful arts was considered beyond the scope of copyright protection. The Supreme Court's Baker v. Selden decision reflects this view of the constitutional allocation. Similarly, patent law has historically excluded printed matter (i.e., the contents of writings) from its domain, notwithstanding the fact that printed matter may be a product of a manufacturing process. 57 Also excluded from the patent domain have been methods of organizing, displaying, and manipulating information (i.e., processes that might be embodied in writings, for example mathematical formulas), notwithstanding the fact that "processes" are named in the statute as patentable subject matter. They were not, however, perceived to be "in the useful arts" within the meaning of the constitutional clause.

The constitutional clause has been understood as both a grant of power and a limitation on power. Congress cannot, for example, grant perpetual patent rights to inventors, for that would violate the "limited times" provision of the Constitution. Courts have also sometimes ruled that Congress cannot, under this clause, grant exclusive rights to anyone but authors and inventors. In the late nineteenth century, the Supreme Court struck down the first federal trademark statute on the ground that Congress did not have power to grant rights under this clause to owners of trademarks who were neither "authors" nor "inventors." 58 A similar view was expressed in last year's Feist Publications v. Rural Telephone Services decision by the Supreme Court, which repeatedly stated that Congress could not constitutionally protect the white pages of telephone books through copyright law because to be an "author" within the meaning of the Constitution required some creativity in expression that white pages lacked. 59

Still other Supreme Court decisions have suggested that Congress could not constitutionally grant exclusive rights to innovators in the useful arts who were not true "inventors." 60 Certain economic assumptions are connected with this view, including the assumption that more modest innovations in the useful arts (the work of a mere mechanic) will be forthcoming without the grant of the exclusive rights of a patent, but that the incentives of patent rights are necessary to make people invest in making significant technological advances and share the results of their work with the public instead of keeping them secret.

One reason the United States does not have a copyright-like form of protection for industrial designs, as do many other countries, is because of lingering questions about the constitutionality of such legislation. In addition, concerns exist that the economic consequences of protecting uninventive technological advances will be harmful. So powerful are the prevailing patent and copyright paradigms that when Congress was in the process of considering the adoption of a copyright-like form of intellectual property protection for semiconductor chip designs, there was considerable debate about whether Congress had constitutional power to enact such a law. It finally decided it did have such power under the commerce clause, but even then was not certain.

As this discussion reveals, U.S. intellectual property law has long assumed that something is either a writing (in which case it is protectable, if at all, by copyright law) or a machine (in which case it is protectable, if at all, by patent law), but cannot be both at the same time. However, as Professor Randall Davis has so concisely said, software is "a machine whose medium of construction happens to be text." 61 Davis regards the act of creating computer programs as inevitably one of both authorship and invention. There may be little or nothing about a computer program that is not, at base, functional in nature, and nothing about it that does not have roots in the text. Because of this, it will inevitably be difficult to draw meaningful boundaries for patents and copyrights as applied to computer programs.

Another aspect of computer programs that challenges the assumptions of existing intellectual property systems is reflected in another of Professor Davis's observations, namely, that "programs are not only texts; they also behave." 62 Much of the dynamic behavior of computer programs is highly functional in nature. If one followed traditional copyright principles, this functional behavior—no matter how valuable it might be—would be considered outside the scope of copyright law. 63 Although the functionality of program behavior might seem at first glance to mean that patent protection would be the obvious form of legal protection for it, as a practical matter, drafting patent claims that would adequately capture program behavior as an invention is infeasible. There are at least two reasons for this: programs are able to exhibit such a large number and variety of states that claims could not reasonably cover them, and program behavior has a "gestalt"-like character that makes a more copyright-like approach desirable.

Some legal scholars have argued that because of their hybrid character as both writings and machines, computer programs need a somewhat different legal treatment than either traditional patent or copyright law would provide. 64 They have warned of distortions in the existing legal systems likely to occur if one attempts to integrate such a hybrid into the traditional systems as if it were no different from the traditional subject matters of these systems. 65 Even if the copyright and patent laws could be made to perform their tasks with greater predictability than is currently the case, these authors warn that such regimes may not provide the kind of protection that software innovators really need, for most computer programs will be legally obvious for patent purposes, and programs are, over time, likely to be assimilated within copyright in a manner similar to that given to "factual" and "functional" literary works that have only "thin" protection against piracy. 66

Professor Reichman has reported on the recurrent oscillations between states of under- and overprotection when legal systems have tried to cope with another kind of legal hybrid, namely, industrial designs (sometimes referred to as "industrial art"). Much the same pattern seems to be emerging in regard to computer programs, which are, in effect, "industrial literature." 67

The larger problem these hybrids present is that of protecting valuable forms of applied know-how embodied in incremental innovation that cannot successfully be maintained as trade secrets:

[M]uch of today's most advanced technology enjoys a less favorable competitive position than that of conventional machinery because the unpatentable, intangible know-how responsible for its commercial value becomes embodied in products that are distributed on the open market. A product of the new technologies, such as a computer program, an integrated circuit design, or even a biogenetically altered organism may thus bear its know-how on its face, a condition that renders it as vulnerable to rapid appropriation by second-comers as any published literary or artistic work.

From this perspective, a major problem with the kinds of innovative know-how underlying important new technologies is that they do not lend themselves to secrecy even when they represent the fruit of enormous investment in research and development. Because third parties can rapidly duplicate the embodied information and offer virtually the same products at lower prices than those of the originators, there is no secure interval of lead time in which to recuperate the originators' initial investment or their losses from unsuccessful essays, not to mention the goal of turning a profit. 68

From a behavioral standpoint, investors in applied scientific know-how find the copyright paradigm attractive because of its inherent disposition to supply artificial lead time to all comers without regard to innovative merit and without requiring originators to preselect the products that are most worthy of protection. 69

Full copyright protection, however, with its broad notion of equivalents geared to derivative expressions of an author's personality is likely to disrupt the workings of the competitive market for industrial products. For this and other reasons, Professor Reichman argues that a modified copyright approach to the protection of computer programs (and other legal hybrids) would be a preferable framework for protecting the applied know-how they embody than either the patent or the copyright regime would presently provide. Similar arguments can be made for a modified form of copyright protection for the dynamic behavior of programs. A modified copyright approach might involve a short duration of protection for original valuable functional components of programs. It could be framed to supplement full copyright protection for program code and traditionally expressive elements of text and graphics displayed when programs execute, features of software that do not present the same dangers of competitive disruption from full copyright protection.

The United States is, in large measure, already undergoing the development of a sui generis law for protection of computer software through case-by-case decisions in copyright lawsuits. Devising a modified copyright approach to protecting certain valuable components that are not suitably protected under the current copyright regime would have the advantage of allowing a conception of the software protection problem as a whole, rather than on a piecemeal basis as occurs in case-by-case litigation in which the skills of certain attorneys and certain facts may end up causing the law to develop in a skewed manner. 70

There are, however, a number of reasons said to weigh against sui generis legislation for software, among them the international consensus that has developed on the use of copyright law to protect software and the trend toward broader use of patents for software innovations. Some also question whether Congress would be able to devise a more appropriate sui generis system for protecting software than that currently provided by copyright. Some are also opposed to sui generis legislation for new technology products such as semiconductor chips and software on the ground that new intellectual property regimes will make intellectual property law more complicated, confusing, and uncertain.

Although there are many today who ardently oppose sui generis legislation for computer programs, these same people may well become among the most ardent proponents of such legislation if the U.S. Supreme Court, for example, construes the scope of copyright protection for programs to be quite thin, and reiterates its rulings in Benson, Flook, and Diehr that patent protection is unavailable for algorithms and other information processes embodied in software.

INTERNATIONAL PERSPECTIVES

After adopting copyright as a form of legal protection for computer programs, the United States campaigned vigorously around the world to persuade other nations to protect computer programs by copyright law as well. These efforts have been largely successful. Although copyright is now an international norm for the protection of computer software, the fine details of what copyright protection for software means, apart from protection against exact copying of program code, remain somewhat unclear in other nations, just as in the United States.

Other industrialized nations have also tended to follow the U.S. lead concerning the protection of computer program-related inventions by patent law. 71 Some countries that in the early 1960s were receptive to the patenting of software innovations became less receptive after the Gottschalk v. Benson decision by the U.S. Supreme Court. Some even adopted legislation excluding computer programs from patent protection. More recently, these countries are beginning to issue more program-related patents, once again paralleling U.S. experience, although as in the United States, the standards for patentability of program-related inventions are somewhat unclear. 72 If the United States and Japan continue to issue a large number of computer program-related patents, it seems quite likely other nations will follow suit.

There has been strong pressure in recent years to include relatively specific provisions about intellectual property issues (including those affecting computer programs) as part of the international trade issues within the framework of the General Agreement on Tariffs and Trade (GATT). 73 For a time, the United States was a strong supporter of this approach to resolution of disharmonies among nations on intellectual property issues affecting software. The impetus for this seems to have slackened, however, after U.S. negotiators became aware of a lesser degree of consensus among U.S. software developers on certain key issues than they had thought was the case. Since the adoption of its directive on software copyright law, the European Community (EC) has begun pressing for international adoption of its position on a number of important software issues, including its copyright rule on decompilation of program code.

There is a clear need, given the international nature of the market for software, for a substantial international consensus on software protection issues. However, because there are so many hotly contested issues concerning the extent of copyright and the availability of patent protection for computer programs yet to be resolved, it may be premature to include very specific rules on these subjects in the GATT framework.

Prior to the adoption of the 1991 European Directive on the Protection of Computer Programs, there was general acceptance in Europe of copyright as a form of legal protection for computer programs. A number of nations had interpreted existing copyright statutes as covering programs. Others took legislative action to extend copyright protection to software. There was, however, some divergence in approach among the member nations of the EC in the interpretation of copyright law to computer software. 74

France, for example, although protecting programs under its copyright law, put software in the same category as industrial art, a category of work that is generally protected in Europe for 25 years instead of the life plus 50-year term that is the norm for literary and other artistic works. German courts concluded that to satisfy the "originality" standard of German copyright law, the author of a program needed to demonstrate that the program was the result of more than an average programmer's skill, a seemingly patentlike standard. In addition, Switzerland (a non-EC member but European nonetheless) nearly adopted an approach that treated both semiconductor chip designs and computer programs under a new copyright-like law.

Because of these differences and because it was apparent that computer programs would become an increasingly important item of commerce in the European Community, the EC undertook in the late 1980s to develop a policy concerning intellectual property protection for computer programs to which member nations should harmonize their laws. There was some support within the EC for creating a new law for the protection of software, but the directorate favoring a copyright approach won this internal struggle over what form of protection was appropriate for software.

In December 1988 the EC issued a draft directive on copyright protection for computer programs. This directive was intended to spell out in considerable detail in what respects member states should have uniform rules on copyright protection for programs. (The European civil law tradition generally prefers specificity in statutory formulations, in contrast with the U.S. common law tradition, which often prefers case-by-case adjudication of disputes as a way to fill in the details of a legal protection scheme.)

The draft directive on computer programs was the subject of intense debate within the European Community, as well as the object of some intense lobbying by major U.S. firms who were concerned about a number of issues, but particularly about what rule would be adopted concerning decompilation of program code and protection of the internal interfaces of programs. Some U.S. firms, among them IBM Corp., strongly opposed any provision that would allow decompilation of program code and sought to have interfaces protected; other U.S. firms, such as Sun Microsystems, sought a rule that would permit decompilation and would deny protection to internal interfaces. 75

The final EC directive published in 1991 endorses the view that computer programs should be protected under member states' copyright laws as literary works and given at least 50 years of protection against unauthorized copying. 76 It permits decompilation of program code only if and to the extent necessary to obtain information to create an interoperable program. The inclusion in another program of information necessary to achieve interoperability seems, under the final directive, to be lawful.

The final EC directive states that "ideas" and "principles" embodied in programs are not protectable by copyright, but does not provide examples of what these terms might mean. The directive contains no exclusion from protection of such things as processes, procedures, methods of operation, and systems, as the U.S. statute provides. Nor does it clearly exclude protection of algorithms, interfaces, and program logic, as an earlier draft would have done. Rather, the final directive indicates that to the extent algorithms, logic, and interfaces are ideas, they are unprotectable by copyright law. In this regard, the directive seems, quite uncharacteristically for its civil law tradition, to leave much detail about how copyright law will be applied to programs to be resolved by litigation.

Having just finished the process of debating the EC directive about copyright protection of computer programs, intellectual property specialists in the EC have no interest in debating the merits of any sui generis approach to software protection, even though the only issue the EC directive really resolved may have been that of interoperability. Member states will likely have to address another controversial issue—whether or to what extent user interests in standardization of user interfaces should limit the scope of copyright protection for programs—as they act on yet another EC directive, one that aims to standardize user interfaces of computer programs. Some U.S. firms may perceive this latter directive as an effort to appropriate valuable U.S. product features.

Japan was the first major industrialized nation to consider adoption of a sui generis approach to the protection of computer programs. 77 Its Ministry of International Trade and Industry (MITI) published a proposal that would have given 15 years of protection against unauthorized copying to computer programs that could meet a copyright-like originality standard under a copyright-like registration regime. MITI attempted to justify its proposed different treatment for computer programs as one appropriate to the different character of programs, compared with traditional copyrighted works. 78 The new legal framework was said to respond and be tailored to the special character of programs. American firms, however, viewed the MITI proposal, particularly its compulsory license provisions, as an effort by the Japanese to appropriate the valuable products of the U.S. software industry. Partly as a result of U.S. pressure, the MITI proposal was rejected by the Japanese government, and the alternative copyright proposal made by the ministry with jurisdiction over copyright law was adopted.

Notwithstanding their inclusion in copyright law, computer programs are a special category of protected work under Japanese law. Limiting the scope of copyright protection for programs is a provision indicating that program languages, rules, and algorithms are not protected by copyright law. 79 Japanese case law under this copyright statute has proceeded along lines similar to U.S. case law with regard to exact and near-exact copying of program code and graphical aspects of videogame programs, 80 but there have been some Japanese court decisions interpreting the exclusion-from-protection provisions in a manner seemingly at odds with some U.S. decisions.

The Tokyo High Court, for example, has opined that the processing flow of a program (an aspect of a program said to be protectable by U.S. law in the Whelan case) is an algorithm within the meaning of the copyright limitation provision. 81 Another decision seems to bear out Professor Karjala's prediction that Japanese courts would interpret the programming language limitation to permit firms to make compatible software. 82 One Japanese decision can be read to prohibit reverse engineering of program code, but because that case involved not only disassembly of program code but also distribution of a clearly infringing program, the legality of intermediate copying to discern such things as interface information remains unclear in Japan. 83

Other Nations

The United States has been pressing a number of nations to give "proper respect" to U.S. intellectual property products, including computer programs. In some cases, as in its dealings with the People's Republic of China, the United States has been pressing for new legislation to protect software under copyright law. In some cases, as in its dealings with Thailand, the United States has been pressing for more vigorous enforcement of intellectual property laws as they affect U.S. intellectual property products. In other cases, as in its dealings with Brazil, the United States pressed for repeal of sui generis legislation that disadvantaged U.S. software producers, compared with Brazilian developers. The United States has achieved some success in these efforts. Despite these successes, piracy of U.S.-produced software and other intellectual property products remains a substantial source of concern.

FUTURE CHALLENGES

Many of the challenges posed by use of existing intellectual property laws to protect computer programs have been discussed in previous sections. That discussion, however, maps only the landscape of legal issues of widespread concern today. Below are some suggestions about areas in which computer programs may present legal difficulties in the future.

Advanced Software Systems

It has thus far been exceedingly difficult for the legal system to resolve even relatively simple disputes about software intellectual property rights, such as those involved in the Lotus v. Paperback Software case. This does not bode well for how the courts are likely to deal with more complex problems presented by more complex software in future cases. The difficulties arise partly from the lack of familiarity of judges with the technical nature of computers and software, and partly from the lack of close analogies within the body of copyright precedents from which resolutions of software issues might be drawn. The more complex the software, the greater is the likelihood that specially trained judges will be needed to resolve intellectual property disputes about the software. Some advanced software systems are also likely to be sufficiently different from traditional kinds of copyrighted works that the analogical distance between the precedents and a software innovation may make it difficult to predict how copyright law should be applied to it. What copyright protection should be available, for example, to a user interface that responds to verbal commands, gestures, or movements of eyeballs?

Digital Media

The digital medium itself may require adaptation of the models underlying existing intellectual property systems. 84 Copyright law is built largely on the assumption that authors and publishers can control the manufacture and distribution of copies of protected works emanating from a central source. The ease with which digital works can be copied, redistributed, and used by multiple users, as well as the compactness and relative invisibility of works in digital form, have already created substantial incentives for developers of digital media products to focus their commercialization efforts on controlling the uses of digital works, rather than on the distribution of copies, as has more commonly been the rule in copyright industries.

Rules designed for controlling the production and distribution of copies may be difficult to adapt to a system in which uses need to be controlled. Some digital library and hypertext publishing systems seem to be designed to bypass copyright law (and its public policy safeguards, such as the fair use rule) and establish norms of use through restrictive access licensing agreements. 85 Whether the law will eventually be used to regulate conditions imposed on access to these systems, as it has regulated access to such communication media as broadcasting, remains to be seen. However, the increasing convergence of intellectual property policy, broadcast and telecommunications policy, and other aspects of information policy seems inevitable.

There are already millions of people connected to networks of computers, who are thereby enabled to communicate with one another with relative ease, speed, and reliability. Plans are afoot to add millions more and to allow a wide variety of information services to those connected to the networks, some of which are commercial and some of which are noncommercial in nature. Because networks of this type and scope are a new phenomenon, it would seem quite likely that some new intellectual property issues will arise as the use of computer networks expands. The more commercial the uses of the networks, the more likely intellectual property disputes are to occur.

More of the content distributed over computer networks is copyrighted than its distributors seem to realize, but even as to content that has been recognized as copyrighted, there is a widespread belief among those who communicate over the net that at least noncommercial distributions of content—no matter the number of recipients—are "fair uses" of the content. Some lawyers would agree with this; others would not. Those responsible for the maintenance of the network may need to be concerned about potential liability until this issue is resolved.

A different set of problems may arise when commercial uses are made of content distributed over the net. Here the most likely disputes are those concerning how broad a scope of derivative work rights copyright owners should have. Some owners of copyrights can be expected to resist allowing anyone but themselves (or those licensed by them) to derive any financial benefit from creating a product or service that is built upon the value of their underlying work. Yet value-added services may be highly desirable to consumers, and the ability of outsiders to offer these products and services may spur beneficial competition. At the moment, the case law generally regards a copyright owner's derivative work right as infringed only if a recognizable block of expression is incorporated into another work. 86 However, the ability of software developers to provide value-added products and services that derive value from the underlying work without copying expression from it may lead some copyright owners to seek to extend the scope of derivative work rights.

Patents and Information Infrastructure of the Future

If patents are issued for all manner of software innovations, they are likely to play an important role in the development of the information infrastructure of the future. Patents have already been issued for hypertext navigation systems, for such things as latent semantic indexing algorithms, and for other software innovations that might be used in the construction of a new information infrastructure. Although it is easy to develop a list of the possible pros and cons of patent protection in this domain, as in the more general debate about software patents, it is worth noting that patents have not played a significant role in the information infrastructure of the past or of the present. How patents would affect the development of the new information infrastructure has not been given the study this subject may deserve.

Conflicts Between Information Haves and Have-Nots on an International Scale

When the United States was a developing nation and a net importer of intellectual property products, it did not respect the copyright interests of any authors but its own. Charles Dickens may have made some money from his U.S. tours, at which he spoke at public meetings, but he never made a dime from the publication of his works in the United States. Now that the United States is a developed nation and a net exporter of intellectual property products, its perspective has changed on the rights of developing nations to determine for themselves what intellectual property rights to accord to the products of firms of the United States and other developed nations. Given the greater importance nowadays of intellectual property products, both to the United States and to the world economy, it is foreseeable that there will be many occasions on which developed and developing nations will disagree on intellectual property issues.

The United States will face a considerable challenge in persuading other nations to subscribe to the same detailed rules that it has for dealing with intellectual property issues affecting computer programs. It may be easier for the United States to deter outright "piracy" (unauthorized copying of the whole or substantially the whole of copyrighted works) of U.S. intellectual property products than to convince other nations that they must adopt the same rules as the United States has for protecting software.

It is also well for U.S. policymakers and U.S. firms to contemplate the possibility that U.S. firms may not always hold the leading position in the world market for software products that they enjoy today. In pushing for very "strong" intellectual property protection for software in the expectation that it will help preserve the U.S. advantage in the world market, U.S. policymakers should be careful not to adopt rules that may substantially disadvantage the United States in the world market of the future if, for reasons not foreseen today, it loses the lead it currently enjoys in the software market.




Fostering ethical thinking in computing


Traditional computer scientists and engineers are trained to develop solutions for specific needs, but aren’t always trained to consider their broader implications. Each new technology generation, and particularly the rise of artificial intelligence, leads to new kinds of systems, new ways of creating tools, and new forms of data, for which norms, rules, and laws frequently have yet to catch up. The kinds of impact that such innovations have in the world have often not been apparent until many years later.

As part of the efforts in Social and Ethical Responsibilities of Computing (SERC) within the MIT Stephen A. Schwarzman College of Computing, a new case studies series examines social, ethical, and policy challenges of present-day efforts in computing with the aim of facilitating the development of responsible “habits of mind and action” for those who create and deploy computing technologies.

“Advances in computing have undeniably changed much of how we live and work. Understanding and incorporating broader social context is becoming ever more critical,” says Daniel Huttenlocher, dean of the MIT Schwarzman College of Computing. “This case study series is designed to be a basis for discussions in the classroom and beyond, regarding social, ethical, economic, and other implications so that students and researchers can pursue the development of technology across domains in a holistic manner that addresses these important issues.”

A modular system

By design, the case studies are brief and modular to allow users to mix and match the content to fit a variety of pedagogical needs. Series editors David Kaiser and Julie Shah, who are the associate deans for SERC, structured the cases primarily to be appropriate for undergraduate instruction across a range of classes and fields of study.

“Our goal was to provide a seamless way for instructors to integrate cases into an existing course or cluster several cases together to support a broader module within a course. They might also use the cases as a starting point to design new courses that focus squarely on themes of social and ethical responsibilities of computing,” says Kaiser, the Germeshausen Professor of the History of Science and professor of physics.

Shah, an associate professor of aeronautics and astronautics and a roboticist who designs systems in which humans and machines operate side by side, expects that the cases will also be of interest to those outside of academia, including computing professionals, policy specialists, and general readers. In curating the series, Shah says that “we interpret ‘social and ethical responsibilities of computing’ broadly to focus on perspectives of people who are affected by various technologies, as well as focus on perspectives of designers and engineers.”

The cases are not limited to a particular format and can take shape in various forms — from magazine-like feature articles or Socratic dialogues to choose-your-own-adventure stories or role-playing games grounded in empirical research. Each case study is brief, but includes accompanying notes and references to facilitate more in-depth exploration of a given topic. Multimedia projects will also be considered. “The main goal is to present important material — based on original research — in engaging ways to broad audiences of non-specialists,” says Kaiser.

The SERC case studies are specially commissioned and written by scholars who conduct research centrally on the subject of the piece. Kaiser and Shah approached researchers from within MIT as well as from other academic institutions to bring in a mix of diverse voices on a spectrum of topics. Some cases focus on a particular technology or on trends across platforms, while others assess social, historical, philosophical, legal, and cultural facets that are relevant for thinking critically about current efforts in computing and data sciences.

The cases published in the inaugural issue place readers in various settings that challenge them to consider the social and ethical implications of computing technologies, such as how social media services and surveillance tools are built; the racial disparities that can arise from deploying facial recognition technology in unregulated, real-world settings; the biases of risk prediction algorithms in the criminal justice system; and the politicization of data collection.

"Most of us agree that we want computing to work for social good, but which good? Whose good? Whose needs and values and worldviews are prioritized and whose are overlooked?” says Catherine D’Ignazio, an assistant professor of urban science and planning and director of the Data + Feminism Lab at MIT.

D’Ignazio’s case for the series, co-authored with Lauren Klein, an associate professor in the English and Quantitative Theory and Methods departments at Emory University, introduces readers to the idea that while data are useful, they are not always neutral. “These case studies help us understand the unequal histories that shape our technological systems as well as study their disparate outcomes and effects. They are an exciting step towards holistic, sociotechnical thinking and making."

Rigorously reviewed

Kaiser and Shah formed an editorial board composed of 55 faculty members and senior researchers associated with 19 departments, labs, and centers at MIT, and instituted a rigorous peer-review policy modeled on that commonly adopted by specialized journals. Members of the editorial board will also help commission topics for new cases and help identify authors for a given topic.

For each submission, the series editors collect four to six peer reviews, with reviewers mostly drawn from the editorial board. For each case, half the reviewers come from fields in computing and data sciences and half from fields in the humanities, arts, and social sciences, to ensure balance of topics and presentation within a given case study and across the series.

“Over the past two decades I’ve become a bit jaded when it comes to the academic review process, and so I was particularly heartened to see such care and thought put into all of the reviews," says Hany Farid, a professor at the University of California at Berkeley with a joint appointment in the Department of Electrical Engineering and Computer Sciences and the School of Information. “The constructive review process made our case study significantly stronger.”

Farid’s case, “The Dangers of Risk Prediction in the Criminal Justice System,” which he penned with Julia Dressel, recently a student of computer science at Dartmouth College, is one of the four commissioned pieces featured in the inaugural issue.

Cases are additionally reviewed by undergraduate volunteers, who help the series editors gauge each submission for balance, accessibility for students in multiple fields of study, and possibilities for adoption in specific courses. The students also work with them to create original homework problems and active learning projects to accompany each case study, to further facilitate adoption of the original materials across a range of existing undergraduate subjects.

“I volunteered to work with this group because I believe that it’s incredibly important for those working in computer science to include thinking about ethics not as an afterthought, but integrated into every step and decision that is made,” says Annie Snyder, a mathematical economics sophomore and a member of the MIT Schwarzman College of Computing’s Undergraduate Advisory Group. “While this is a massive issue to take on, this project is an amazing opportunity to start building an ethical culture amongst the incredibly talented students at MIT who will hopefully carry it forward into their own projects and workplace.”

New sets of case studies, produced with support from the MIT Press’ Open Publishing Services program, will be published twice a year via the Knowledge Futures Group’s PubPub platform. The SERC case studies are made available for free on an open-access basis, under Creative Commons licensing terms. Authors retain copyright, enabling them to reuse and republish their work in more specialized scholarly publications.

“It was important to us to approach this project in an inclusive way and lower the barrier for people to be able to access this content. These are complex issues that we need to deal with, and we hope that by making the cases widely available, more people will engage in social and ethical considerations as they’re studying and developing computing technologies,” says Shah.


Case Studies in Social and Ethical Responsibilities of Computing


The MIT Case Studies in Social and Ethical Responsibilities of Computing (SERC) series aims to advance new efforts within and beyond the Schwarzman College of Computing. The specially commissioned and peer-reviewed cases are brief and intended to be effective for undergraduate instruction across a range of classes and fields of study; they may also be of interest to computing professionals, policy specialists, and general readers.

The series editors interpret “social and ethical responsibilities of computing” broadly. Some cases focus closely on particular technologies, others on trends across technological platforms. Others examine social, historical, philosophical, legal, and cultural facets that are essential for thinking critically about present-day efforts in computing. Special efforts are made to solicit cases on topics beyond the United States and cases that highlight the perspectives of people affected by various technologies, in addition to the perspectives of designers and engineers.

New sets of case studies, produced with support from the MIT Press’s Open Publishing Services program, are published twice a year and made available via the Knowledge Futures Group’s PubPub platform. The SERC case studies are made available for free on an open-access basis, under Creative Commons licensing terms. Authors retain copyright, enabling them to reuse and republish their work in more specialized scholarly publications.

If you have suggestions for a new case study or comments on a published case, the series editors would like to hear from you! Please reach out to [email protected].

Winter 2024

Integrals and Integrity: Generative AI Tries to Learn Cosmology

How Interpretable Is “Interpretable” Machine Learning?

AI’s Regimes of Representation: A Community-Centered Study of Text-to-Image Models in South Asia

Past Issues

Summer 2023

Pretrial Risk Assessment on the Ground: Algorithms, Judgments, Meaning, and Policy by Cristopher Moore, Elise Ferguson, and Paul Guerin

To Search and Protect? Content Moderation and Platform Governance of Explicit Image Material by Mitali Thakor, Sumaiya Sabnam, Ransho Ueno, and Ella Zaslow

Winter 2023

Emotional Attachment to AI Companions and European Law by Claire Boine

Algorithmic Fairness in Chest X-ray Diagnosis: A Case Study by Haoran Zhang, Thomas Hartvigsen, and Marzyeh Ghassemi

The Right to Be an Exception to a Data-Driven Rule by Sarah H. Cen and Manish Raghavan

Twitter Gamifies the Conversation by C. Thi Nguyen, Meica Magnani, and Susan Kennedy

Summer 2022

“Porsche Girl”: When a Dead Body Becomes a Meme by Nadia de Vries

Patenting Bias: Algorithmic Race and Ethnicity Classifications, Proprietary Rights, and Public Data by Tiffany Nichols

Privacy and Paternalism: The Ethics of Student Data Collection by Kathleen Creel and Tara Dixit

Winter 2022

Differential Privacy and the 2020 US Census by Simson Garfinkel

The Puzzle of the Missing Robots by Suzanne Berger and Benjamin Armstrong

Protections for Human Subjects in Research: Old Models, New Needs? by Laura Stark

The Cloud Is Material: On the Environmental Impacts of Computation and Data Storage by Steven Gonzalez Monserrate

Algorithmic Redistricting and Black Representation in US Elections by Zachary Schutzman

Summer 2021

Hacking Technology, Hacking Communities: Codes of Conduct and Community Standards in Open Source by Christina Dunbar-Hester

Understanding Potential Sources of Harm throughout the Machine Learning Life Cycle by Harini Suresh and John Guttag

Identity, Advertising, and Algorithmic Targeting: Or How (Not) to Target Your “Ideal User” by Tanya Kant

Wrestling with Killer Robots: The Benefits and Challenges of Artificial Intelligence for National Security by Erik Lin-Greenberg

Public Debate on Facial Recognition Technologies in China by Tristan G. Brown, Alexander Statman, and Celine Sui

Winter 2021

The Case of the Nosy Neighbors by Johanna Gunawan and Woodrow Hartzog

Who Collects the Data? A Tale of Three Maps by Catherine D’Ignazio and Lauren Klein

The Bias in the Machine: Facial Recognition Technology and Racial Disparities by Sidney Perkowitz

The Dangers of Risk Prediction in the Criminal Justice System by Julia Dressel and Hany Farid

External Assessment: Paper 3

Paper 3 asks a number of questions related to a pre-released case study.

Here is the case study for use in May and November 2024.

Case studies from other years.

The maximum number of marks you can get for Paper 3 is 30. Your Paper 3 score counts for 20% of your final HL grade; see grade boundaries.

Grade Boundaries

The computer science course has a variety of assessment components. Paper 3 is marked using markschemes and markbands and is assigned a numerical mark by the external examiner. Grade boundaries are then applied to determine the overall grade for this component on the 1-7 scale.

These boundaries have no impact on your final grade; however, they may be used to estimate the difficulty of the component.

Higher Level

Computer science

Computer science previously formed a subject in group 5 of the Diploma Programme curriculum but now lies within group 4. As such, it is regarded as a science, alongside biology, chemistry, design technology, physics, environmental systems and societies, and sports, exercise and health science.

 This group change is significant as it means DP students can now select computer science as their group 4 subject rather than having to select it in addition to mathematics as was previously the case. 

The IB computer science course is a rigorous and practical problem-solving discipline, and its curriculum and assessment offer a range of features and benefits.

Learn more about computer science in a DP workshop for teachers . 

Computer science subject brief

Subject briefs are short two-page documents providing an outline of the course. Read the standard level (SL) and/or higher level (HL) subject brief below. 



Computer Science > Computation and Language

Title: Enhancing LLM Factual Accuracy with RAG to Counter Hallucinations: A Case Study on Domain-Specific Queries in Private Knowledge-Bases

Abstract: We propose an end-to-end system design that uses Retrieval Augmented Generation (RAG) to improve the factual accuracy of Large Language Models (LLMs) for domain-specific and time-sensitive queries related to private knowledge bases. Our system integrates a RAG pipeline with upstream dataset processing and downstream performance evaluation. To address the challenge of LLM hallucinations, we fine-tune models on a curated dataset that originates from CMU's extensive resources and is annotated by a teacher model. Our experiments demonstrate the system's effectiveness in generating more accurate answers to domain-specific and time-sensitive inquiries. The results also reveal the limitations of fine-tuning LLMs on small-scale and skewed datasets. This research highlights the potential of RAG systems to augment LLMs with external datasets for improved performance in knowledge-intensive tasks. Our code and models are available on GitHub.
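The retrieve-then-generate pattern the abstract describes can be illustrated with a deliberately minimal sketch. This is not the paper's system: real RAG pipelines use dense embeddings, a vector index, and an LLM, while here retrieval is naive keyword overlap and "generation" only assembles a grounded prompt. All document text and names below are invented for illustration.

```python
# Toy retrieve-then-generate sketch (NOT the paper's implementation).
def retrieve(query, documents, k=2):
    """Return the k documents sharing the most words with the query."""
    query_words = set(query.lower().split())
    def overlap(doc):
        return len(query_words & set(doc.lower().split()))
    return sorted(documents, key=overlap, reverse=True)[:k]

def build_prompt(query, documents, k=2):
    """Assemble what an LLM would receive: retrieved context, then the question."""
    context = "\n".join(retrieve(query, documents, k))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "The CMU LTI program was founded in 1996.",   # hypothetical knowledge-base entry
    "Pittsburgh is known for its many bridges.",  # distractor entry
]
prompt = build_prompt("When was the CMU LTI program founded?", docs, k=1)
```

Grounding the prompt in retrieved text is what lets the generator answer time-sensitive or private-domain questions its weights never saw, which is the effect the paper measures.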


University Case Studies

Mills College, Stanford University, Carnegie Mellon University

  • U.C. Berkeley: Reentering the Pipeline
  • Women Enrollment Committee Report, MIT Dept. of Electrical Engineering and Computer Science

The Women's Science and Engineering Network (WSEN) pairs female undergraduate and graduate students in computer science in mentoring relationships. The Stanford Undergraduate Engineering Handbook states, "networking with a mentor promotes self-confidence and assertiveness; helps students clarify their career goals; teaches them how to interact better with faculty and peers; and offers them the support needed for dealing with the pressures of being in male-dominated fields." The Stanford WSEN brings together undergraduates, graduate students, and faculty through receptions, workshops, panel discussions, quarterly programs, and field trips, as well as a monthly newsletter. Such relationships not only encourage female undergraduate students by giving them graduate-student role models, but also help graduate students feel more respected and accomplished (Pearl, et al. 52).

Incoming Freshmen in CS at CMU: Percentage of Women Students

U.C. Berkeley: Reentering the Pipeline

Over 170 women have participated in the program since its inception in 1984. Women whose undergraduate degrees were in fields like biology, drama, classics, and mathematics have gone on to attain Ph.D.s in computer science. Unfortunately, the passage of Proposition 209 in California has imposed legal constraints on the Computer Science Reentry Program. The program is currently under review. (Humphreys.)


Case Studies: Computer Science


All Computer Science Cases

A Bioinformatic Investigation of a Mysterious Meningoencephalitis

By Sari Matar, Dyan Anore, Basma Galal, Shawn Xiong

Is p53 a Smoking Gun?

By Michèle I. Shuster, Joann Mudge, Meghan Hill, Katelynn James, Gabriella A. DeFrancesco, Maria P. Chadiarakou, Anitha Sundararajan

Computers and Micronutrients

By Winyoo Chowanadisai, Bryant H. Keirns

The Stakeholders of Gorongosa National Park

By Andrea M.-K. Bierema, Sara D. Miller, Claudia E. Vergara

Seq’ing the Cure: Standard Edition

By Heather B. Miller, Sabrina D. Robertson, Melissa C. Srougi

New Tricks for Old Drugs

By Carlos C. Goller, Stefanie H. Chen, Melissa C. Srougi

The Colors that Do Magic

By Ghizlane Bendriss, Ali Chaari, Kuei-Chiu Chen

Retinoblastoma

By Daniel B. Stovall

By Stefanie H. Chen, Carlos C. Goller, Melissa C. Srougi

Fatally Flawed?

By Amy C. Groth

Sphero makes remarkably cool, programmable robots and STEAM-based educational tools that transform the way kids learn, create and invent through coding, science, music, and the arts.


Case Studies & Whitepapers

Our library of insightful case studies and whitepapers is designed with educators in mind. We regularly publish original content to inform and educate on the importance and power of STEM education. Download the topics that interest you most and be the first to learn when new articles are published by signing up for our newsletter!

Sphero & littleBits K-12 STEAM & Computer Science Case Studies

Harlem Children's Zone Case Study

Improved Student Ambition, Resilience and Engagement

Making littleBits a Part of the Curriculum

Sphero & littleBits STEAM & Computer Science Whitepapers

Download our whitepapers to learn more about timely education topics, like why professional development is so critical for teachers or how to get STEM funding for your school or district.


Integrating Play-based Learning in a Hybrid Classroom to Accelerate STEM Education

While we’re all adjusting to working and learning in new ways this year, as an educator you may wonder why it's important to integrate play-based learning in a hybrid classroom or blended classroom. This whitepaper will dive deeper into what play-based learning is, the main benefits of play-based learning, and feature examples of PBL activities and lessons that yield the best outcome for students no matter where they are learning right now.


11 Reasons Why Professional Development is Critical for Teachers

In this whitepaper, we’ll look at professional development for teachers: why it matters, how it benefits not only teachers but their students, and how to discern whether you’re getting the best educational opportunities for your efforts.


7 Misconceptions About Introducing Computer Science to your Classroom

In this whitepaper, we'll dispel some of the most common myths. The goal is to give you a better understanding of why, perhaps now more than ever before, it's not only a good idea but critical to teach your students the basics of computer science in and beyond the classroom. (Hint: Their futures may depend on it.)


Guide to STEM Funding

When it comes to incorporating STEAM into the classroom, support can come from a surprising number of sources, and sometimes all you have to do is ask. That’s why we’ve compiled a list of resources to help drive innovative tech adoption.

Full STEAM Ahead!

Ready to get started with Sphero, littleBits or teaching STEAM and computer science in general? Our team of Education Experts is happy to answer any questions you may have.


Class 12 Computer Science Case Study Questions


You’ve come to the right site if you’re looking for diverse Class 12 Computer Science case study questions. We’ve put together a collection of Class 12 Computer Science case study questions for you on the myCBSEguide app and student dashboard .

As computer science becomes an increasingly popular field of study, more and more students are looking for resources to help them prepare for their exams. myCBSEguide is the only app that provides students with a variety of class 12 computer science case study questions. With over 1,000 questions to choose from, students can get the practice they need to ace their exams.

Significance of Class 12 Computer Science

Why is computer science so important? In a word, it’s because computers are everywhere. They are an integral part of our lives, and they are only going to become more so in the years to come. As such, it is essential that we understand how they work, and how to use them effectively.

Fascinating Subject

Computer science is a fascinating subject and one that can lead to a rewarding career in a variety of industries. So, if you’re considering CBSE class 12, be sure to give computer science a try.

Rapidly Growing Field

Computer science is the study of computational systems, their principles and their applications. It is a rapidly growing field that is constantly evolving, and as such, it is an essential part of any well-rounded education.

Critical Thinking and Problem-solving Skills

In CBSE class 12, computer science provides students with a strong foundation on which to build their future studies and careers. It equips them with the critical thinking and problem-solving skills they need to succeed in an increasingly digital world. Additionally, computer science is a great way to prepare for further study in fields such as engineering, business, and medicine.

Class 12 Computer Science

  • Become familiar with the concept of functions.
  • Become familiar with the creation and use of Python libraries.
  • Become familiar with file management and using the file handling concept.
  • Gain a basic understanding of the concept of efficiency in algorithms and computing.
  • Capability to employ fundamental data structures such as stacks.
  • Learn the fundamentals of computer networks, including the network stack, basic network hardware, basic protocols, and fundamental tools.
  • Learn SQL aggregation functions by connecting a Python programme to a SQL database.
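The last outcome (connecting a Python program to a SQL database and using SQL aggregate functions) can be sketched with the standard library's sqlite3 module. The "marks" table and its sample rows are invented for illustration.

```python
# Minimal sketch: Python + SQL database + aggregate functions.
import sqlite3

con = sqlite3.connect(":memory:")  # throwaway in-memory database
cur = con.cursor()
cur.execute("CREATE TABLE marks (student TEXT, score INTEGER)")
cur.executemany(
    "INSERT INTO marks VALUES (?, ?)",
    [("Asha", 88), ("Ravi", 72), ("Meena", 95)],
)
con.commit()

# COUNT, AVG and MAX are SQL aggregate functions: each collapses
# the whole table into a single value.
cur.execute("SELECT COUNT(*), AVG(score), MAX(score) FROM marks")
count, avg, best = cur.fetchone()
print(count, avg, best)  # 3 85.0 95
con.close()
```

In the CBSE syllabus the database is typically MySQL via a connector library, but the SQL aggregate syntax shown here is the same.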

Case Study Questions in Class 12 Computer Science

There are several reasons why case study questions are included in class 12 computer science.

  • First, class 12 computer science case study questions provide real-world examples of how computer science concepts can be applied in solving real-world problems.
  • Second, they help students develop critical thinking and problem-solving skills.
  • Third, they expose students to different computer science tools and techniques.
  • Finally, case study questions help students understand the importance of collaboration and teamwork in computer science.

Class 12 Computer Science Case Study Questions Examples

The Central Board of Secondary Education (CBSE) has included case study questions in the class 12 computer science paper pattern. This move is in line with the board’s focus on practical and application-based learning. This move by the CBSE will help Class 12 Computer Science students to develop their analytical and problem-solving skills. It will also promote application-based learning, which is essential for Class 12 Computer Science students who want to pursue a career in computer science.

There are many apps out there that provide students with questions for their Class 12 computer science case study questions, but myCBSEguide is the only one that provides a variety of Class 12 Computer Science case study questions. Whether you’re a beginner or an expert, myCBSEguide has the perfect questions for you to practice with. With myCBSEguide, you can be sure that you’re getting the best possible preparation for your Class 12 computer science case studies. Here are a few examples of Class 12 computer science case study questions.

Class 12 Computer Science case study question 1

Be Happy Corporation has set up its new centre at Noida, Uttar Pradesh for its office and web-based activities. It has 4 blocks of buildings.

The distance between the various blocks is as follows:

Numbers of computers in each block

(a) Suggest and draw the cable layout to efficiently connect various blocks of buildings within the Noida centre for connecting the digital devices.

(b) Suggest the placement of the following device with justification

(i) Repeater

(ii)Hub/Switch

Ans: Repeater: between C and D, as the distance between them is 100 m.

Hub/Switch: in each block, as they help to share data packets among the devices of the network in each block.

(c) Which kind of network (PAN/LAN/WAN) will be formed if the Noida office is connected to its head office in Mumbai?

(d) Which fast and very effective wireless transmission medium should preferably be used to connect the head office at Mumbai with the centre at Noida?

Ans: Satellite

Class 12 Computer Science case study question 2

Rohit, a student of class 12, is learning the CSV file module in Python. During an examination, he has been assigned an incomplete Python code (shown below) to create a CSV file ‘Student.csv’ (content shown below). Help him complete the code so that it creates the desired CSV file.

1,AKSHAY,XII,A

2,ABHISHEK,XII,A

3,ARVIND,XII,A

4,RAVI,XII,A

5,ASHISH,XII,A

Incomplete Code

import _____ #Statement-1
fh = open(_____, _____, newline="") #Statement-2
stuwriter = csv._____ #Statement-3
data.append(header)
for i in range(5):
    roll_no = int(input("Enter Roll Number : "))
    name = input("Enter Name : ")
    Class = input("Enter Class : ")
    section = input("Enter Section : ")
    rec = [_____] #Statement-4
    data.append(rec)
stuwriter._____(data) #Statement-5

  • Identify the suitable code for blank space in line marked as Statement-1.
  • a) csv file

Correct Answer : c) csv

  • Identify the missing code for blank space in line marked as Statement-2?
  • a) "School.csv","w"
  • b) "Student.csv","w"
  • c) "Student.csv","r"
  • d) "School.csv","r"

Correct Answer : b) "Student.csv","w"

iii. Choose the function name (with argument) that should be used in the blank space of line marked as Statement-3.

  • a) reader(fh)
  • b) reader(MyFile)
  • c) writer(fh)
  • d) writer(MyFile)

Correct Answer : c) writer(fh)

  • Identify the suitable code for blank space in line marked as Statement-4.
  • a) ‘ROLL_NO’, ‘NAME’, ‘CLASS’, ‘SECTION’
  • b) ROLL_NO, NAME, CLASS, SECTION
  • c) ‘roll_no’,’name’,’Class’,’section’
  • d) roll_no, name, Class, section

Correct Answer : d) roll_no,name,Class,section

  • Choose the function name that should be used in the blank space of line marked as Statement-5 to create the desired CSV file.

  • c) writerows()
  • d) writerow()

Correct Answer : c) writerows()
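For reference, here is one completed, runnable version of the exam fragment using the answers above (csv, "Student.csv", "w", writer(fh), writerows). The only deviation from the exam version: the interactive input() calls are replaced by hard-coded records so the script runs unattended.

```python
import csv

header = ["ROLL_NO", "NAME", "CLASS", "SECTION"]
records = [
    [1, "AKSHAY", "XII", "A"],
    [2, "ABHISHEK", "XII", "A"],
    [3, "ARVIND", "XII", "A"],
    [4, "RAVI", "XII", "A"],
    [5, "ASHISH", "XII", "A"],
]

# newline="" stops the csv module writing blank lines between rows on Windows
with open("Student.csv", "w", newline="") as fh:  # Statement-2
    stuwriter = csv.writer(fh)                    # Statement-3
    data = []
    data.append(header)
    for rec in records:       # stands in for the five input() prompts
        data.append(rec)      # rec = [roll_no, name, Class, section]  (Statement-4)
    stuwriter.writerows(data) # Statement-5
```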

Class 12 Computer Science case study question 3

Krrishnav is looking for his dream job but has some restrictions. He loves Delhi and would take a job there if he is paid over Rs. 40,000 a month. He hates Chennai and demands at least Rs. 1,00,000 to work there. In any other location he is willing to work for Rs. 60,000 a month. The following code shows his basic strategy for evaluating a job offer.

pay = _________
location = _________
if location == "Mumbai":
    print("I'll take it!")  #Statement 1
elif location == "Chennai":
    if pay < 100000:
        print("No way")  #Statement 2
    else:
        print("I am willing!")  #Statement 3
elif location == "Delhi" and pay > 40000:
    print("I am happy to join")  #Statement 4
elif pay > 60000:
    print("I accept the offer")  #Statement 5
else:
    print("No thanks, I can find something better")  #Statement 6

On the basis of the above code, choose the right statement which will be executed when different inputs for pay and location are given.

i. Input: location = "Chennai", pay = 50000

Correct Answer: Statement 2

ii. Input: location = "Surat", pay = 50000

Correct Answer: Statement 6

iii. Input: location = "Any Other City", pay = 1

iv. Input: location = "Delhi", pay = 500000

Correct Answer: Statement 4

v. Input: location = "Lucknow", pay = 65000

Correct Answer: Statement 5
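The branching strategy above can be checked by wrapping it in a function that returns the message instead of printing it. This is a sketch with the indentation and else branches restored as the answer key implies, not code from the question paper.

```python
def evaluate_offer(location, pay):
    """Return the message Krrishnav's strategy would print for this offer."""
    if location == "Mumbai":
        return "I'll take it!"          # Statement 1
    elif location == "Chennai":
        if pay < 100000:
            return "No way"             # Statement 2
        else:
            return "I am willing!"      # Statement 3
    elif location == "Delhi" and pay > 40000:
        return "I am happy to join"     # Statement 4
    elif pay > 60000:
        return "I accept the offer"     # Statement 5
    else:
        return "No thanks, I can find something better"  # Statement 6

print(evaluate_offer("Lucknow", 65000))  # I accept the offer
```

Tracing each input through the function reproduces the correct answers listed above.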

Class 12 computer science case study examples provided above will help you to gain a better understanding. By working through the variety of Class 12 computer science case study examples, you will be able to see how the various concepts and techniques are applied in practice. This will give you a much better grasp of the material, and will enable you to apply the concepts to new problems.

myCBSEguide: A step towards success

myCBSEguide app is a one-stop solution for all your CBSE-related needs. It provides you with access to a wide range of study material, including sample papers, previous year papers, case study questions and mock tests. With the myCBSEguide app, you can also get personalized help and advice from our team of experts. So, what are you waiting for? Download the myCBSEguide app today and take a step towards success.




Case School of Engineering


Looking to expand your options with a minor in Computer Science or AI? Explore options and minor requirements in the university's General Bulletin.

Student Opportunities

From research to student groups to career development, explore all the resources available to you.


Gain real experience with a full-time, paid co-op

BS in Computer Science


In our undergraduate program in Computer Science, we’re looking for problem-solvers who relish the challenge of using computers to find solutions to difficult issues. 

No prior experience with programming?  No problem!  

Our program welcomes all, and you will not be alone. A large number of our majors come to Case Western Reserve having never written a computer program.

We have the educational expertise to get you up to speed, quickly.

Our Bachelor of Science in Computer Science is an ABET accredited degree program designed for students who want a deep dive into computer science. In this program, you will master the fundamentals of computer science, explore the breadth of computing, and choose one of six areas of specialization for further in-depth study: 

  • Algorithms and Theory
  • Artificial Intelligence
  • Bioinformatics
  • Computer Systems, Networks and Security
  • Databases and Data Mining
  • Software Engineering

Not sure which computer science degree is right for you?

The Bachelor of Science and the Bachelor of Arts programs provide the same foundational courses in computer science. The difference is in the elective courses you choose.  

Students pursuing the Bachelor of Science degree take a majority of their courses in computer science, mathematics, engineering and the natural sciences. Students pursuing the Bachelor of Arts degree have fewer required computer science courses and at the same time, many open electives, allowing students to easily explore other interests in fields outside of computer science or engineering.

Still not sure? At Case Western Reserve University, you can change your program at any time.

Both our degree programs provide a lot of flexibility so wherever you are on the spectrum between wanting to focus on computer science versus wanting a broad liberal arts education, our faculty advisors will help you craft the education that fits you.  

Ready to start engineering your future at Case Western Reserve? Learn more about how to apply.

Find your geographic-specific admissions counselor here, or contact [email protected].

Meet the faculty members who will be your teachers and mentors.

Being a part of the Case School of Engineering means challenging your limits inside and outside of the classroom, and getting leading-edge education and experience. For example: Our “Introduction to Connected Devices” course, jointly offered to Case Western Reserve and Cleveland State University students as part of the partnership between the two universities via the IoT Collaborative, gives students the chance to cover the full spectrum of work of a multidisciplinary team at a real-world software firm.

Looking to have your work published? You can easily team up with other departments and industry professionals in our research facilities to create work that can be published, showcased or presented at conferences. 

Have an idea you aren’t sure how to execute—or don’t have a clue where to begin? That’s where we thrive. 

You’ll study and learn from peers and professors who can guide you toward solutions—and support you no matter what.

Get Hands-On Experience

Want to make the most of your time as a CWRU student? Be proactive and take part in the programs, volunteer opportunities, and competitions Case Western Reserve has to offer. 

Our Cooperative Education Program allows you to pursue a unique paid experience relevant to your course of study. Most of our students participate in summer internships and/or research experiences. 

You can also get involved in Hackathon, or teach local middle school and high school girls programming through Girls Who Code.

The Bachelor of Science degree program in Computer Science is accredited by the Computing Accreditation Commission of ABET.

Explore degree requirements, courses and more in the university’s General Bulletin.

Degree FAQs

Visit the Office of Undergraduate Admissions to apply and learn more about admissions requirements.

When you’re ready, the Office of Undergraduate Studies will guide you through the process.

Visit the university's General Bulletin for specific course requirements.

A generative AI reset: Rewiring to turn potential into value in 2024

It’s time for a generative AI (gen AI) reset. The initial enthusiasm and flurry of activity in 2023 are giving way to second thoughts and recalibrations as companies realize that capturing gen AI’s enormous potential value is harder than expected.

With 2024 shaping up to be the year for gen AI to prove its value, companies should keep in mind the hard lessons learned with digital and AI transformations: competitive advantage comes from building organizational and technological capabilities to broadly innovate, deploy, and improve solutions at scale—in effect, rewiring the business for distributed digital and AI innovation.

About QuantumBlack, AI by McKinsey

QuantumBlack, McKinsey’s AI arm, helps companies transform using the power of technology, technical expertise, and industry experts. With thousands of practitioners at QuantumBlack (data engineers, data scientists, product managers, designers, and software engineers) and McKinsey (industry and domain experts), we are working to solve the world’s most important AI challenges. QuantumBlack Labs is our center of technology development and client innovation, which has been driving cutting-edge advancements and developments in AI through locations across the globe.

Companies looking to score early wins with gen AI should move quickly. But those hoping that gen AI offers a shortcut past the tough—and necessary—organizational surgery are likely to meet with disappointing results. Launching pilots is (relatively) easy; getting pilots to scale and create meaningful value is hard, because doing so requires a broad set of changes to the way work actually gets done.

Let’s briefly look at what this has meant for one Pacific region telecommunications company. The company hired a chief data and AI officer with a mandate to “enable the organization to create value with data and AI.” The chief data and AI officer worked with the business to develop the strategic vision and implement the road map for the use cases. After a scan of domains (that is, customer journeys or functions) and use case opportunities across the enterprise, leadership prioritized the home-servicing/maintenance domain to pilot and then scale as part of a larger sequencing of initiatives. They targeted, in particular, the development of a gen AI tool to help dispatchers and service operators better predict the types of calls and parts needed when servicing homes.

Leadership put in place cross-functional product teams with shared objectives and incentives to build the gen AI tool. As part of an effort to upskill the entire enterprise to better work with data and gen AI tools, they also set up a data and AI academy, which the dispatchers and service operators enrolled in as part of their training. To provide the technology and data underpinnings for gen AI, the chief data and AI officer also selected a large language model (LLM) and cloud provider that could meet the needs of the domain as well as serve other parts of the enterprise. The chief data and AI officer also oversaw the implementation of a data architecture so that the clean and reliable data (including service histories and inventory databases) needed to build the gen AI tool could be delivered quickly and responsibly.

Our book Rewired: The McKinsey Guide to Outcompeting in the Age of Digital and AI (Wiley, June 2023) provides a detailed manual on the six capabilities needed to deliver the kind of broad change that harnesses digital and AI technology. In this article, we will explore how to extend each of those capabilities to implement a successful gen AI program at scale. While recognizing that these are still early days and that there is much more to learn, our experience has shown that breaking open the gen AI opportunity requires companies to rewire how they work in the following ways.

Figure out where gen AI copilots can give you a real competitive advantage

The broad excitement around gen AI and its relative ease of use have led to a burst of experimentation across organizations. Most of these initiatives, however, won’t generate a competitive advantage. One bank, for example, bought tens of thousands of GitHub Copilot licenses, but since it didn’t have a clear sense of how to work with the technology, progress was slow. Another unfocused effort we often see is when companies move to incorporate gen AI into their customer service capabilities. Customer service is a commodity capability, not part of the core business, for most companies. While gen AI might help with productivity in such cases, it won’t create a competitive advantage.

To create competitive advantage, companies should first understand the difference between being a “taker” (a user of available tools, often via APIs and subscription services), a “shaper” (an integrator of available models with proprietary data), and a “maker” (a builder of LLMs). For now, the maker approach is too expensive for most companies, so the sweet spot for businesses is implementing a taker model for productivity improvements while building shaper applications for competitive advantage.
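A minimal sketch may make the taker/shaper distinction concrete. Everything below is illustrative only: `llm_complete` is a stub standing in for a vendor’s hosted model API, and the keyword retrieval is a toy stand-in for a real vector database; the document store and all names are invented for the example.

```python
def llm_complete(prompt: str) -> str:
    """Stand-in for a hosted LLM API call (the 'taker' building block)."""
    return f"[model answer grounded in: {prompt[:60]}...]"

# --- Taker: use the hosted model as-is, via its API ----------------
def taker_answer(question: str) -> str:
    return llm_complete(question)

# --- Shaper: ground the same hosted model in proprietary data ------
PROPRIETARY_DOCS = {
    "returns-policy": "Customers may return items within 30 days.",
    "service-history": "Unit 42 was last serviced in January.",
}

def retrieve(question: str) -> list[str]:
    """Naive keyword retrieval over the company's own documents."""
    words = set(question.lower().split())
    return [text for text in PROPRIETARY_DOCS.values()
            if words & set(text.lower().split())]

def shaper_answer(question: str) -> str:
    context = "\n".join(retrieve(question))
    return llm_complete(f"Context:\n{context}\n\nQuestion: {question}")

print(taker_answer("When was unit 42 serviced?"))
print(shaper_answer("When was unit 42 serviced?"))
```

The shaper path is the same model call as the taker path; the competitive advantage comes entirely from the proprietary context stitched into the prompt.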

Much of gen AI’s near-term value is closely tied to its ability to help people do their current jobs better. In this way, gen AI tools act as copilots that work side by side with an employee, creating an initial block of code that a developer can adapt, for example, or drafting a requisition order for a new part that a maintenance worker in the field can review and submit (see sidebar “Copilot examples across three generative AI archetypes”). This means companies should be focusing on where copilot technology can have the biggest impact on their priority programs.

Copilot examples across three generative AI archetypes

  • “Taker” copilots help real estate customers sift through property options and find the most promising one, write code for a developer, and summarize investor transcripts.
  • “Shaper” copilots provide recommendations to sales reps for upselling customers by connecting generative AI tools to customer relationship management systems, financial systems, and customer behavior histories; create virtual assistants to personalize treatments for patients; and recommend solutions for maintenance workers based on historical data.
  • “Maker” copilots are foundation models that lab scientists at pharmaceutical companies can use to find and test new and better drugs more quickly.

Some industrial companies, for example, have identified maintenance as a critical domain for their business. Reviewing maintenance reports and spending time with workers on the front lines can help determine where a gen AI copilot could make a big difference, such as in identifying issues with equipment failures quickly and early on. A gen AI copilot can also help identify root causes of truck breakdowns and recommend resolutions much more quickly than usual, as well as act as an ongoing source for best practices or standard operating procedures.

The challenge with copilots is figuring out how to generate revenue from increased productivity. In the case of customer service centers, for example, companies can stop recruiting new agents and use attrition to potentially achieve real financial gains. Defining the plans for how to generate revenue from the increased productivity up front, therefore, is crucial to capturing the value.

Upskill the talent you have but be clear about the gen-AI-specific skills you need

By now, most companies have a decent understanding of the technical gen AI skills they need, such as model fine-tuning, vector database administration, prompt engineering, and context engineering. In many cases, these are skills that you can train your existing workforce to develop. Those with existing AI and machine learning (ML) capabilities have a strong head start. Data engineers, for example, can learn multimodal processing and vector database management, MLOps (ML operations) engineers can extend their skills to LLMOps (LLM operations), and data scientists can develop prompt engineering, bias detection, and fine-tuning skills.

A sample of new generative AI skills needed

The following are examples of new skills needed for the successful deployment of generative AI tools:

  • Data scientist:
    • prompt engineering
    • in-context learning
    • bias detection
    • pattern identification
    • reinforcement learning from human feedback
    • hyperparameter/large language model fine-tuning; transfer learning
  • Data engineer:
    • data wrangling and data warehousing
    • data pipeline construction
    • multimodal processing
    • vector database management
The learning process can take two to three months to get to a decent level of competence because of the complexities in learning what various LLMs can and can’t do and how best to use them. The coders need to gain experience building software, testing, and validating answers, for example. It took one financial-services company three months to train its best data scientists to a high level of competence. While courses and documentation are available—many LLM providers have boot camps for developers—we have found that the most effective way to build capabilities at scale is through apprenticeship, training people to then train others, and building communities of practitioners. Rotating experts through teams to train others, scheduling regular sessions for people to share learnings, and hosting biweekly documentation review sessions are practices that have proven successful in building communities of practitioners (see sidebar “A sample of new generative AI skills needed”).

It’s important to bear in mind that successful gen AI skills are about more than coding proficiency. Our experience in developing our own gen AI platform, Lilli, showed us that the best gen AI technical talent combines several skills: design skills to uncover where to focus solutions; contextual understanding to ensure the most relevant and high-quality answers are generated; collaboration skills to work well with knowledge experts (to test and validate answers and develop an appropriate curation approach); strong forensic skills to figure out the causes of breakdowns (is the issue the data, the interpretation of the user’s intent, the quality of metadata on embeddings, or something else?); and anticipation skills to conceive of and plan for possible outcomes and to put the right kind of tracking into their code. A pure coder who doesn’t intrinsically have these skills may not be as useful a team member.

While current upskilling is largely based on a “learn on the job” approach, we see a rapid market emerging for people who have learned these skills over the past year. That skill growth is moving quickly. GitHub reported that developers were working on gen AI projects “in big numbers,” and that 65,000 public gen AI projects were created on its platform in 2023—a jump of almost 250 percent over the previous year. If your company is just starting its gen AI journey, you could consider hiring two or three senior engineers who have built a gen AI shaper product for their companies. This could greatly accelerate your efforts.

Form a centralized team to establish standards that enable responsible scaling

To ensure that all parts of the business can scale gen AI capabilities, centralizing competencies is a natural first move. The critical focus for this central team will be to develop and put in place protocols and standards to support scale, ensuring that teams can access models while also minimizing risk and containing costs. The team’s work could include, for example, procuring models and prescribing ways to access them, developing standards for data readiness, setting up approved prompt libraries, and allocating resources.

While developing Lilli, our team had its mind on scale when it created an open plug-in architecture and set standards for how APIs should function and be built. They developed standardized tooling and infrastructure: a GPT LLM that teams could securely experiment with and access, a gateway with preapproved APIs that teams could access, and a self-serve developer portal. Our goal is that this approach, over time, can help shift “Lilli as a product” (that a handful of teams use to build specific solutions) to “Lilli as a platform” (that teams across the enterprise can access to build other products).
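A central gateway of this kind can be sketched in a few lines. Everything below is hypothetical (the `ModelGateway` class, the model name, the stub backend); a real gateway would also handle authentication, quotas, and cost attribution.

```python
from typing import Callable

class ModelGateway:
    """Routes requests only to preapproved models and records usage."""

    def __init__(self, approved: dict[str, Callable[[str], str]]):
        self._approved = approved          # model name -> backend callable
        self.usage: dict[str, int] = {}    # simple per-model call counter

    def complete(self, model: str, prompt: str) -> str:
        if model not in self._approved:
            raise PermissionError(f"{model} is not an approved model")
        self.usage[model] = self.usage.get(model, 0) + 1
        return self._approved[model](prompt)

# Stub backend standing in for a hosted LLM behind the gateway.
def stub_llm(prompt: str) -> str:
    return f"answer({len(prompt)} chars)"

gateway = ModelGateway(approved={"gpt-internal": stub_llm})
print(gateway.complete("gpt-internal", "Summarize the Q3 report."))
```

Because every call flows through one choke point, the central team can swap backends, enforce standards, and contain costs without touching the consuming teams’ code.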

For teams developing gen AI solutions, squad composition will be similar to that of AI teams, but with data engineers and data scientists who have gen AI experience and more contributors from risk management, compliance, and legal functions. The general idea of staffing squads with resources federated from the different expertise areas will not change, but the skill composition of a gen-AI-intensive squad will.

Set up the technology architecture to scale

Building a gen AI model is often relatively straightforward, but making it fully operational at scale is a different matter entirely. We’ve seen engineers build a basic chatbot in a week, but releasing a stable, accurate, and compliant version that scales can take four months. That’s why, our experience shows, the actual model costs may be less than 10 to 15 percent of the total costs of the solution.

Building for scale doesn’t mean building a new technology architecture. But it does mean focusing on a few core decisions that simplify and speed up processes without breaking the bank. Three such decisions stand out:

  • Focus on reusing your technology. Reusing code can increase the development speed of gen AI use cases by 30 to 50 percent. One good approach is simply creating a source for approved tools, code, and components. A financial-services company, for example, created a library of production-grade tools, approved by both the security and legal teams, and made it available for teams to use. More important is taking the time to identify and build the capabilities that are common across the highest-priority use cases. The same financial-services company identified three components that could be reused for more than 100 identified use cases. By building those first, it was able to generate a significant portion of the code base for all the identified use cases, essentially giving every application a big head start.
  • Focus the architecture on enabling efficient connections between gen AI models and internal systems. For gen AI models to work effectively in the shaper archetype, they need access to a business’s data and applications. Advances in integration and orchestration frameworks have significantly reduced the effort required to make those connections. But laying out what those integrations are and how to enable them is critical to ensure these models work efficiently and to avoid the complexity that creates technical debt  (the “tax” a company pays in terms of time and resources needed to redress existing technology issues). Chief information officers and chief technology officers can define reference architectures and integration standards for their organizations. Key elements should include a model hub, which contains trained and approved models that can be provisioned on demand; standard APIs that act as bridges connecting gen AI models to applications or data; and context management and caching, which speed up processing by providing models with relevant information from enterprise data sources.
  • Build up your testing and quality assurance capabilities. Our own experience building Lilli taught us to prioritize testing over development. Our team invested in not only developing testing protocols for each stage of development but also aligning the entire team so that, for example, it was clear who specifically needed to sign off on each stage of the process. This slowed down initial development but sped up the overall delivery pace and quality by cutting back on errors and the time needed to fix mistakes.
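The context-management-and-caching element described above can be illustrated with a toy response cache: identical requests, after whitespace and case normalization, skip the expensive model call. The class and backend names are invented for the sketch; production systems would also handle expiry and semantic (embedding-based) matching.

```python
import hashlib

class CachingClient:
    def __init__(self, backend):
        self.backend = backend
        self.cache: dict[str, str] = {}
        self.calls = 0  # how often the real backend was hit

    @staticmethod
    def _key(model: str, prompt: str) -> str:
        # Normalize whitespace and case so trivially different prompts share a key.
        normalized = " ".join(prompt.split()).lower()
        return hashlib.sha256(f"{model}|{normalized}".encode()).hexdigest()

    def complete(self, model: str, prompt: str) -> str:
        key = self._key(model, prompt)
        if key not in self.cache:
            self.calls += 1
            self.cache[key] = self.backend(prompt)
        return self.cache[key]

def slow_model(prompt: str) -> str:
    return prompt.upper()  # stand-in for an expensive LLM call

client = CachingClient(slow_model)
client.complete("m1", "What is our returns policy?")
client.complete("m1", "what is  our returns policy?")  # normalized cache hit
print(client.calls)  # backend was called only once
```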

Ensure data quality and focus on unstructured data to fuel your models

The ability of a business to generate and scale value from gen AI models will depend on how well it takes advantage of its own data. As with technology, targeted upgrades to existing data architecture are needed to maximize the future strategic benefits of gen AI:

  • Be targeted in ramping up your data quality and data augmentation efforts. While data quality has always been an important issue, the scale and scope of data that gen AI models can use—especially unstructured data—has made this issue much more consequential. For this reason, it’s critical to get the data foundations right, from clarifying decision rights to defining clear data processes to establishing taxonomies so models can access the data they need. The companies that do this well tie their data quality and augmentation efforts to the specific AI/gen AI application and use case—you don’t need this data foundation to extend to every corner of the enterprise. This could mean, for example, developing a new data repository for all equipment specifications and reported issues to better support maintenance copilot applications.
  • Understand what value is locked into your unstructured data. Most organizations have traditionally focused their data efforts on structured data (values that can be organized in tables, such as prices and features). But the real value from LLMs comes from their ability to work with unstructured data (for example, PowerPoint slides, videos, and text). Companies can map out which unstructured data sources are most valuable and establish metadata tagging standards so models can process the data and teams can find what they need (tagging is particularly important to help companies remove data from models as well, if necessary). Be creative in thinking about data opportunities. Some companies, for example, are interviewing senior employees as they retire and feeding that captured institutional knowledge into an LLM to help improve their copilot performance.
  • Optimize to lower costs at scale. There is often as much as a tenfold difference between what companies pay for data and what they could be paying if they optimized their data infrastructure and underlying costs. This issue often stems from companies scaling their proofs of concept without optimizing their data approach. Two costs generally stand out. One is storage costs arising from companies uploading terabytes of data into the cloud and wanting that data available 24/7. In practice, companies rarely need more than 10 percent of their data to have that level of availability, and accessing the rest over a 24- or 48-hour period is a much cheaper option. The other costs relate to computation with models that require on-call access to thousands of processors to run. This is especially the case when companies are building their own models (the maker archetype) but also when they are using pretrained models and running them with their own data and use cases (the shaper archetype). Companies could take a close look at how they can optimize computation costs on cloud platforms—for instance, putting some models in a queue to run when processors aren’t being used (such as when Americans go to bed and consumption of computing services like Netflix decreases) is a much cheaper option.
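The metadata-tagging idea above can be pictured as a small document store in which every unstructured item carries `source` and `doc_type` tags, so teams can both find data for models and remove it later (for example, after a deletion request). All names and fields below are illustrative, not a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class TaggedDoc:
    doc_id: str
    text: str
    source: str        # e.g. "retiree-interviews", "equipment-manuals"
    doc_type: str      # e.g. "transcript", "slide", "video-caption"
    tags: set[str] = field(default_factory=set)

class DocStore:
    def __init__(self):
        self._docs: dict[str, TaggedDoc] = {}

    def add(self, doc: TaggedDoc) -> None:
        self._docs[doc.doc_id] = doc

    def find(self, **criteria) -> list[TaggedDoc]:
        """Return documents whose tagged fields match all given criteria."""
        return [d for d in self._docs.values()
                if all(getattr(d, k) == v for k, v in criteria.items())]

    def purge_source(self, source: str) -> int:
        """Remove every document from one source; returns how many were removed."""
        doomed = [i for i, d in self._docs.items() if d.source == source]
        for i in doomed:
            del self._docs[i]
        return len(doomed)

store = DocStore()
store.add(TaggedDoc("d1", "Pump P-7 fails at high load.", "equipment-manuals", "manual"))
store.add(TaggedDoc("d2", "How I debug the kiln...", "retiree-interviews", "transcript"))
print(len(store.find(source="equipment-manuals")))  # 1
print(store.purge_source("retiree-interviews"))     # 1 document removed
```

The `purge_source` method is the point the article makes about tagging: without consistent metadata, selectively removing data from a model’s reach is nearly impossible.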

Build trust and reusability to drive adoption and scale

Because many people have concerns about gen AI, the bar on explaining how these tools work is much higher than for most solutions. People who use the tools want to know how they work, not just what they do. So it’s important to invest extra time and money to build trust by ensuring model accuracy and making it easy to check answers.

One insurance company, for example, created a gen AI tool to help manage claims. As part of the tool, it listed all the guardrails that had been put in place, and for each answer provided a link to the sentence or page of the relevant policy documents. The company also used an LLM to generate many variations of the same question to ensure answer consistency. These steps, among others, were critical to helping end users build trust in the tool.
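The insurance company’s consistency check can be sketched as follows. The `answer` function is a stub standing in for the claims tool; in practice the question variants would themselves be generated by an LLM, as described above, and disagreement would trigger human review.

```python
from collections import Counter

def answer(question: str) -> str:
    """Stand-in for the claims tool; a real check would call the LLM."""
    return "Covered up to $500 per incident."

def consistency(variants: list[str], threshold: float = 0.8) -> tuple[bool, float]:
    """Ask every variant, then score how often the modal answer appears."""
    answers = [answer(q) for q in variants]
    most_common_count = Counter(answers).most_common(1)[0][1]
    agreement = most_common_count / len(answers)
    return agreement >= threshold, agreement

variants = [
    "Is water damage covered?",
    "Does the policy cover damage from water?",
    "Am I covered if water damages my home?",
]
ok, score = consistency(variants)
print(ok, score)  # True 1.0
```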

Part of the training for maintenance teams using a gen AI tool should be to help them understand the limitations of models and how best to get the right answers. That includes teaching workers strategies to get to the best answer as fast as possible, starting with broad questions and then narrowing them down. This provides the model with more context, and it also helps remove any bias from people who might think they know the answer already. Having model interfaces that look and feel the same as existing tools also helps users feel less pressured to learn something new each time a new application is introduced.

Getting to scale means that businesses will need to stop building one-off solutions that are hard to use for other similar use cases. One global energy and materials company, for example, has established ease of reuse as a key requirement for all gen AI models, and has found in early iterations that 50 to 60 percent of its components can be reused. This means setting standards for developing gen AI assets (for example, prompts and context) that can be easily reused for other cases.

While many of the risk issues relating to gen AI are evolutions of discussions that were already brewing—for instance, data privacy, security, bias risk, job displacement, and intellectual property protection—gen AI has greatly expanded that risk landscape. Just 21 percent of companies reporting AI adoption say they have established policies governing employees’ use of gen AI technologies.

Similarly, a set of tests for AI/gen AI solutions should be established to demonstrate that data privacy, debiasing, and intellectual property protection are respected. Some organizations, in fact, are proposing to release models accompanied with documentation that details their performance characteristics. Documenting your decisions and rationales can be particularly helpful in conversations with regulators.

In some ways, this article is premature—so much is changing that we’ll likely have a profoundly different understanding of gen AI and its capabilities in a year’s time. But the core truths of finding value and driving change will still apply. How well companies have learned those lessons may largely determine how successful they’ll be in capturing that value.

Eric Lamarre

The authors wish to thank Michael Chui, Juan Couto, Ben Ellencweig, Josh Gartner, Bryce Hall, Holger Harreis, Phil Hudelson, Suzana Iacob, Sid Kamath, Neerav Kingsland, Kitti Lakner, Robert Levin, Matej Macak, Lapo Mori, Alex Peluffo, Aldo Rosales, Erik Roth, Abdul Wahab Shaikh, and Stephen Xu for their contributions to this article.

This article was edited by Barr Seitz, an editorial director in the New York office.

Cyberattack Paralyzes the Largest U.S. Health Care Payment System

The hacking shut down the nation’s biggest health care payment system, causing financial chaos that affected a broad spectrum ranging from large hospitals to single-doctor practices.

By Reed Abelson and Julie Creswell

An urgent care chain in Ohio may be forced to stop paying rent and other bills to cover salaries. In Florida, a cancer center is racing to find money for chemotherapy drugs to avoid delaying critical treatments for its patients. And in Pennsylvania, a primary care doctor is slashing expenses and pooling all of her cash — including her personal bank stash — in the hopes of staying afloat for the next two months.

These are just a few examples of the severe cash squeeze facing medical care providers — from large hospital networks to the smallest of clinics — in the aftermath of a cyberattack two weeks ago that paralyzed the largest billing and payment system in the country. The attack forced the shutdown of parts of the electronic system operated by Change Healthcare, a sizable unit of UnitedHealth Group, leaving hundreds, if not thousands, of providers unable to obtain insurance approval for services ranging from a drug prescription to a mastectomy — or to be paid for those services.

In recent days, the chaotic nature of this sprawling breakdown in daily, often invisible transactions led top lawmakers, powerful hospital industry executives and patient groups to pressure the U.S. government for relief. On Tuesday, the Health and Human Services Department announced that it would take steps to try to alleviate the financial pressures on some of those affected: Hospitals and doctors who receive Medicare reimbursements would mainly benefit from the new measures.

U.S. health officials said they would allow providers to apply to Medicare for accelerated payments, similar to the advanced funding made available during the pandemic, to tide them over. They also urged health insurers to waive or relax the much-criticized rules imposing prior authorization that have become impediments to receiving care. And they recommended that insurers offering private Medicare plans also supply advanced funding.

H.H.S. said it was trying to coordinate efforts to avoid disruptions, but it remained unclear whether these initial government efforts would bridge the gaps left by the still-offline mega-operations of Change Healthcare, which acts as a digital clearinghouse linking doctors, hospitals and pharmacies to insurers. It handles as many as one of every three patient records in the country.

The hospital industry was critical of the response, describing the measures as inadequate.

Beyond the news of the damage caused by another health care cyberattack, the shutdown of parts of Change Healthcare cast renewed attention on the consolidation of medical companies, doctors’ groups and other entities under UnitedHealth Group. The acquisition of Change by United in a $13 billion deal in 2022 was initially challenged by federal prosecutors but went through after the government lost its case.

So far, United has not provided any timetable for reconnecting this critical network. “Patient care is our top priority, and we have multiple workarounds to ensure people have access to the medications and the care they need,” United said in an update on its website.

But on March 1, a bitcoin address connected to the alleged hackers, a group known as AlphV or BlackCat, received a $22 million transaction that some security firms say was probably a ransom payment made by United to the group, according to a news article in Wired. United declined to comment, as did the security firm that initially spotted the payment.

Still, the prolonged effects of the attack have once again exposed the vast interconnected webs of electronic health information and the vulnerability of patient data. Change handles some 15 billion transactions a year.

The shutdown of some of Change’s operations has severed its digital role connecting providers with insurers in submitting bills and receiving payments. That has delayed tens of millions of dollars in insurance payments to providers. Pharmacies were initially unable to fill many patients’ medications because they could not verify their insurance, and providers have amassed large sums of unpaid claims in the two weeks since the cyberattack occurred.

“It absolutely highlights the fragility of our health care system,” said Ryan S. Higgins, a lawyer for McDermott Will & Emery who advises health care organizations on cybersecurity. The same entity said to be responsible for the 2021 cyberattack on Colonial Pipeline, a pipeline from Texas to New York that carried 45 percent of the East Coast’s fuel supplies, is thought to be behind the Change assault. “They have historically targeted critical infrastructure,” he said.

In the initial days after the attack on Feb. 21, pharmacies were the first to struggle with filling prescriptions when they could not verify a person’s insurance coverage. In some cases, patients could not get medicine or vaccinations unless they paid in cash. But pharmacies have apparently resolved these snags by turning to other companies or developing workarounds.

“Almost two weeks in now, the operational crisis is done and is pretty much over,” said Patrick Berryman, a senior vice president for the National Community Pharmacists Association.

But with the shutdown growing longer, doctors, hospitals and other providers are wrestling with paying expenses because the steady revenue streams from private insurers, Medicare and Medicaid are simply not flowing in.

Arlington Urgent Care, a chain of five urgent care centers around Columbus, Ohio, has about $650,000 in unpaid insurance reimbursements. Worried about cash, the chain’s owners are weighing how to pay bills — including rent and other expenses. They’ve taken lines of credit from banks and used their personal savings to set aside enough money to pay employees for about two months, said Molly Fulton, the chief operating officer.

“This is worse than when Covid hit because even though we didn’t get paid for a while then either, at least we knew there was going to be a fix,” Ms. Fulton said. “Here, there is just no end in sight. I have no idea when Change is going to come back up.”

The hospital industry has labeled the infiltration of Change “the most significant cyberattack on the U.S. health care system in American history,” and urged the federal government and United to provide emergency funding. The American Hospital Association, a trade group, has been sharply critical of United’s efforts so far and the latest initiative that offered a loan program.

“It falls far short of plugging the gaping holes in funding,” Richard J. Pollack, the trade group’s president, said on Monday in a letter to Dirk McMahon, the president of United.

“We need real solutions — not programs that sound good when they are announced but are fundamentally inadequate when you read the fine print,” Mr. Pollack said.

The loan program has not been well received by providers around the country.

Diana Holmes, a therapist in Attleboro, Mass., received an offer from Optum to lend her $20 a week when she says she has been unable to submit roughly $4,000 in claims for her work since Feb. 21. “It’s not like we have reserves,” she said.

She says there has been virtually no communication from Change or the main insurer for her patients, Blue Cross of Massachusetts. “It’s just been maddening,” she said. She has been forced to find a new payment clearinghouse with an upfront fee and a year’s contract. “You’ve had to pivot quickly with no information,” she said.

Blue Cross said it was working with providers to find different workarounds.

Florida Cancer Specialists and Research Institute in Gainesville resorted to new contracts with two competing clearinghouses because it spends $300 million a month on chemotherapy and other drugs for patients whose treatments cannot be delayed.

“We don’t have that sort of money sitting around in a bank,” said Dr. Lucio Gordan, the institute’s president. “We’re not sure how we’re going to retrieve or collect the double expenses we’re going to have by having multiple clearinghouses.”

Dr. Christine Meyer, who owns and operates a primary care practice with 20 clinicians in Exton, Pa., west of Philadelphia, has piled “hundreds and hundreds” of pages of Medicare claims in a FedEx box and sent them to the agency. Dr. Meyer said she was weighing how to conserve cash by cutting expenses, such as possibly reducing the supply of vaccines the clinic has on hand. She said if she pulled together all of her cash and her line of credit, her practice could survive for about two and a half months.

Through Optum’s temporary funding assistance program, Dr. Meyer said she received a loan of $4,000, compared with the roughly half-million dollars she typically submits through Change. “That is less than 1 percent of my monthly claims and, adding insult to injury, the notice came with this big red font that said, you have to pay all of this back when this is resolved,” Dr. Meyer said. “It is all a joke.”

The hospital industry has been pushing Medicare officials and lawmakers to address the situation by freeing up cash to hospitals. Senator Chuck Schumer, Democrat of New York and the chamber’s majority leader, wrote a letter on Friday, urging federal health officials to make accelerated payments available. “The longer this disruption persists, the more difficult it will be for hospitals to continue to provide comprehensive health care services to patients,” he said.

In a statement, Senator Schumer said he was pleased by the H.H.S. announcement because it “will get cash flowing to providers as our health care system continues to reel from this cyberattack.” He added, “The work cannot stop until all affected providers have sufficient financial stability to weather this storm and continue serving their patients.”

Audio produced by Jack D’Isidoro.

Reed Abelson covers the business of health care, focusing on how financial incentives are affecting the delivery of care, from the costs to consumers to the profits to providers. More about Reed Abelson

Julie Creswell is a business reporter covering the food industry for The Times, writing about all aspects of food, including farming, food inflation, supply-chain disruptions and climate change. More about Julie Creswell
