Does a seed aspire to root itself into the ground to catch water, to reach into the sky to catch the sun? Or does the seed’s programming simply make it sink more into the mud until it is suddenly drinking and stretch into the air until it is suddenly breathing?
# Creating a DIKUW chart
As described in Data, Information, Knowledge, Understanding, and Wisdom, we make data from phenomena we observe in the world. When we add meaning to data, we gain information, which helps us make decisions and take actions. When we put information into practice and learn from the process, we gain knowledge. When we combine knowledge with other knowledge — especially knowledge about knowledge, such as knowledge about whether the quality of the knowledge we have is good, or knowledge about how to apply knowledge — we gain understanding. Each of these levels helps us with efficiency (Ackoff, 1989): we use them to attain a specific goal, and as we gain data, information, knowledge, and understanding about that goal, we get better at attaining it. That is, we learn to attain that goal more predictably while consuming fewer resources each time.

However, it is no use being efficient if we are efficient at doing the wrong thing. To be effective, according to Ackoff (1989), is to do the right thing. Effectiveness is about attaining the most valuable goals. Data, information, knowledge, and understanding can only help us be more efficient in the attainment of goals — but to judge whether those goals are effective, we need wisdom. Design is the art and science of designating importance: of finding and framing the right problems, determining the criteria for solving those problems, and solving them by finding solutions that best fit those criteria (Simon, 1995). Wisdom is therefore produced by applying design to understandings. We do that by asking (and seeking the answers to) questions like “What problems are most important?” “Which gaps in our knowledge are critical?” “What do we not understand or know?” “Where are we wasting efficiency on the wrong goals?” Questions like these form the roots and foundations of the theories we use to explain and predict the world and to prescribe the most valuable goals and how to attain them efficiently.
Okay: so what? How do we use these ideas about data, information, knowledge, understanding, and wisdom (DIKUW) to better ourselves and our organizations? A DIKUW chart is a simple way to put these ideas into practice. DIKUW charts combine Sanders’s (2015) sensemaking framework with Basadur’s challenge mapping (Basadur et al., 2000). DIKUW charts connect our key, critical questions with what we know and what we don’t know — and how we know, or how we can learn, those things. The goal of a DIKUW chart is to model our wisdom, understanding, knowledge, information, data, and the phenomena that we derive them from. Creating a DIKUW chart is therefore a way of practicing wisdom: of reflecting on what matters and what doesn’t, identifying the important gaps, and charting a path to resolving those gaps.
Figure 1 provides a demonstrative sample of a DIKUW chart. This sample chart was developed to inform a research project on data crowdsourcing. At the top of the chart are the theories we have (theory of classification, theory on conceptual modelling) and the theories we are trying to build (“how to design for crowdsourcing contributions in data crowdsourcing”). It then breaks those theories down into components, all the way down to the phenomena that make up the problems and solutions we are concerned with.
Figure 1. A sample DIKUW chart, adapted from Murphy and Parsons (2020).

Note that the important thing is not the exact segmentation of each level or the correct classification of each element into a given layer, but outlining a clear path from the questions we need to answer to the phenomena that provide the answers.

The process of developing a DIKUW chart is simple. However, it is also iterative. As you go through this process, work in pencil, or with marker and sticky notes. Be willing to revise, move, remove, and add to the chart as it develops.
Begin with a goal. What are you trying to achieve? Remember that while wisdom is about deciding what the best goal is, there is no perfect answer here. Design is tentative and iterative. Start somewhere — anywhere — and give yourself permission to change later. In figure 1, the goal is designing for [better] crowdsourcing contributions in data crowdsourcing projects. Write that goal at the top of a chart.
Then, ask: what understandings do you need to achieve that goal as efficiently as possible? Write these understandings down as separate elements on the chart and connect them to the goal you’ve identified. In our example, a key issue in crowdsourcing is the variety of contributors that might contribute to a given crowdsourcing project. Different contributors may have different skillsets, expertise, or backgrounds that influence the qualities of data they are able to share with the project. So, the understanding highlighted in figure 1 is that crowdsourcing projects should account for this variance.
Third, what knowledge can we use to inform that understanding? Connect these pieces of knowledge to the understandings you added previously. In the example above, we know that the interfaces people use to contribute to crowdsourcing projects come from the conceptual model of the project: how the project designers see the world, who they expect contributors to be, and how they expect them to be able to contribute. They design the contribution interface according to that conceptual model, so potential contributors who do not match the conceptual model of the project designers might run into friction when they try to contribute (e.g., they might not speak the same language as the interface, leading to mistakes). However, we don’t know what influences contributor motivation. This is where the DIKUW starts to become useful: by identifying what we don’t know, we learn what we need to learn.
Fourth, what information provides the knowledge we just noted? In the example, this takes the form of the conclusions of scholarly studies — but this need not be the case. Information is simply data with meaning. Any input you can think of that confirms your knowledge is worth adding to the chart. So, add the pieces of information you can think of that contribute to the knowledge you identified before and connect them in the chart. At the same time, think about what information would provide the knowledge you are looking for. Remember that the purpose of a DIKUW chart is not only to map what you know, but also to map what you need to figure out. Add these ideas and connect them as well.
Fifth, what data drives that information? Label these on the map and connect them accordingly. In the example, experiments and surveys informed the scholarly studies we cited. There are many other kinds of data that could be useful, however. Reports, dashboards, or metrics in your organization, for instance, provide valuable sources of data that may be transformed into information in service of your developing theory.
Sixth, what are the phenomena in the world that we turn into those data? Consumer behaviour? Products and services? Staff functions and roles? Anything goes, as long as it can be observed and turned into data. Add these phenomena.
As you go through these steps, feel free to re-work the map. Does something you added as understanding actually fit better as knowledge? Move it! Are you thinking of new key questions as you work? Add them! Does something you’ve just added map to multiple parts of the chart? Sketch the relationships out. Did you just think of a new piece of data or information that might be useful, but you’re not sure how yet? Put it down and see what comes of it.
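For readers who think in code, the layered structure the steps above produce can be modelled as a small graph. The sketch below is a hypothetical illustration — the layer names follow the chart’s levels, but the element names and the `known` flag are invented for this example:

```python
from dataclasses import dataclass, field

LAYERS = ["phenomenon", "data", "information", "knowledge", "understanding", "wisdom"]

@dataclass
class Element:
    name: str
    layer: str                                    # one of LAYERS
    known: bool = True                            # False marks a gap still to be filled
    supports: list = field(default_factory=list)  # names of elements one layer up

@dataclass
class DIKUWChart:
    goal: str
    elements: dict = field(default_factory=dict)

    def add(self, element: Element) -> None:
        # Guard against elements that don't belong to any DIKUW layer.
        if element.layer not in LAYERS:
            raise ValueError(f"unknown layer: {element.layer}")
        self.elements[element.name] = element

    def gaps(self) -> list:
        # The chart's payoff: what do we still need to learn?
        return [e.name for e in self.elements.values() if not e.known]

chart = DIKUWChart(goal="Design for better crowdsourcing contributions")
chart.add(Element("contributor interviews", "data"))
chart.add(Element("what influences contributor motivation", "knowledge", known=False))
print(chart.gaps())  # → ['what influences contributor motivation']
```

The `gaps()` method reflects the point made earlier: the chart’s value lies less in cataloguing what you know than in surfacing what you still need to learn.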
The purpose of the chart is not actually the chart itself, but what you learn from the process. The goal of engaging in a DIKUW charting exercise is to reflect on your learning or your organization’s learning. The chart is a working model of what you know, how you know it, what you need to know, and how you can figure it out. So, remember that all models are wrong (Box, 1976) — but it is your responsibility to make sure that this model is useful.
# References
Ackoff, R. (1989). From data to wisdom. Journal of Applied Systems Analysis, 16, 3–9. http://www-public.imtbs-tsp.eu/~gibson/Teaching/Teaching-ReadingMaterial/Ackoff89.pdf
Basadur, M., Potworowski, J. A., Pollice, N., & Fedorowicz, J. (2000). Increasing understanding of technology management through challenge mapping. Creativity and Innovation Management, 9(4), 245–258. https://doi.org/10.1111/1467-8691.00198
Box, G. E. P. (1976). Science and statistics. Journal of the American Statistical Association, 71(356), 791–799. https://doi.org/10.2307/2286841
Murphy, R. J. A., & Parsons, J. (2020). What the crowd sources: A protocol for a contribution-centred systematic literature review of data crowdsourcing research. AMCIS 2020 Proceedings, 20, 6. https://core.ac.uk/download/pdf/326836069.pdf
Sanders, E. B.-N. (2015). The fabric of design wisdom. Current, 06. https://current.ecuad.ca/the-fabric-of-design-wisdom
Simon, H. A. (1995). Problem forming, problem finding and problem solving in design. Design & Systems, 245–257. http://digitalcollections.library.cmu.edu/awweb/awarchive?type=file&item=34208
# Design Science and Information Systems
# Introduction
Here, we explore the fundamental concepts and principles of design science, its application in information systems, and how it can be used to develop novel design theories. Design science is the combination of design (a discipline focused on using what we know to engage in effective, efficient, and ethical problem-solving) and science (a discipline focused on advancing what we know by iteratively and empirically solving problems). The interdiscipline of design science plays a crucial role in the development and implementation of information systems, helping organizations achieve their objectives efficiently and effectively.
# What is design?
When you hear that someone is a designer, what are your assumptions? What do you think they do? A common misconception about design is that design is about aesthetics. “Designer” fashion or furniture is a great example of this. So, often, when people talk about design, they are talking about the look of something. You might assume that a designer is a graphic designer, clothing designer, or product designer, and that their day-to-day work involves choosing colours, styles, layouts, and materials. Steve Jobs famously asserted that these assumptions are completely wrong:
Design is not just what it looks like and feels like. Design is how it works.
- Steve Jobs (1955–2011), co-founder and former CEO of Apple, as quoted by Walker, 2024
Designers don’t just think about what something looks or feels like. Designers decide what a thing does. The word “design” is, in fact, rooted in the Latin “de-signô”: “I mark.” A designer designates what is important about a thing, and what is not. Every person, in fact, engages in design work — whether they know it or not. You decide why you’re reading this reading: what is important about this experience for you? What is not? You decide what your goals are. In turn, you decide what your day should look like to help you achieve those goals. So, you are a designer, I am a designer, and we are all designers.
Of course, the designers developing information systems such as your employer’s Enterprise Resource Planning suite, the latest social media platform, or an e-commerce website must make different kinds of design decisions than the ones we all make every day. Designing information systems at this scale means appreciating that you are not the only stakeholder or user of the systems you’re designing. More crucially, it means understanding that the system you’re designing doesn’t really matter — those users and stakeholders do (Keil & Mark, 1994). Thus, most designers engage in an iterative design process that looks like this:
- Work with users/stakeholders to deeply discover the problems that the design must solve (and who exactly it must solve them for);
- Define the exact parameters, constraints, objectives, and success conditions for solutions for these problems for those users/stakeholders;
- Use feedback from tests to develop possible solutions and improve upon them; and
- Deliver, implement, scale, and sustain the new design.
That design process has variously been visualized as a “double diamond” (figure 1) or the “design squiggle” (figure 2).
Figure 1. The UK Design Council’s “double diamond” of design, from https://www.designcouncil.org.uk/our-resources/the-double-diamond/. The double diamond shows how design often follows two patterns of divergence and convergence.

Figure 2. The “design squiggle,” from https://thedesignsquiggle.com/about. The design squiggle famously illustrates the messiness of the design process.

Design decisions using this process range from major (“what does this policy do, and for whom?”) to minor (“do we make this function accessible via a menu or buttons?”). The iterative nature of this process is important. No design should ever be implemented without testing it and using the feedback from that testing to improve the design. Repeatedly moving through this process — iterating — is how that feedback cycle drives progress.
# Design science
Design science is a discipline combining design with the scientific method. As described by Hevner et al. (2004), it is a paradigm that harnesses our instinctive designing ability and formalizes it with scientific rigor. With roots in engineering and the sciences, design science adds structure to the discipline of design. As we will discuss, for instance, design science separates the ideas behind a design (its design principles and theory) from the ways in which we use those ideas (the designed tool itself, usually called “artifacts” or “instantiations”). Design science further applies empirical methods to design processes and challenges, with the goal of producing reliable, reproducible, and valid designs as outcomes of the design science process.
Design science plays a significant role in Management Information Systems (MIS). It offers a structured method to create innovative IT artifacts — products and processes — that are intended to meet specific organizational or business goals (Gregor & Hevner, 2013). Given the dynamic nature of MIS – marked by rapid technological advancements, diverse user requirements, and shifting business environments – the need for constant design and redesign is paramount. Thus, design science provides an essential framework for developing and refining MIS models, tools, and theories.
# Design principles
This section introduces the fundamental building block of design science: the design principle. Effectively designing, defining, developing, testing, and applying design principles is fundamental to any successful design. This section will help you understand what design principles are, their significance, how they can be identified across various domains, and the role they play in enhancing system performance.
Design principles can be thought of as the guidelines or rules followed in the creation or modification of a design artifact. They are prescriptive statements that indicate how to do something to achieve a goal (Gregor et al., 2020). Design principles provide guidance to people developing solutions to particular classes of problems, providing detailed behavioral prescriptions that establish how the related artifacts should be constructed or used (Gregor & Jones, 2007). In visual design, for example, principles like balance, contrast, and unity guide aesthetic decisions. For a trivial example, “do not make the date or location of the event small or hard to see” is a very important design principle in the design of an event poster. An example from designing information systems might be “provide user interfaces that allow the user to precisely control the system.” This design principle would guide a designer away from counterintuitive designs, such as a volume slider that requires the user to tilt the control and wait for the slider to fall “down” to the appropriate volume level (figure 3), or a volume controller that does not give the user exact feedback about the volume level of the device (e.g., Apple’s iOS Control Center volume control, introduced in iOS 7; figure 4).
Figure 3. A volume controller that works by tilting the UI element until the slider has slid to the desired volume level (from https://uxdesign.cc/the-worst-volume-control-ui-in-the-world-60713dc86950).

Figure 4. iOS 10’s Control Center showing the volume slider on the right side (from https://uxdesign.cc/the-worst-volume-control-ui-in-the-world-60713dc86950).

The identification and definition of design principles provides an important middle ground between understanding a problem and its potential solutions and actually delivering and implementing a solution. Imagine a company aiming to develop an innovative online learning platform to address the problem of access to quality education in rural areas. In this scenario, the identified problem is the inadequacy of current solutions in delivering high-quality education to remote regions. There are barriers to consider, such as inconsistent internet connectivity, limited resources, diverse learner profiles, and varied learning environments. We might assume the company understands this problem and its potential solutions thoroughly. Nonetheless, developing the platform without clearly defining design principles that address the key challenges of the platform’s users would be like trying to travel through uncharted terrain without a map. Just as you might head in the right direction through sheer luck, the company might invent a design that solves all of its users’ problems immediately. More likely, though, the company will get more things wrong than it gets right when it first begins. To navigate this uncharted terrain effectively, designers at the company need to map their understanding of the problem and their users to testable design principles, and then test and use those principles as guidelines to steer the design process. These principles could include:
- Inclusivity: Design the platform so that users with limited resources and abilities are not disadvantaged.
- Simplicity: Ensure the platform is easy to use, such that any major action requires a maximum of four steps after logging in.
- Flexibility: Allow users to follow adaptive learning paths, engaging with content and assessment in the order that best suits their needs and abilities.
- Resource efficiency: Optimize the technical requirements of using the platform so that it remains performant even when users’ networks have low Internet bandwidth and their devices have limited processing power and memory.
These principles provide direction and useful constraints for the platform’s designers, steering them towards solutions that will be celebrated by their users while preventing them from implementing ideas that disregard users’ needs. Design principles provide wayfinding and guardrails for designers on the road between finding a problem and solving it effectively, efficiently, and ethically.
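To make the idea of testable principles concrete, here is a minimal sketch of how two of the platform’s principles might become automated checks. The metric names and thresholds are invented for illustration; real limits would come from the definition work described above.

```python
# Hypothetical checks for two of the platform's design principles.
# Metric names and thresholds are invented examples, not real requirements.
principle_checks = {
    "simplicity": lambda m: m["max_steps_per_action"] <= 4,
    "resource efficiency": lambda m: m["page_weight_kb"] <= 500
    and m["min_bandwidth_kbps"] <= 256,
}

# Imagined measurements taken from a prototype of the platform.
prototype_metrics = {
    "max_steps_per_action": 5,  # a key flow takes five steps: one too many
    "page_weight_kb": 420,
    "min_bandwidth_kbps": 128,
}

failed = [name for name, check in principle_checks.items()
          if not check(prototype_metrics)]
print(failed)  # → ['simplicity']
```

Encoding principles this way gives the designers an early-warning signal: each prototype can be scored against the principles before any user ever sees it.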
Understanding design principles is a core aspect of leveraging design science, particularly in the development and management of information systems. The itemized, atomic nature of design principles is particularly important in design science: each principle can exist independently and is defined distinctly. Such a modular approach allows designers to decompose a complex design into its constituent principles, much like dismantling a complex machine into its individual components. This, in turn, allows designers to manipulate only some of these parts at once, facilitating the evaluation of specific aspects of the design while controlling the rest.
In conclusion, design principles offer crucial prescriptions for the design process. They provide clear, actionable, and theoretically grounded advice that directs designers towards their design goals. Further, they help break down the complexity of designs into testable components, facilitating the design science process.
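Because principles are itemized and atomic, each one can be captured as a small, self-contained record. The sketch below is our own illustration (the field names are invented), showing how atomicity keeps principles individually revisable:

```python
from dataclasses import dataclass

@dataclass
class DesignPrinciple:
    name: str          # short label for the principle
    prescription: str  # what the designer should do
    rationale: str     # why following it helps

# The event-poster principle from the text, captured as a record:
visibility = DesignPrinciple(
    name="Visibility of key details",
    prescription="Do not make the date or location of the event small or hard to see",
    rationale="A passerby should grasp when and where at a glance",
)

# Atomic principles can be listed, tested, and revised one at a time.
print(visibility.name)  # → Visibility of key details
```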
# Designs and what they make: Differentiating between design theories and design artifacts
At this point, you may be noticing the difference between a design and the objects we make with designs. This represents the essential duality of design science, and an important distinction: we use designs to make things. Designs themselves are fundamentally theories of the best way to do or make something. Designs are therefore fully represented by design theories: design principles and a number of other supporting ideas about those design principles (Gregor & Jones, 2007). Objects are design artifacts (or “instantiations”), the result of applying those design ideas to create or do tangible or intangible things (Hevner et al., 2004).
We can make this idea obvious with a simple example. Consider the following design principles for a chair:
- Use ultralightweight materials to make the chair easy to lift.
- Allow the chair to nest on top of other chairs so that they may be stacked.
- Provide a wide seat and movable arms so that the chair is comfortable for people of different shapes and sizes.
Now, imagine the chair that would result from applying those design principles. Easy, right? Yet, the chair you imagine and the chair that I imagine are undoubtedly different. Each person applying these design principles will instantiate the design in different ways. (Instantiation validity is the degree to which a given artifact stays true to its design; Lukyanenko & Parsons, 2013.)
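Under the simplifying assumption that each chair principle can be phrased as a yes/no check, a rough sketch of instantiation validity might score an artifact by the fraction of principles it satisfies. The weight and width thresholds below are invented for illustration:

```python
# The three chair principles as checkable predicates.
# Thresholds are invented for illustration, not part of the original design.
chair_principles = {
    "ultralightweight": lambda c: c["weight_kg"] <= 3.0,
    "stackable": lambda c: c["nests"],
    "wide, adjustable seat": lambda c: c["seat_width_cm"] >= 50 and c["arms_movable"],
}

def instantiation_validity(chair: dict) -> float:
    # Fraction of the design's principles this artifact satisfies.
    met = sum(1 for check in chair_principles.values() if check(chair))
    return met / len(chair_principles)

# Two designers instantiate the same design theory differently:
aluminium_chair = {"weight_kg": 2.4, "nests": True, "seat_width_cm": 55, "arms_movable": True}
oak_chair = {"weight_kg": 7.8, "nests": False, "seat_width_cm": 60, "arms_movable": True}

print(instantiation_validity(aluminium_chair))  # → 1.0 (fully true to the design)
print(instantiation_validity(oak_chair))        # only the seat principle holds
```

Both objects are chairs, but only one stays fully true to the design theory — which is exactly the gap that instantiation validity describes.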
This distinction between design theory and designed object facilitates several things. First, designs are more than just their design principles. A design theory provides the purpose of a design; all of the principles that guide its creation, implementation, and application; the basis of those principles; tests of the effectiveness of the resulting artifacts; and more. Design theories are therefore amalgamations of knowledge, principles, and other information that guide how to go about creating an artifact to solve a recurrent class of problems (Gregor & Jones, 2007).

Second, design artifacts or instantiations serve as physical proofs of concept for their design theories. These objects translate abstract design theories into actionable frameworks. By interacting with these objects in the real world, users can verify the efficacy of the design, giving credibility to the design theory.

Third, not all designs work as intended. Design artifacts are therefore not only proofs of concept, but feedback mechanisms. This allows for the examination and modification of a design without necessarily impacting the existing artifacts or instantiations. Just as an architect may amend a building plan for future buildings without physically altering the construction of the existing one, designers can revise their design theories to optimize future outcomes without directly changing the objects already produced. We can consider each instantiation as a tangible case study, offering insights into the design’s effectiveness. The successes or shortcomings observed in existing objects can feed back into the redesigning of the theories, leading to continual improvement in the designs themselves. This cycle is the design science research cycle, and we will discuss it further in a later section.
# Developing design theories
As explained in the preceding section, a design theory is a complete articulation of a design (Gregor & Jones, 2007). A simplification of Gregor and Jones’s (2007) anatomy of an information systems design theory (ISDT) is provided below, consisting of the following components:
- Purpose or problem: the exact issue the design aims to address/improve upon
- Success conditions: the conditions that must be true for us to know that this problem is “solved”
- Context: the domain of the problem at hand
- Users: who will be working with the results of the design, and what they’re trying to do
- Basis: the argument for the design; the evidence the designer draws from to assert that their design can and will work to achieve the purpose or problem for the users in this context
- Principles: the core, basic ideas that the design is leveraging for its solution, each composed of constructs, affordances, success conditions, and tests:
  - Constructs: the features, mechanisms, tools, and other artifacts that are the objects of the design
  - Affordances: what use of the artifacts does for the user to implement the principles and address the purpose/problem
  - Success conditions: what conditions must be true if the constructs successfully fulfill the affordances
  - Tests: evaluations that the designer and/or user can carry out to assess whether the success conditions are met
To develop a novel design theory for a given problem, start from the first component, and then work through the rest in sequence. Note that design principles are therefore derived from some basis: the justificatory kernel theories, drawn from experience or expertise, that indicate the potential of the design principles. It can be helpful to present design principles in tables of the following form:
Table 1. Presenting design principles.
| Principle | Constructs | Affordances | Success conditions | Tests |
| --- | --- | --- | --- | --- |
| A prescriptive statement that indicates how to do something to solve a problem or achieve a goal | Features or functions of the designed artifact that instantiate the design principle | The beneficial allowances that result from the construct, in terms of the principle | Conditions that must be true if the construct’s affordance fulfills the design principle | How to assess whether the success conditions have been satisfied |

# Example of a design theory
In this section, we will demonstrate the development of a design theory by adapting the Technology Acceptance Model (TAM) developed by Davis (1989) into the components described above. TAM is a widely-cited design theory in Management Information Systems that explains users’ acceptance and usage behavior towards technology.
- Purpose or Problem: TAM was developed to understand why users accept or reject information technology and what factors influence users’ technology acceptance.
- Success Conditions: TAM is successful when it accurately predicts or explains an individual’s acceptance of an information technology or system. It should be able to guide system design such that it appeals to user acceptance factors.
- Context: TAM is applicable in various contexts where individuals interact with technology or are introduced to new technological systems or tools.
- Users: Typically, the users of TAM as a design theory are IS designers, system analysts, organizations introducing new technologies, and researchers examining technology adoption and usage behavior.
- Basis: TAM is grounded in Fishbein and Ajzen’s (1975) theory of reasoned action, extended to accommodate the acceptance of information systems. The theory of reasoned action is supported by numerous empirical validations and serves as a theoretical foundation for subsequent research proposing extended and modified versions of TAM.
- Principles: TAM suggests that Perceived Usefulness (PU) and Perceived Ease of Use (PEU) determine an individual’s intention to use a system. These principles are developed into constructs, affordances, success conditions, and tests in the table below.
Table 2. Design principles for the Technology Acceptance Model (Davis, 1989)
| Principle | Constructs | Affordances | Success conditions | Tests |
| --- | --- | --- | --- | --- |
| Ensure users perceive the technology as useful | Features that allow the user to accomplish things they were not otherwise capable of | Helps the user see how the technology enhances their capabilities | System is recognized as beneficial to tasks or work | User survey on perceived benefits |
| Ensure users perceive the technology as useful | Features that directly address user needs | Helps the user associate the technology with their personal objectives and tasks | Features are relevant to end-users and solve their problems | Task completion tests; user feedback on feature usefulness |
| Ensure users feel that the technology is easy to use | Intuitive user interface | Users are able to immediately use the technology without experiencing confusion or unexpected results | Users can interact with the system easily | Usability testing; user feedback on interaction ease |
| Ensure users feel that the technology is easy to use | Instructional guides or help features within the system | Users feel capable of teaching themselves how to use the technology to the fullest | Users improve their own ability to use the technology over time | Longitudinal user feedback on technology utility; user support requests data |

In essence, TAM posits that a system is more likely to be accepted and used by individuals if they perceive it to be useful and easy to use.
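As a toy illustration of TAM’s core claim, intention to use can be sketched as a weighted combination of perceived usefulness and perceived ease of use. The weights below are invented for illustration and are not Davis’s (1989) empirical estimates; TAM itself only claims that both factors drive intention.

```python
def intention_to_use(pu: float, peu: float, w_pu: float = 0.6, w_peu: float = 0.4) -> float:
    # pu and peu are perceived usefulness / ease-of-use scores in [0, 1].
    # The weights are hypothetical, chosen only to make the example concrete.
    if not (0 <= pu <= 1 and 0 <= peu <= 1):
        raise ValueError("scores must be in [0, 1]")
    return round(w_pu * pu + w_peu * peu, 3)

print(intention_to_use(0.9, 0.8))  # → 0.86 (useful and easy: likely adoption)
print(intention_to_use(0.9, 0.2))  # → 0.62 (useful but frustrating to use)
```

Even in this crude form, the sketch captures the table’s message: a system that scores poorly on either factor sees its predicted acceptance fall.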
# The Science in and of Design Science
Now that we have a deep understanding of how we develop design theory, let’s explore how we combine design with science to study what we make. In this section, you will learn how the scientific method is applied to test designs, review the design science research process, and see how this is put into practice via a hypothetical example.
If you’re familiar with the philosophy, processes, and practices of science, the “science” of design science is actually quite straightforward. Design science simply employs the scientific method to validate design theory. In other words, it essentially follows the scientific cycle of making an observation, developing a hypothesis that explains that observation, empirically testing that hypothesis, then making more observations based on our tests. This is the fundamental rigorous cycle of scientific knowledge. What makes design science distinct from the other sciences, however, is that it involves two additional cycles: a design cycle and a relevance cycle (figure 5). For our current purposes, the details of these cycles do not matter. Simply note that these interlocking cycles drive advancement in three places at once:
- Through the rigour cycle, we develop a better understanding of the world, particularly about the effects and natures of technology.
- Through the design cycle, we build better products, processes, and programs (design artifacts) and improve our ability to evaluate their efficacy.
- Through the relevance cycle, we improve our understanding of problems and opportunities and help to make progress on both.
Figure 5. The Three-Cycle View of Design Science Research, adapted from Hevner (2007).

To conduct design science, Peffers et al. (2007) offer a robust Design Science Research Methodology (DSRM; figure 6). The DSRM is a commonly used structure for creating and evaluating constructs and artifacts within the Information Systems discipline. It consists of six key activities, which can be executed in a largely sequential fashion but also allow iterative feedback loops:
- Problem Identification and Motivation: Define the research problem and substantiate the relevance of the solution. A comprehensive understanding of the problem is developed through landscape review, gap identification, and insight into why and how an existing problem affects stakeholders. (This is where a design theory’s problem/purpose, context, and users might be defined.)
- Objectives Definition for a Solution: Infer the objectives of a solution based on the identified problem. The solution’s requirements and effects are established, clarifying the success conditions for the resulting designed artifacts.
- Design and Development: Create an artifact for a specific problem. This is where design principles and their constructs, affordances, success conditions, and tests would be defined.
- Demonstration: Demonstrate the use of the artifact in a suitable context. This phase involves use-case scenarios, simulations, or detailed examples to explain how the designed artifact can be applied to solve the stated problem. This stage is our opportunity to see if our design results in artifacts that satisfy the success conditions using evaluation (the next step in the DSRM).
- Evaluation: Evaluate the artifact. The artifact’s performance in solving the problem is evaluated using suitable methods as specified in the design theory and principles, which may be observational, experimental, analytical, testing, or descriptive, depending upon the research context and the artifact itself.
- Communication: Communicate the problem, the artifact, its novelty, its usefulness, and its effectiveness to relevant audiences. This involves consolidating and disseminating research outcomes for the intended audience, which could include academic peers as well as industry practitioners.
Peffers et al. (2007) suggest that researchers may enter this process at different stages depending on the nature of the research, using what they call “entry points”. The research activities are not strictly sequential but typically recur iteratively as the research progresses, allowing the researcher to loop back to previous stages based on the findings at a current stage.
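The six activities and the entry-point idea can be sketched as a toy program. This is purely illustrative and not part of Peffers et al.’s (2007) methodology: the activity names come from the list above, while the `run_dsrm` function and its loop-back rule are invented for demonstration.

```python
# Illustrative sketch only: the DSRM's six activities as data, plus a
# hypothetical iteration rule (Evaluation loops back to Design and
# Development until an iteration budget is spent).
DSRM_ACTIVITIES = [
    "Problem Identification and Motivation",
    "Objectives Definition for a Solution",
    "Design and Development",
    "Demonstration",
    "Evaluation",
    "Communication",
]

def run_dsrm(entry_point: str, max_iterations: int = 3) -> list[str]:
    """Walk the activities from a chosen entry point, recording the trace."""
    i = DSRM_ACTIVITIES.index(entry_point)
    trace = []
    iterations = 0
    while i < len(DSRM_ACTIVITIES):
        activity = DSRM_ACTIVITIES[i]
        trace.append(activity)
        # Invented rule: after Evaluation, loop back to Design and
        # Development until the iteration budget runs out.
        if activity == "Evaluation" and iterations < max_iterations - 1:
            iterations += 1
            i = DSRM_ACTIVITIES.index("Design and Development")
        else:
            i += 1
    return trace

trace = run_dsrm("Design and Development", max_iterations=2)
```

Entering at Design and Development with a budget of two iterations, the sketch passes through Evaluation twice before ending at Communication, mirroring how a researcher might loop back to earlier stages based on evaluation findings.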
The Design Science Research Methodology (DSRM) process model, from Peffers et al., 2007.
# An example of the Design Science Research Methodology
Here we explore an example of the DSRM. Consider the following hypothetical case focused on a manufacturing company.
- Problem Identification and Motivation: Suppose that inefficiencies in resource allocation within a manufacturing company are causing increased costs, production delays, and waste. Changes to production plans and simple errors both cause schedules to shift unpredictably in mid-production. This means that the company must become better at dynamically adapting the manufacturing schedule.
- Objectives Definition for a Solution: The company needs to become better at adapting the manufacturing schedule in real time to keep up with changes and unexpected issues. A new design will be successful if changes can be made to the schedule in the middle of operations such that those changes and their consequences are appropriately propagated downstream to all aspects of production affected by the change.
- Design and Development: Given the above problem and objectives, the design team begins by exploring the basis for potential solutions to this challenge. They know that recent advancements in data analytics, machine learning, and AI have led to algorithms that are effective at modelling complex problems and, given new input data, can adapt the model to suit. So, the team develops design principles focused on a model-based adaptive scheduling algorithm that can draw on real-time data to maintain a model of the production schedule and adapt it to incoming changes, dynamically scheduling resources to minimize costs and delays.
- Demonstration: The AI-based scheduling tool is put to use in a simulated environment reflecting real-world conditions and constraints of a production factory. Various scenarios are run to assess the performance of the AI tool across a range of situations, including both regular and high-stress (rush orders, limited resource availability, etc.) production cycles.
- Evaluation: Using a test dataset reflecting both regular and outlier conditions, the performance of the AI tool is evaluated by comparing it against manual scheduling outcomes and outputs from traditional non-AI scheduling software. Common evaluation metrics could include fulfillment timeline, cost efficiency, and resource utilization rate. The results of these evaluations are used to validate our initial hypotheses.
- Communication: If the AI scheduling tool is successful in significantly improving resource allocation efficiency, the results would be disseminated in a stream of research papers. These would detail the problem, the tool design, the tests performed, the evaluation methodologies, and the resulting impact of the AI tool. This information would be invaluable to both academics studying AI applications and industry practitioners in manufacturing.
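The Evaluation activity above can be made concrete with a small sketch. All of the schedulers, metric names, and numbers below are invented for illustration; the point is simply how the AI tool’s outcomes might be compared against manual scheduling and traditional software on common metrics.

```python
# Hypothetical evaluation results: each scheduler's outcomes on the same
# test scenarios. All values are invented for illustration.
results = {
    "ai_tool":   {"fulfillment_days": 12.0, "cost": 9800.0,  "utilization": 0.86},
    "manual":    {"fulfillment_days": 16.0, "cost": 11500.0, "utilization": 0.71},
    "non_ai_sw": {"fulfillment_days": 14.0, "cost": 10400.0, "utilization": 0.78},
}

def improvement(metric: str, baseline: str, candidate: str,
                lower_is_better: bool = True) -> float:
    """Relative improvement of the candidate over the baseline on one metric."""
    b, c = results[baseline][metric], results[candidate][metric]
    delta = (b - c) / b if lower_is_better else (c - b) / b
    return round(delta, 3)

# e.g. cost saving of the AI tool relative to manual scheduling
cost_gain = improvement("cost", "manual", "ai_tool")
```

Lower values are better for fulfillment time and cost, while higher is better for utilization, so the direction of comparison is passed explicitly.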
This hypothetical project demonstrates how the development of a design theory follows the DSRM process, thus ensuring a comprehensive and methodical approach to problem-solving that contributes to rigour, design, and relevance.
# The impact and ethics of design in Management Information Systems
There are professions more harmful than industrial design, but only a very few of them.
- Victor Papanek (1923-1998), advocate for socially and ecologically responsible design (Papanek, 1997)
Imagine a company developing an MIS to optimize its hiring processes. The system incorporates algorithms intended to screen candidates rapidly. However, the algorithm is inadvertently biased against candidates from certain demographics, leading to discriminatory hiring practices. This scenario unfolds silently, masked by the perceived impartiality of technology. The repercussions extend beyond individual missed opportunities; they perpetuate systemic inequalities and erode trust in technology as a tool for fairness. This hypothetical example illustrates the potential consequences of neglecting ethical considerations in MIS design—a failure that can reinforce societal disparities and undermine ethical standards.
At this point, it is worth underscoring that every decision is a design decision. In Management Information Systems, that means that every decision should be guided by design to encourage the usability, accessibility, efficiency, and functionality of our information systems. The issue with this is that for many kinds of decisions, our design theories are implicit. That means that we have not put in the work to understand, articulate, and make tangible the problems we’re solving, who we’re solving them for, the context we’re working within, what success looks like, or what ideas underpin the success of our designs — let alone how we know if those ideas are being properly implemented. Unfortunately, the consequences of bad design are like an iceberg. Some of the challenges caused by bad design (for instance, a poorly-performing system that is hard to use) will be easy to see. However, as technology is increasingly and intractably woven into the fabric of business and society, some of the most significant ramifications of poorly-designed systems are now hidden beneath the surface of these tools. These decisions dictate how systems interact with users, influence organizational processes, and potentially affect broader societal norms. Given the ubiquitous integration of MIS in daily operations and personal lives, these systems can shape behaviors, protect or endanger privacy, and foster or hinder inclusivity. The weight of these decisions places a moral imperative on designers and developers to recognize and wield their power responsibly. As a result, good design is not only paramount for making effective and efficient systems — it is also essential for making ethical systems.
In the book Ruined by Design, Monteiro (2019) argues that designers are not neutral parties in the creation of technology; they are gatekeepers of ethics and morality. Design in MIS is not merely about creating efficient systems; it’s about making decisions that affect users’ lives, data privacy, and societal norms. Monteiro emphasizes that every design choice wields power and influence, embedding within it the potential for significant societal impact. This underscores a multifold responsibility: To not only fulfill business objectives, but also to safeguard user rights, to anticipate long-term systemic effects of changes to technology, and to advocate for positive change. Remember the key lesson at the beginning of this module: we are all designers. Thus, as professionals working with MIS, we must extend our purview beyond using data and systems to consider the ethical implications of our work. This includes respecting user privacy, ensuring data security, and actively resisting features that could exploit or harm users or other stakeholders.
To navigate the ethical complexity of design, adopting a structured framework is essential. Monteiro (2019, chapter 1) advocates adopting the following designer’s code of ethics:
- A designer is first and foremost a human being.
- A designer is responsible for the work they put into the world.
- A designer values impact over form.
- A designer owes the people who hire them not just their labor, but their counsel.
- A designer welcomes criticism.
- A designer strives to know their audience.
- A designer does not believe in edge cases.
- A designer is part of a professional community.
- A designer welcomes a diverse and competitive field.
- A designer takes time for self-reflection.
By adopting such a code of ethics, we are challenged to consider the potential ramifications of our designs, making decisions that stretch beyond functionality or aesthetics to also address issues of privacy, accessibility, sustainability, and safety. By encouraging criticism, user-centeredness, professionalism, diversity, and self-reflection, the code of ethics also gives us mechanisms and behaviours to ensure we can support and protect this ethical orientation.
Nonetheless, the challenge of implementing these ethical foundations in the real world is significant. It requires:
- Education and Awareness: Professionals must be educated on the ethical implications of MIS design, including ongoing training and development.
- Ethical Leadership: Organizations need leaders who prioritize ethical considerations in strategic decisions and who can serve as ethical role models.
- Policies and Governance: Robust policies and governance structures should be established to enforce ethical practices in design of MIS.
- User and Stakeholder Advocacy: Establishing roles or teams dedicated to human rights and ethical design should ensure that user and stakeholder welfare is a priority in system design and implementation.
In an era where technology shapes almost every aspect of our lives, the ethical considerations of MIS design are paramount. It is clear that the task of creating ethical systems is complex but critically important. We, as professionals and designers in this field, have a profound responsibility. We are not only designing systems; we are designing the future experiences of everyone who interacts with or is affected by those systems.
# Conclusion
In this note, you explored the design of information systems, learning to develop design principles and design theories, to separate design from designed artifact, and to study the effectiveness of our designs via design science research methods.
Design is fundamentally a process of decision-making, a means of knowing what holds importance and what does not. It is through the act of ‘designation’ that we make design decisions which lead to the creation of valuable artifacts. To design is an act of ‘deciding’ what matters. This point underscores the significance of MIS design decisions in shaping systems. Remember that every tool has an implicit design theory, and being explicit about these theories allows us to build, adopt, and use these tools more effectively, efficiently, and ethically.
Design science builds on the idea of applying scientific methods to the realm of design, fostering a rigorous, evaluative, and iterative approach towards developing design theories and principles. The frameworks developed from existing theories guide the formation of novel and useful artifacts: solutions to recurrent problems within their specific domains.
Do not forget about the importance of ethical considerations in design. Design is not just about determining how to make the most performant systems. Design is also about appreciating the complex ramifications of the choices we make during the design process. Design ethics challenge us to be accountable for our design decisions and to anticipate their consequences.
# Key points to remember
- To design is to designate; to mark. Design is about deciding what is important (and what is not).
- Design science is the application of the scientific method to design theories, principles, and artifacts.
- All tools have design theories, though they may be implicit.
- You have and use design theories, though they may be implicit.
- Being explicit about our design theories helps us to build, adopt, and use tools more effectively.
# References
Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319–340. https://doi.org/10.2307/249008
Dresch, A., Lacerda, D. P., & Antunes, J. A. V. (2015). Design science research. In A. Dresch, D. P. Lacerda, & J. A. V. Antunes Jr (Eds.), Design science research: A method for science and technology advancement (pp. 67–102). Springer International Publishing. https://doi.org/10.1007/978-3-319-07374-3_4
Fishbein, M., & Ajzen, I. (1975). Belief, attitude, intention, and behavior: An introduction to theory and research. Addison-Wesley.
Gregor, S., & Hevner, A. R. (2013). Positioning and presenting design science research for maximum impact. MIS Quarterly, 37(2), 337–A6. https://www.jstor.org/stable/43825912
Gregor, S., & Jones, D. (2007). The anatomy of a design theory. Journal of the Association for Information Systems, 8(5), 25. https://aisel.aisnet.org/jais/vol8/iss5/19/
Gregor, S., Kruse, L., & Seidel, S. (2020). Research perspectives: The anatomy of a design principle. Journal of the Association for Information Systems, 21, 1622–1652. https://doi.org/10.17705/1jais.00649
Hevner, A. R. (2007). A three cycle view of design science research. Scandinavian Journal of Information Systems, 19(2), 87–92.
Hevner, A. R., March, S. T., Park, J., & Ram, S. (2004). Design science in information systems research. MIS Quarterly, 28(1), 75–105. https://www.jstor.org/stable/25148625#metadata_info_tab_contents
Lukyanenko, R., & Parsons, J. (2013). Reconciling theories with design choices in design science research (pp. 165–180). Springer, Berlin, Heidelberg. https://link.springer.com/chapter/10.1007/978-3-642-38827-9_12
Markus, M. L., & Keil, M. (1994). If we build it, they will come: Designing information systems that people want to use. MIT Sloan Management Review, 35(4), 11–25. https://sloanreview.mit.edu/article/if-we-build-it-they-will-come-designing-information-systems-that-people-want-to-use
Monteiro, M. (2019). Ruined by design: How designers destroyed the world, and what we can do to fix it (p. 221). Independently published.
Papanek, V. J. (1997). Design for human scale. Van Nostrand Reinhold.
Peffers, K., Tuunanen, T., Rothenberger, M. A., & Chatterjee, S. (2007). A design science research methodology for information systems research. Journal of Management Information Systems, 24(3), 45–77. https://www.tandfonline.com/doi/abs/10.2753/MIS0742-1222240302
Walker, R. (2003, November 30). The guts of a new machine. The New York Times Magazine. https://web.archive.org/web/20240204042142/https://www.nytimes.com/2003/11/30/magazine/the-guts-of-a-new-machine.html
-
Data, Information, Knowledge, Understanding, and Wisdom
# Data, Information, Knowledge, Understanding, and Wisdom
# Why does data matter?
Before we discuss what data is and how it is used, we should strive to understand why data matters.
The Data, Information, Knowledge, Understanding, and Wisdom hierarchy is a simple mental model useful in appreciating the role of data in management and innovation. This hierarchy was first formally developed by Russ Ackoff, a forerunner in the study of management information systems. Ackoff described the hierarchy in a 1988 address to the International Society for General Systems Research, and it was reproduced by the Journal of Applied Systems Analysis in 1989 (Ackoff, 1989). There have since been many interpretations and iterations of the hierarchy (usually in the form of “DIKW”). In this article, I present my own, in which I strive to present the hierarchy as pragmatically as possible. You will see how the DIKUW hierarchy is a fundamental paradigm underpinning how we manage and innovate with data.
There are actually six levels to the hierarchy. It begins with phenomena: things that exist and are happening in the world. We turn phenomena into data by observing and capturing that observation. We turn data into information by adding context to it. We turn information into knowledge by applying the information to something. We turn knowledge into understanding by critiquing that knowledge: diagnosing problems in what we know and prescribing what we still need to learn. Finally, we turn understanding into wisdom by formalizing our understandings in the form of theories. In the next section we discuss each of these levels in more detail.
# Phenomena
Before we can have data, we must have something to represent with data. That something is phenomena: things that are happening in the world. In fact, the world itself is made of phenomena. Phenomena are the material, concrete stuff affecting and making up the world.1 When we observe phenomena, they become something we can think about: data. In other words, underlying all wisdom, understanding, knowledge, information, and data, are the phenomena we are trying to observe and capture.
To make this more concrete, consider buying a coffee at a café. Almost certainly, that café has an information system of some kind supporting its business. When you purchase that coffee, the information system models at least one kind of phenomenon: money. So what kind of data may be generated about that transaction? The value of the sale, the proportion going to taxes, and how the value of the sale adds to the café’s revenues for the day. If you have a membership or rewards card for the café, the information system also models a different phenomenon: you. It registers that you (or at least, someone who has your rewards card, or who knows your rewards number) made a purchase. It probably knows what that purchase was, and associates the kind of coffee you bought with its model of you. All of these phenomena are observed and potentially captured by the café.
Reflect: what other kinds of phenomena are interacting with the café? (I.e., what else might people do while they are there?) Which of these might be interesting for the business of the café?
# Data
When we observe something about the world, and especially when we record that observation (in the form of a note, or an image, or a value in a spreadsheet or database), we capture phenomena in the form of data. When a café’s information system observes and captures records of you and your purchases, it is creating data to represent those phenomena. Data helps us ask the simplest of questions about phenomena, even if we didn’t observe them directly. What has happened? When and where did it happen? Who was involved? For instance, an analyst working for the café can use the records created by its information system to see how many coffees were purchased in the past month without having to be present to observe each purchase themselves. This is a simple and obvious idea, but it is incredibly powerful.
You might note at this point that this observing and capturing need not depend on a computer system. Indeed, workers at the café could simply write down each purchase, its value, and the number on your rewards card. An analyst later could conceivably look back through written ledgers to re-observe your purchase. What computers do is make it much easier for someone to consume data: to use the data to do something.
After all, as Palmer (2006) observes, “data is the new oil”. Technological and methodological innovations of the past several decades have turned data into an invaluable resource that many prize as the ultimate modern good. However, Palmer also notes data’s important caveat: it is not useful by itself. Just like crude oil, data needs to be refined in order to be useful. Computers help us to query (that is, to look through) and refine data in order to use it.
# Information
When we first review data, we add context. We relate pieces of data with one another, including metadata (data about data, such as how the data was captured) and our own perspective (such as why we’re reviewing the data to begin with, our assumptions about the data, or other things that we’re relating the data to in our minds).2 In doing so, we create information. Information is data imbued with meaning (Bellinger et al., 2004) — and information is useful.
Information helps us ask basic questions about the current world. What is happening? When and where does it happen? Who is involved? Most importantly: what happens if we do this or that? As a result, we can combine and compare information to produce patterns (if that happens, then this happens) and to find outliers.
Consider the café example again. A simple question the café might have is “what products are the least popular?” Since the café has been recording each purchase made, this question is answered with a simple query of the purchases data. An analyst using this data would be able to quickly return a ranked ordering of all products by volume. This information could be very useful for a café looking to simplify its operations or cut down on costs.
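A sketch of the kind of query described above, assuming (hypothetically) that purchases are available as a simple list of product names:

```python
from collections import Counter

# Hypothetical purchase log; in practice this would come from the café's
# transaction records.
purchases = [
    "latte", "espresso", "latte", "drip", "latte",
    "espresso", "milk steamer", "drip", "latte",
]

# Rank products by purchase volume, least popular first.
volume = Counter(purchases)
ranked = sorted(volume.items(), key=lambda kv: kv[1])
# ranked[0] is now the least frequently purchased product.
```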
# Knowledge
Knowledge is produced when we apply information, especially when we develop ways to apply information systematically (i.e., when we can create instructions for others to act on similar information in similar ways to produce similar results). For instance, when we find a pattern (if that happens, then this happens) we can say (if that happens again, we should expect this, and therefore do [something]). Or, if we do this, then that will happen. Knowledge therefore helps us to start asking the most important kinds of questions: questions about futures.3 What will happen? When and where will it happen? Who will be involved? What if we do this or that?
With a ranked list of product purchases by volume, it would be easy for a café manager looking to cut costs to identify which purchases to remove from service. They can ask the question, “What would happen if we removed the three least popular products from our product lines?” and answer it with “It would have a negligible effect on revenues.”
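One rough way to answer the manager’s question is to compute the revenue share of the three least popular products. The figures below are invented for illustration:

```python
# Hypothetical monthly revenue per product (invented numbers).
revenue_by_product = {
    "latte": 5200.0, "espresso": 3100.0, "drip": 2400.0,
    "tea": 150.0, "milk steamer": 60.0, "hot chocolate": 90.0,
}

# The three lowest-revenue products and their share of total revenue.
bottom_three = sorted(revenue_by_product, key=revenue_by_product.get)[:3]
share = sum(revenue_by_product[p] for p in bottom_three) / sum(revenue_by_product.values())
# A share of roughly 3% would support the "negligible effect" answer.
```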
# Understanding
Understanding is produced when we develop knowledge about knowledge. As we interact with knowledge, we begin to detect gaps, problems, and errors. For instance, if we expected this to happen, but that happened instead. Understanding is therefore diagnostic: it allows us to detect, identify, and investigate problems and errors. It is also prescriptive: it allows us to identify, specify, and theorize about what we now need to know (Ackoff, 1989).
For instance, through understanding, we develop the best kinds of questions to ask about a given problem, the most valuable types of tests to try on a new idea before we implement it. We begin to know what we don’t know — and what we need to know.
Understanding is therefore the difference between learning and memorizing. It is simple enough to memorize a set of instructions (i.e., to memorize knowledge). Some instructions are simple. How to tie a set of shoelaces is a good example. To solve the problem of tying your shoelaces, it is sufficient to have the knowledge of how to tie a set of shoelaces. You can then apply those exact same instructions to every set of laced shoes you will ever encounter. No further knowledge is necessary. Other instructions, however, are complicated: consider launching a rocket to geostationary orbit. The instructions to do so were quite tricky to figure out. As a civilization, we have gotten fairly good at launching rockets, but even now, some of our rocket launches nonetheless fail. In these complicated problems, memorization is insufficient. This is where the diagnostic and prescriptive role of understanding becomes important. To solve complicated problems, we must not only apply previously-memorized knowledge, but also learn how to seek new knowledge in order to make progress. Thus, we must be able to detect, identify, and investigate gaps and errors in our knowledge — and to specify and theorize about what kinds of knowledge we need to obtain in order to resolve those gaps and errors.
To illustrate the value of understanding, return again to the hypothetical café. You may have thought that there are other kinds of questions to ask: who purchases those products? When, and why? Perhaps the least frequently purchased product is a milk steamer, bought by parents of young kids who are also buying a number of other products in each transaction. If the steamer were removed from the menu, maybe those parents would take all of their other purchases elsewhere. Thus, a system tracking not only products purchased but also who purchases them may be used by an analyst to generate higher-quality knowledge from more information. That analyst, however, needs to have an understanding of the phenomena of the domain — that is, the relationship between different kinds of customers and different kinds of products — and must use that understanding to critically question the knowledge they are seeking from the café’s information system.
# Wisdom
Ackoff (1989) notes that data, information, knowledge, and understanding help us increase efficiency. That is, given a particular predetermined goal, we use data, information, knowledge, and understanding to attain that goal, and to attain it more predictably and with fewer resources (time, money, or whatever). In other words, these are operational layers. With more phenomena, more data, more information, more knowledge, and more understanding, we can do more, better, faster, and more easily. However, how do we judge what we should be doing and how we should be doing it? Ackoff (1989) argues that effective judgment comes from wisdom: the development and application of values. He therefore places wisdom at the “top” of the DIKUW hierarchy. However, Ackoff (1989) does not describe where wisdom comes from nor how it is developed with respect to phenomena, data, information, knowledge, or understanding. For that, we turn to the discipline of design.
To design is to designate; to mark or give status to something, in order to decide what is important about a given thing. In other words, deciding upon values is a design decision. Liz Sanders, a scholar of design and design research, argues that the transformation of data into information, knowledge, and understanding is an analytical process: that means it involves investigating and breaking down the things that we’ve observed (Sanders, 2015). Sanders explains that wisdom guides the analysis of data through the mechanism of theory: our higher-level explanations, predictions, and prescriptions of how the phenomena we are interested in work (Gregor, 2006). For instance, our theory of our café’s operations might include ideas like “more product lines mean more effort involved in making them” and “fewer purchases of a product indicate that a product is not a valuable offering”. Similarly, when we organize and make sense of data, information, knowledge, and understanding, we synthesize and generate new theory — and therefore build up our wisdom about the phenomena we are interested in. So, when the café’s managers apply the ideas mentioned immediately above to the information that milk steamers are low-frequency purchases, they may conclude with a prescription: cut milk steamers from the menu in order to reduce operational complexity and cut costs while having minimal impact on the café’s ability to provide value to its customers. So, that is where wisdom comes from: building up and formalizing our understandings about phenomena into theories that explain, predict, and prescribe those phenomena.
# So why does data matter?
Crucially, phenomena happen whether or not we observe them. A coffee company’s regular customers may start feeling less satisfied about their newly-purchased beans whether or not the company is seeking feedback on their latest roast. Similarly, data means nothing unless we add context to it. Worsening reviews of the coffee company’s latest roast become useful only when we combine them with the realization that something about the roasting process has changed. Likewise, information is useless unless we apply it. If the coffee company’s customers are liking the latest roast less, maybe their blend of coffee beans needs further fine-tuning. However, our knowledge might be wrong. Maybe it wasn’t the bean blend, but the roasting process, or a competitor’s latest light roast, or a change of season (coldbrew, anyone?). We must understand the problems we’re dealing with as deeply as possible — to recognize if they are even the right problems to solve. After all, if the coffee company’s regular customers like the latest roast less … but they are bringing in many new customers with the change in approach, maybe it is the right thing to do.
The truth is that data matters, but only if we are collecting the right data. And even then, data doesn’t matter — not without information, knowledge, understanding, and wisdom.
That, however, leads us to a more nuanced question: how do we know if we have the right data? In fact, in many contexts, it’s not a case of “right data” or “wrong data.” Instead, we think of data quality as a spectrum. Moreover, there is no one way of thinking about data quality. Therefore, we must consider different data qualities, or dimensions of data quality, and make judgements about which ones matter the most for whatever we’re trying to achieve.
# From data quality to data qualities
Wand and Wang (1996) surveyed a set of notable data quality dimensions, finding that the five most cited dimensions were accuracy, reliability, timeliness, relevance, and completeness. Data accuracy measures the degree to which data precisely captures the phenomena it is supposed to represent. For example, if a transaction processing system were to record a $4.95 transaction as $5, it would be imprecise. Reliability is related to accuracy — it refers to the degree to which data correctly represents the phenomena it was meant to capture. The $5 transaction previously mentioned is not necessarily incorrect, even if it is imprecise. Timeliness refers to the degree to which captured data is out of date. A count of a population — say, the number of polar bears living in the wild — is out of date as soon as a polar bear is born or dies. Relevance is relative to what the data is used for. A count of the population of polar bears living in captivity is largely irrelevant if a data user is wondering about the health of wild polar bear populations. Finally, completeness is the degree to which captured data fully represents a phenomenon. For instance, a transaction processing system may only capture money in and money out, but a more complete record of each transaction would include what was bought or sold, who bought it, the transaction type, and so on.
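These dimensions can also be checked programmatically. The sketch below tests three of them (completeness, timeliness, and a crude proxy for accuracy) against a hypothetical transaction record; the field names, record format, and thresholds are all assumptions for illustration.

```python
from datetime import date

# Hypothetical required fields for a café transaction record.
REQUIRED_FIELDS = {"amount", "timestamp", "product", "customer_id"}

def quality_report(record: dict, today: date, max_age_days: int = 30) -> dict:
    """Assess a single record on three data quality dimensions."""
    missing = REQUIRED_FIELDS - record.keys()
    age = (today - record["timestamp"]).days if "timestamp" in record else None
    return {
        # Completeness: are all expected fields present?
        "complete": not missing,
        "missing_fields": sorted(missing),
        # Timeliness: is the record recent enough for our purpose?
        "timely": age is not None and age <= max_age_days,
        # Accuracy (crudely): was the amount captured to the cent,
        # or rounded to a whole dollar?
        "cent_precision": "amount" in record and record["amount"] != round(record["amount"]),
    }

report = quality_report(
    {"amount": 5.0, "timestamp": date(2024, 1, 2), "product": "latte"},
    today=date(2024, 1, 20),
)
```

A record of a $4.95 sale captured as $5.00 would fail the `cent_precision` check here, echoing the accuracy example above.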
However, while these five dimensions are perhaps generally the most important for information systems design and use, Mahanti (2019) has demonstrated that there may be many more dimensions that matter to a given data producer or data consumer. She has identified a total of 30 dimensions of data quality (Mahanti, 2019, p. 7) that are worth considering for any information system. By definition, anyone using an information system will be collecting data and trying to use that data to produce information, knowledge, understanding, and possibly wisdom. So, they should ask themselves: which of these are the most important for my project? Then: how do I know if this data is good enough for my purposes? Finally: how might I make sure I improve my data on these dimensions in the future?
# What about ChatGPT?
The latest advancements in data capabilities — namely, ChatGPT and similar generative AI tools — are excellent examples underscoring the significance of the DIKUW hierarchy. These tools are perhaps the greatest example we have ever had of the power of data. Ask ChatGPT, Microsoft Copilot/Bing, or any of their competitors a question and you will generally get a sophisticated, conversational answer in response.
These tools are probabilistic generators. This is how they work: given some input (a “prompt”), they return the most likely response to that input, based on the patterns found in their training data. Their responses are injected with a little bit of randomness (their “temperature” setting controls how much), so they rarely offer the exact same response to the exact same input. These tools achieve this functionality because they are fundamentally built of data: researchers pointed very powerful computers at the vast amounts of data that now exist on the Internet and trained those computers to learn the patterns found in that data.
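The role of temperature can be sketched in a few lines: dividing a model’s raw scores (“logits”) by the temperature before converting them to probabilities flattens or sharpens the distribution the next token is sampled from. The logits and candidate tokens below are made up for illustration:

```python
import math
import random

def softmax_with_temperature(logits, temperature=1.0):
    """Convert raw scores into sampling probabilities.
    Lower temperature sharpens the distribution (more deterministic);
    higher temperature flattens it (more random)."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # hypothetical scores for three candidate tokens
cold = softmax_with_temperature(logits, temperature=0.2)
hot = softmax_with_temperature(logits, temperature=10.0)

# At low temperature, nearly all probability mass lands on the top-scoring
# token; at high temperature, the candidates become nearly equally likely.
token = random.choices(["the", "a", "an"], weights=hot)[0]
```

Real models do this over tens of thousands of candidate tokens at every step, but the mechanism is the same.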
Given a prompt, these tools can do wondrous things with data, transforming it into information and knowledge. However, they need understanding in the form of well-constructed prompts in order to produce the most useful information and knowledge. Moreover, they must be paired with a wise human in order to produce useful output, as they have no real concept of their own gaps and errors. Uncritical use of these tools has had disastrous consequences for foolish (or at least unaware) users: the tools will fabricate falsehoods while seeming absolutely confident that the output they’re producing is correct. (These fabrications are called “hallucinations.”)
This is where the DIKUW hierarchy must be re-emphasized: the quality of output of these probabilistic generators varies greatly depending on the quality of input. Thus, users of ChatGPT and similar tools have learned that there are ways to artfully craft or even engineer prompts for these tools. We are now seeing the emergence of markets for high-quality prompts (e.g., https://promptbase.com/) and training for prompt engineering (e.g., https://github.blog/2023-07-17-prompt-engineering-guide-generative-ai-llms/). Working with “AI” has therefore become a modern skill that may grow in relevance and value as generative tools become more prominent.
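One common prompt-engineering pattern is to wrap a question in a structured template that supplies a role, context, and output-format constraints. The template below is purely illustrative, reflecting common advice rather than any particular tool’s API:

```python
# A hypothetical prompt template; the slots (role, context, task, format)
# are one common structure, not a requirement of any specific tool.
TEMPLATE = """You are {role}.

Context:
{context}

Task: {task}

Answer in {output_format}. If you are unsure, say so explicitly."""

def build_prompt(role, context, task, output_format="plain prose"):
    """Assemble a structured prompt from its parts."""
    return TEMPLATE.format(role=role, context=context, task=task,
                           output_format=output_format)

prompt = build_prompt(
    role="a data-quality analyst",
    context="A transaction log with known completeness problems.",
    task="List the three most likely causes of the missing fields.",
    output_format="a numbered list",
)
```

The value of templates like this is consistency: the same understanding about what makes a good prompt gets applied every time, rather than being reinvented per query.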
-
Note that this objectivity does not mean that things that are conventionally “subjective,” such as someone’s opinion about something, is not a phenomenon. Indeed, once someone has formed such an opinion, they have materialized it into a phenomenon that can be objectively observed (such as by a listener hearing the opinion) and captured as data. ↩︎
-
This is particularly important in serendipity, as serendipitous discoveries are the result of both an observation and a valuable association of that observation with some other idea. ↩︎
-
Bellinger, Castro, and Mills (2004) interpret Ackoff as saying that wisdom is the only dimension of the hierarchy concerned with the future (“Only the fifth category, wisdom, deals with the future because it incorporates vision and design”, para. 4). I disagree, however: Ackoff writes about how knowledge describes instruction, and instruction requires prediction, and therefore some expectation about how current actions will influence future results; the same goes for understanding. Moreover, Bellinger et al. are inconsistent in this interpretation, as they later describe similar ideas about knowledge and prediction. ↩︎
-
-
Softerware - techniques as technology
As we develop our techniques and practices in knowledge innovation, we tend to find certain workflow patterns that we do over and over again. Knowledge workers are increasingly finding ways to augment these patterns with automations, macros, shortcuts, and other such tool add-ons. In the process of developing and refining these patterns with automation, we are writing “softerware”. Softerware is the layer of practices and protocols between us — users — and the tools we are using to achieve our goals.
For instance, my approach to literature review is basically a semi-systematic literature review (Okoli, 2015). When I have a research interest that I’ve not explored in detail before, I begin by searching the same several databases with the same search techniques, opening each potentially-interesting item in a new tab until I’ve reached some point of saturation (e.g., I am no longer finding interesting-looking items relevant to my interest at the time). Then, I’ll go through each tab, screening each article more critically. If an article passes my screen, I’ll then add it to my collection of literature on the subject by saving it (and its metadata) with Zotero, and finally I import the newly-collected items into Bookends.
There’s a lot of hand-waving there, but the details don’t really matter. What I’ve highlighted in the above description of my workflow are the elements of softerware: the events where what I do depends especially on how I do it. These events are important because that causal relationship runs in both directions: over time, I change how I do something, and that in turn changes what I do.
Another important feature of softerware is that it tends to be unpublished. These workflows are crafted in private, perhaps not explicitly or even intentionally by the worker. So the best softerware in the world may not be known to anyone but the person who made it!
-
On serendipity and knowledge
A great debate in the philosophy of knowledge (where knowledge is defined as “justified true beliefs”) is known as the “Gettier problems.” The debate is this: if you think you know something, and that something turns out to be true, but not for the reasons you thought … does it count as knowledge?
I tend to agree with the pragmatic view of Gettier problems: the only thing that matters is whether knowledge, when used, is fruitful for the reasons it was justified, true, and believed.
This has implications for serendipity. In serendipitous observations, the knowledge we generate was not necessarily justified or believed a priori. Only in retrospect does the “knowledge” become useful.
-
A PopClip extension for highlighting text in Obsidian
A simple extension for PopClip that will present an “insert highlight” icon when you select text in Obsidian.
```yaml
#popclip
name: Highlight
required apps: [md.obsidian]
requirements: [text, cut]
actions:
  - title: Highlight # note: actions have a `title`, not a `name`
    icon: iconify:ant-design:highlight-twotone
    javascript: popclip.pasteText('==' + popclip.input.text + '==')
```
-
Theory of Systemic Change and Action
Theories of Change are one of the fundamental tools of changemakers and program evaluation (Mackinnon, 2006). However, when addressing wicked problems (Rittel & Webber, 1973), theories of change are too reductive and linear to properly account for the systemic phenomena, structures, and dynamics that perpetuate the issues we’re trying to address (Murphy & Jones, 2020).
Theories of Systemic Change and Action (ToSCA) are a systemic design tool that combine theories of change with systemic understanding. The result is a theory of change that is useful for understanding, communicating, and evaluating systemic change projects.
Here’s a rough guide to develop a ToSCA:
- Model the system (e.g., with causal loop diagrams; Kim, 1992).
- Develop systemic strategies from the model.
- Reorganize the modelled phenomena. From left to right:
  - Capability building and resource mobilization for the initiative (Inputs)
  - Interventional activities the initiative will take on (Activities)
  - Immediate outputs of those activities (Outputs)
  - Results of those outputs on the overall system (Outcomes)
  - Downstream effects of those outcomes on higher-system structures (Impacts)
- Iterate on step 3 as necessary.
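The reorganization in step 3 amounts to assigning each modelled phenomenon to one of five ordered columns. A minimal sketch, with invented phenomena standing in for a real causal loop model:

```python
# The five ToSCA columns, left to right.
STAGES = ["Inputs", "Activities", "Outputs", "Outcomes", "Impacts"]

# Hypothetical phenomena from a system model, tagged by stage.
phenomena = [
    ("funding secured", "Inputs"),
    ("community workshops run", "Activities"),
    ("workshop attendance", "Outputs"),
    ("local food-sharing norms", "Outcomes"),
    ("regional food security", "Impacts"),
]

# Group the phenomena into ordered columns for the diagram.
columns = {stage: [p for p, s in phenomena if s == stage] for stage in STAGES}
for stage in STAGES:
    print(f"{stage}: {', '.join(columns[stage])}")
```

In practice the causal links between phenomena come along for the ride, which is what makes the result a theory of change rather than just a sorted list.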
The resulting diagram will look somewhat like an iceberg model (Stroh, 2015, p. 46) on its side: visible events and behaviour are on the left, while the actual patterns and structures in the system fall to the right.
The ToSCA can then be simplified as necessary to suit different needs. For instance, if presenting the model briefly to a potential funder, you may want to collapse major feedback loops into one element on the model with a “loop” icon. This way you can still show inputs and outputs on that loop while obscuring the complexity within it for the purposes of the presentation.
-
Systemic Evaluation
Systemic evaluation is the developmental evaluation (Guijt et al., 2012) of systemic change.
Techniques for systemic evaluation combine conventional principles and tools of developmental evaluation with concepts from systemic design. These techniques provide changemakers with the ability to assess the accuracy and completeness of their theories of systemic change and action (Murphy & Jones, 2020). They also allow evaluators to examine the progress of systemic strategies (Murphy et al., 2021).
-
Systemic strategy
Systemic strategies use system phenomena, structure, and dynamics to help changemakers achieve their goals. These goals may simply be some specific outcome or objective, or they may include systemic change.
One approach to designing systemic strategies is:
- map the system
- analyze the system for features of leverage, possibly using leverage analysis
- identify any “goal” phenomena: the events or behaviours you seek to change
- identify any “intervention” phenomena: things you have direct influence over
- “walk” from the interventions to your goals to identify a theory of change, incorporating the features of leverage you find along the way
Each pathway you walk forms a strategy tree.
Are there other interventions that lead to the same goal? These are different roots for the same overall strategy.
Strategy trees can be combined into a strategy “forest”. A strategy forest is a collection of paths from interventions to goals in the system. Strategy forests can be assessed for different qualities to gauge which strategies an initiative should pursue.
- Different strategies that share common interventions may be the easiest to invest in and implement.
- Combinations of strategies that are self-perpetuating (e.g., that contain feedback loops that will innately drive their success) may be more valuable to pursue.
- These forests can also be tested (e.g., with wind-tunneling; van der Heijden, 1997, p. 23) to identify the best combinations of strategies to follow.
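The “walk” from interventions to goals can be sketched as path enumeration over a directed graph of the system model: every path is a branch of a strategy tree, and the collection of paths is the forest. The causal links below are invented for illustration:

```python
# A hypothetical causal graph: each phenomenon maps to the phenomena
# it directly influences.
graph = {
    "training program": ["staff skills"],
    "staff skills": ["service quality"],
    "new funding": ["staff skills", "outreach"],
    "outreach": ["participation"],
    "service quality": ["participation"],
    "participation": [],
}

def strategy_trees(graph, intervention, goal, path=None):
    """Enumerate every causal path from an intervention to a goal.
    Each path is one branch of a strategy tree."""
    path = (path or []) + [intervention]
    if intervention == goal:
        return [path]
    paths = []
    for nxt in graph.get(intervention, []):
        if nxt not in path:  # don't cycle forever through feedback loops
            paths.extend(strategy_trees(graph, nxt, goal, path))
    return paths

# A strategy "forest": all paths from each intervention to the shared goal.
forest = [p for i in ("training program", "new funding")
          for p in strategy_trees(graph, i, "participation")]
```

Shared intermediate nodes (here, “staff skills”) are exactly the common interventions noted above: investing in them advances several branches of the forest at once.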
Once strategies have been selected, identify the capabilities and resources that must be invested in or mobilized to effectively target the chosen interventions. Then, set up systemic evaluation processes to continually test the completeness and accuracy of your strategic theories and to assess progress towards the strategies’ goals.