• Creating a DIKUW chart

    Last updated Sep 25, 2024 | Originally published Sep 25, 2024

    As described in Data, Information, Knowledge, Understanding, and Wisdom, we make data from phenomena we observe in the world. When we add meaning to data, we gain information, which helps us make decisions and take actions. When we put information into practice and learn from the process, we gain knowledge. When we combine knowledge with other knowledge — especially knowledge about knowledge, such as knowledge about whether the quality of knowledge we have is good, or knowledge about how to apply knowledge — we gain understanding. Each of these levels helps us with efficiency (Ackoff, 1989): we use them to attain a specific goal, and as we gain data, information, knowledge, and understanding about that goal, we get better at attaining it. That is, we learn to attain that goal more predictably while consuming fewer resources each time. However, it is no use being efficient if we are efficiently doing the wrong thing. To be effective, according to Ackoff (1989), is to do the right thing. Effectiveness is about attaining the most valuable goals. Data, information, knowledge, and understanding can only help us be more efficient in the attainment of goals — but to judge whether those goals are effective, we need wisdom. Design is the art and science of designating importance: of finding and framing the right problems, determining the criteria for solving those problems, and solving them by finding solutions that best fit those criteria (Simon, 1995). Wisdom is therefore produced by applying design to understandings. We do that by asking (and seeking the answers to) questions like “What problems are most important?” “Which gaps in our knowledge are critical?” “What do we not understand or know?” “Where are we wasting efficiency on the wrong goals?” Questions like these form the roots and foundations of the theories we use to explain and predict the world and prescribe the most valuable goals and how to attain them efficiently.

    Okay: so what? How do we use these ideas about data, information, knowledge, understanding, and wisdom (DIKUW) to better ourselves and our organizations? A DIKUW chart is a simple way to put these ideas into practice. DIKUW charts combine Sanders’s (2015) sensemaking framework with Basadur’s challenge mapping (Basadur et al., 2000). DIKUW charts connect our key, critical questions with what we know and what we don’t know — and how we know, or how we can learn, those things. The goal of a DIKUW chart is to model our wisdom, understanding, knowledge, information, data, and the phenomena that we derive them from. Creating a DIKUW chart is therefore a way of practicing wisdom: of reflecting on what matters and what doesn’t, identifying the important gaps, and charting a path to resolving those gaps.

    Figure 1 provides a demonstrative sample of a DIKUW chart. This sample chart was developed to inform a research project on data crowdsourcing. At the top of the chart are the theories we have (a theory of classification, theory on conceptual modelling) and the theories we are trying to build (“how to design for crowdsourcing contributions in data crowdsourcing”). The chart then breaks those theories down into components, all the way down to the phenomena that make up the problems and solutions we are concerned with.

    Figure 1. A sample DIKUW chart, adapted from Murphy and Parsons, 2020. Note that the important thing is not the exact segmentation of each level or the correct classification of each element into a given layer, but outlining a clear path from the questions we need to answer to the phenomena that provide the answers.

    The process of developing a DIKUW chart is simple. However, it is also iterative. As you go through this process, work in pencil, or with marker and sticky notes. Be willing to revise, move, remove, and add to the chart as it develops.

    Begin with a goal. What are you trying to achieve? Remember that while wisdom is about deciding what the best goal is, there is no perfect answer here. Design is tentative and iterative. Start somewhere — anywhere — and give yourself permission to change later. In figure 1, the goal is designing for [better] crowdsourcing contributions in data crowdsourcing projects. Write that goal at the top of the chart.

    Then, ask: what understandings do you need to achieve that goal as efficiently as possible? Write these understandings down as separate elements on the chart and connect them to the goal you’ve identified. In our example, a key issue in crowdsourcing is the variety of contributors that might contribute to a given crowdsourcing project. Different contributors may have different skillsets, expertise, or backgrounds that influence the quality of the data they are able to share with the project. So, the understanding highlighted in figure 1 is that crowdsourcing projects should account for this variance.

    Third, what knowledge can we use to inform that understanding? Connect these pieces of knowledge to the understandings you added previously. In the example above, we know that the interfaces people use to contribute to crowdsourcing projects come from the conceptual model of the project: how the project designers see the world, who they expect contributors to be, and how they expect them to be able to contribute. They design the contribution interface according to that conceptual model, so potential contributors who do not match the conceptual model of the project designers might run into friction when they try to contribute (e.g., they might not speak the same language as the interface, leading to mistakes). However, we don’t know what influences contributor motivation. This is where the DIKUW chart starts to become useful: by identifying what we don’t know, we learn what we need to learn.

    Fourth, what information provides the knowledge we just noted? In the example, this takes the form of the conclusions of scholarly studies — but this need not be the case. Information is simply data with meaning. Any input you can think of that confirms your knowledge is worth adding to the chart. So, add the pieces of information you can think of that contribute to the knowledge you identified before and connect them in the chart. At the same time think about what information would provide the knowledge you are looking for. Remember that the purpose of a DIKUW chart is not only to map what you know, but also to map what you need to figure out. Add these ideas and connect them as well.

    Fifth, what data drives that information? Label these on the map and connect them accordingly. In the example, experiments and surveys informed the scholarly studies we cited. There are many other kinds of data that could be useful, however. Reports, dashboards, or metrics in your organization, for instance, provide valuable sources of data that may be transformed into information in service of your developing theory.

    Sixth, what are the phenomena in the world that we turn into those data? Consumer behaviour? Products and services? Staff functions and roles? Anything goes, as long as it can be observed and turned into data. Add these phenomena.
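
    If it helps to make the chart’s structure concrete, the same layered model can be sketched as a small data structure. The following Python sketch is illustrative only: the layer names follow the levels described above, but the element labels, the DikuwChart class, and its gaps() helper are hypothetical conveniences, not part of the DIKUW charting method.

    ```python
    # A minimal, hypothetical sketch of a DIKUW chart as a layered graph.
    # Layer names follow the article; the element labels, the DikuwChart class,
    # and its gaps() helper are illustrative assumptions, not part of the method.
    from dataclasses import dataclass, field

    LAYERS = ["wisdom", "understanding", "knowledge", "information", "data", "phenomena"]

    @dataclass
    class DikuwChart:
        elements: dict[str, str] = field(default_factory=dict)      # element -> layer
        links: list[tuple[str, str]] = field(default_factory=list)  # (upper, lower) connections

        def add(self, name: str, layer: str) -> None:
            assert layer in LAYERS, f"unknown layer: {layer}"
            self.elements[name] = layer

        def connect(self, upper: str, lower: str) -> None:
            self.links.append((upper, lower))

        def gaps(self) -> list[str]:
            # Elements with nothing connected beneath them: candidates for
            # "what we still need to figure out, and how."
            supported = {upper for upper, _ in self.links}
            return [name for name, layer in self.elements.items()
                    if layer != "phenomena" and name not in supported]

    chart = DikuwChart()
    chart.add("Design for better contributions in data crowdsourcing", "wisdom")
    chart.add("Projects should account for contributor variance", "understanding")
    chart.add("Interfaces reflect the designers' conceptual model", "knowledge")
    chart.add("What influences contributor motivation?", "knowledge")
    chart.connect("Design for better contributions in data crowdsourcing",
                  "Projects should account for contributor variance")
    chart.connect("Projects should account for contributor variance",
                  "Interfaces reflect the designers' conceptual model")

    print(chart.gaps())  # unsupported elements mark where the chart needs more beneath them
    ```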

    As you go through these steps, feel free to re-work the map. Does something you added as understanding actually fit better as knowledge? Move it! Are you thinking of new key questions as you work? Add them! Does something you’ve just added map to multiple parts of the chart? Sketch the relationships out. Did you just think of a new piece of data or information that might be useful, but you’re not sure how yet? Put it down and see what comes of it.

    The purpose of the chart is not actually the chart itself, but what you learn from the process. The goal of engaging in a DIKUW charting exercise is to reflect on your learning or your organization’s learning. The chart is a working model of what you know, how you know it, what you need to know, and how you can figure it out. So, remember that all models are wrong (Box, 1976) — but it is your responsibility to make sure that this model is useful.

    # References

    Ackoff, R. (1989). From data to wisdom. Journal of Applied Systems Analysis, 16, 3–9. http://www-public.imtbs-tsp.eu/~gibson/Teaching/Teaching-ReadingMaterial/Ackoff89.pdf

    Basadur, M., Potworowski, J. A., Pollice, N., & Fedorowicz, J. (2000). Increasing understanding of technology management through challenge mapping. Creativity and Innovation Management, 9(4), 245–258. https://doi.org/10.1111/1467-8691.00198

    Box, G. E. P. (1976). Science and statistics. Journal of the American Statistical Association, 71(356), 791–799. https://doi.org/10.2307/2286841

    Murphy, R. J. A., & Parsons, J. (2020). What the crowd sources: A protocol for a contribution-centred systematic literature review of data crowdsourcing research. AMCIS 2020 Proceedings, 20, 6. https://core.ac.uk/download/pdf/326836069.pdf

    Sanders, E. B.-N. (2015). The fabric of design wisdom. Current, 06. https://current.ecuad.ca/the-fabric-of-design-wisdom

    Simon, H. A. (1995). Problem forming, problem finding and problem solving in design. Design & Systems, 245–257. http://digitalcollections.library.cmu.edu/awweb/awarchive?type=file&item=34208

  • Design Science and Information Systems

    Last updated Sep 15, 2024 | Originally published Jan 19, 2024

    # Design Science and Information Systems

    # Introduction

    Here, we explore the fundamental concepts and principles of design science, its application in information systems, and how it can be used to develop novel design theories. Design science is the combination of design (a discipline focused on using what we know to engage in effective, efficient, and ethical problem-solving) and science (a discipline focused on advancing what we know by iteratively and empirically solving problems). The interdiscipline of design science plays a crucial role in the development and implementation of information systems, helping organizations achieve their objectives efficiently and effectively.

    # What is design?

    When you hear that someone is a designer, what are your assumptions? What do you think they do? A common misconception about design is that design is about aesthetics. “Designer” fashion or furniture is a great example of this. So, often, when people talk about design, they are talking about the look of something. You might assume that a designer is a graphic designer, clothing designer, or product designer, and that their day-to-day work involves choosing colours, styles, layouts, and materials. Steve Jobs famously asserts that these assumptions are completely wrong:

    Design is not just what it looks like and feels like. Design is how it works.

    • Steve Jobs (1955–2011), co-founder and former CEO of Apple, as quoted by Walker, 2024

    Designers don’t just think about what something looks or feels like. Designers decide what a thing does. The word “design” is, in fact, rooted in the Latin “de-signô”: “I mark.” A designer designates what is important about a thing, and what is not. Every person engages in design work — whether they know it or not. You decide why you are reading this: what is important about this experience for you? What is not? You decide what your goals are. In turn, you decide what your day should look like to help you achieve those goals. So, you are a designer, I am a designer, and we are all designers.

    Of course, the designers developing information systems such as your employer’s Enterprise Resource Planning suite, the latest social media platform, or an e-commerce website must make different kinds of design decisions than the ones we all make every day. Designing information systems at this scale means appreciating that you are not the only stakeholder or user of the systems you’re designing. More crucially, it means understanding that the system you’re designing doesn’t really matter — those users and stakeholders do (Markus & Keil, 1994). Thus, most designers engage in an iterative design process that looks like this:

    1. Work with users/stakeholders to deeply discover the problems that the design must solve (and who exactly it must solve them for);
    2. Define the exact parameters, constraints, objectives, and success conditions for solutions for these problems for those users/stakeholders;
    3. Develop possible solutions and improve upon them using feedback from tests; and
    4. Deliver, implement, scale, and sustain the new design.

    That design process has variously been visualized as a “double diamond” (figure 1) or the “design squiggle” (figure 2).

    Figure 1. The UK Design Council’s “Double Diamond” of design, which shows how design often follows two patterns of divergence and convergence. From https://www.designcouncil.org.uk/our-resources/the-double-diamond/

    Figure 2. The “Design Squiggle,” from https://thedesignsquiggle.com/about. The “design squiggle” famously illustrates the messiness of the design process.

    Design decisions using this process range from major (“what does this policy do, and for whom?”) to minor (“do we make this function accessible via a menu or buttons?”) The iterative nature of this process is important. No design should ever be implemented without testing it, and using the feedback from that testing to improve the design. Repeatedly moving through this process — iterating — is how that feedback cycle drives progress.

    # Design science

    Design science is a discipline combining design with the scientific method. As described by Hevner et al. (2004), it is a paradigm that harnesses our instinctive designing ability and formalizes it with scientific rigour. With roots in engineering and the sciences, design science adds structure to the discipline of design. As we will discuss, for instance, design science separates the ideas behind a design (its design principles and theory) from the ways in which we use those ideas (the designed tool itself, usually called “artifacts” or “instantiations”). Design science further applies empirical methods to design processes and challenges, with the goal of producing reliable, reproducible, and valid designs as outcomes of the design science process.

    Design science plays a significant role in Management Information Systems (MIS). It offers a structured method to create innovative IT artifacts — products and processes — that are intended to meet specific organizational or business goals (Gregor & Hevner, 2013). Given the dynamic nature of MIS – marked by rapid technological advancements, diverse user requirements, and shifting business environments – the need for constant design and redesign is paramount. Thus, design science provides an essential framework for developing and refining MIS models, tools, and theories.

    # Design principles

    This section introduces the fundamental building-block of design science: the design principle. Effectively designing, defining, developing, testing, and applying design principles is fundamental to any successful design. This section will help you understand what design principles are, their significance, how they can be identified across various domains, and the role they play in enhancing system performance.

    Design principles can be thought of as the guidelines or rules followed in the creation or modification of a design artifact. They are prescriptive statements that indicate how to do something to achieve a goal (Gregor et al., 2020). Design principles provide guidance to people developing solutions to particular classes of problems, offering detailed behavioral prescriptions that establish how the related artifacts should be constructed or used (Gregor & Jones, 2007). For example, in visual design, principles like balance, contrast, and unity guide aesthetic decisions. For a trivial example, “do not make the date or location of the event small or hard to see” is a very important design principle in the design of an event poster. An example from designing information systems might be “Provide user interfaces that allow the user to precisely control the system.” This design principle would guide a designer away from counterintuitive designs, such as a slider-based volume controller that requires the user to tilt the control and wait for the slider to fall “down” to the appropriate volume level (figure 3), or a volume controller that does not give the user exact feedback about the level of volume of the device (e.g., Apple’s iOS Control Center volume control, introduced in iOS 7; figure 4).

    Figure 3. A volume controller that works by tilting the UI element until the slider has slid to the desired volume level (from https://uxdesign.cc/the-worst-volume-control-ui-in-the-world-60713dc86950).

    Figure 4. iOS 10’s Control Center showing the volume slider on the right side (from https://uxdesign.cc/the-worst-volume-control-ui-in-the-world-60713dc86950).

    The identification and definition of design principles provides an important middle ground between understanding a problem and its potential solutions and actually delivering and implementing a solution to the problem. Imagine a company aiming to develop an innovative online learning platform to address the problem of access to quality education in rural areas. In this scenario, the identified problem is the inadequacy of current solutions in delivering high-quality education to remote regions. There are barriers such as inconsistent internet connectivity, limited resources, diverse learner profiles, and varied learning environments to consider. We might assume the company understands this problem and potential solutions thoroughly. Nonetheless, developing the platform without clearly defining design principles that address the key challenges of the platform’s users would be like trying to travel through uncharted terrain without a map. Just as you might head in the right direction through sheer luck, the company might invent a design that solves all of its users’ problems immediately. More likely, though, the company will get more things wrong than it gets right when it first begins. To navigate this uncharted terrain effectively, designers at the company need to map their understanding of the problem and their users to testable design principles, and then test and use those principles as guidelines to steer the design process. These principles could include:

    • Inclusivity: Design the platform so that users with limited resources and abilities are not disadvantaged.
    • Simplicity: Ensure the platform is easy to use such that any major action taken requires a maximum of four steps after logging in.
    • Flexibility: Allow users to follow adaptive learning paths, engaging in content and assessment in the order that best suits their needs and abilities.
    • Resource Efficiency: Optimize the technical requirements of using the platform so that it remains performant even when users’ networks have low Internet bandwidth and their devices have limited processing power and memory.

    These principles provide direction and useful constraints for the platform designers, encouraging them towards solutions that will be celebrated by their users while preventing them from implementing ideas that disregard their needs. Design principles provide wayfinding and guardrails for designers on the road between finding a problem and solving it effectively, efficiently, and ethically.

    Understanding design principles is a core aspect of leveraging design science, particularly in the development and management of information systems. The itemized, atomic nature of design principles is particularly important in design science: each principle can exist independently and is defined distinctly. Such a modular approach allows designers to decompose a complex design into its constituent principles, much like dismantling a complex machine into its individual components. This, in turn, allows designers to manipulate only some of these parts at once, facilitating the evaluation of specific aspects of the design while controlling the rest.

    In conclusion, design principles offer crucial prescriptions for the design process. They provide clear, actionable, and theoretically grounded advice that directs designers towards their design goals. Further, they help break down the complexity of designs into testable components, facilitating the design science process.

    # Designs and what they make: Differentiating between design theories and design artifacts

    At this point, you may be noticing the difference between a design and the objects we make with designs. This represents the essential duality of design science, and an important distinction: we use designs to make things. Designs themselves are fundamentally theories of the best way to do or make something. Designs are therefore fully represented by design theories: design principles and a number of other supporting ideas about those design principles (Gregor & Jones, 2007). Objects are design artifacts (or “instantiations”), the result of applying those design ideas to create or do tangible or intangible things (Hevner et al., 2004).

    We can make this idea obvious with a simple example. Consider the following design principles for a chair:

    1. Use ultralightweight materials to make the chair easy to lift.
    2. Allow the chair to nest on top of other chairs so that they may be stacked.
    3. Provide a wide seat and movable arms so that the chair is comfortable for people of different shapes and sizes.

    Now, imagine the chair that would result from applying those design principles. Easy, right? Yet, the chair you imagine and the chair that I imagine are undoubtedly different. Each person applying these design principles will instantiate the design in different ways. (Instantiation validity is the degree to which a given artifact stays true to its design; Lukyanenko & Parsons, 2013.)

    This distinction between design theory and designed object facilitates several things. First, designs are more than just their design principles. A design theory provides the purpose of a design, all of the principles that guide its creation/manifestation/implementation/application, the basis of those principles, tests of the effectiveness of the resulting artifacts, and more. Design theories are therefore amalgamations of knowledge, principles, and other information that guide how to go about creating the artifact to solve a recurrent class of problems (Gregor & Jones, 2007). Second, design artifacts or instantiations serve as physical proofs of concept for the design theories. These objects translate abstract design theories into actionable frameworks. By interacting with these objects in the real world, users can verify the efficacy of the design, giving credibility to the design theory. Third, not all designs work as intended. Design artifacts are therefore not only proofs of concept, but feedback mechanisms. This allows for the examination and modification of a design without necessarily impacting the existing artifacts or instantiations. Just as an architect may amend a building plan for future buildings without physically altering the construction of the existing one, designers can revise their design theories to optimize future outcomes without directly changing the objects already produced. We can consider each instantiation as a tangible case study, offering insights into the design’s effectiveness. The successes or shortcomings observed in existing objects can feed back into the redesigning of the theories, leading to continual improvement in the designs themselves. This cycle is the design science research cycle, and we will discuss it further in a later section.

    # Developing design theories

    As explained in the preceding section, a design theory is a complete articulation of a design (Gregor & Jones, 2007). A simplified version of Gregor and Jones’s (2007) anatomy of an information systems design theory (ISDT) is provided below, consisting of the following components:

    • Purpose or problem: the exact issue the design aims to address/improve upon
      • Success conditions: the conditions that must be true for us to know that this problem is “solved”
    • Context: the domain of the problem at hand
    • Users: who will be working with the results of the design, and what they’re trying to do
    • Basis: the argument for the design; the evidence the designer draws from to assert that their design can and will work to fulfill the purpose or solve the problem for the users in this context
    • Principles: the core, basic ideas that the design is leveraging for its solution, composed of constructs, affordances, and tests
      • Constructs: the features, mechanisms, tools, and other artifacts that are the objects of the design
      • Affordances: what the use of the artifacts does for the user to implement the principles and address the purpose/problem
      • Success conditions: what conditions must be true if the constructs successfully fulfill the affordances
      • Tests: evaluations that the designer and/or user can carry out to assess if the success conditions are met

    To develop a novel design theory for a given problem, start from the first component, and then work through the rest in sequence. Note that design principles are therefore derived from some basis: the justificatory kernel theories, drawn from experience or expertise, that indicate the potential of the design principles. It can be helpful to present design principles in tables of the following form:

    Table 1. Presenting design principles.

    | Principle | Constructs | Affordances | Success conditions | Tests |
    | --- | --- | --- | --- | --- |
    | A prescriptive statement that indicates how to do something to solve a problem or achieve a goal | Features or functions of the designed artifact that instantiate the design principle | The beneficial allowances that result from the construct, in terms of the principle | Conditions that must be true if the construct’s affordance fulfills the design principle | How to assess whether the success conditions have been satisfied |
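
    The same row structure can also be captured as a small record type, which keeps each principle itemized and individually testable. The sketch below is a minimal illustration in Python; the DesignPrinciple class and its example values (loosely based on the volume-control principle above) are assumptions, not an established schema.

    ```python
    # A minimal sketch: one record per design principle, mirroring the columns of Table 1.
    # The DesignPrinciple class and the example values are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class DesignPrinciple:
        principle: str                 # how to do something to solve a problem or achieve a goal
        constructs: list[str]          # features/functions of the artifact that instantiate the principle
        affordances: list[str]         # what those constructs allow the user to do, in terms of the principle
        success_conditions: list[str]  # what must be true if the affordances fulfill the principle
        tests: list[str]               # how to assess whether the success conditions are satisfied

    precise_control = DesignPrinciple(
        principle="Provide user interfaces that allow the user to precisely control the system.",
        constructs=["Volume slider with numeric feedback"],
        affordances=["The user can set an exact volume level in a single gesture"],
        success_conditions=["Users reach their intended volume without overshooting"],
        tests=["Usability test: time taken and error rate when setting a target volume"],
    )
    ```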

    # Example of a design theory

    In this section, we will demonstrate the development of a design theory by adapting the Technology Acceptance Model (TAM) developed by Davis (1989) into the components described above. TAM is a widely-cited design theory in Management Information Systems that explains users’ acceptance and usage behavior towards technology.

    • Purpose or Problem: TAM was developed to understand why users accept or reject information technology and what factors influence users’ technology acceptance.
    • Success Conditions: TAM is successful when it accurately predicts or explains an individual’s acceptance of an information technology or system. It should be able to guide system design such that it appeals to user acceptance factors.
    • Context: TAM is applicable in various contexts where individuals interact with technology or are introduced to new technological systems or tools.
    • Users: Typically, the users of TAM as a design theory are IS designers, system analysts, organizations introducing new technologies, and researchers examining technology adoption and usage behavior.
    • Basis: TAM is grounded in Fishbein and Ajzen’s (1975) theory of reasoned action, extended to accommodate the acceptance of information systems. The theory of reasoned action is supported by numerous empirical validations and serves as a theoretical foundation for subsequent research proposing extended and modified versions.
    • Principles: TAM suggests that Perceived Usefulness (PU) and Perceived Ease of Use (PEU) determine an individual’s intention to use a system. These principles are developed into constructs, affordances, success conditions, and tests in the table below.

    Table 2. Design principles for the Technology Acceptance Model (Davis, 1989)

    | Principle | Constructs | Affordances | Success Conditions | Tests |
    | --- | --- | --- | --- | --- |
    | Ensure users perceive the technology as useful | Features that allow the user to accomplish things they were not otherwise capable of | Helps the user see how the technology enhances their capabilities | System is recognized as beneficial to tasks or work | User survey on perceived benefits |
    | | Features that directly address user needs | Helps the user associate the technology with their personal objectives and tasks | Features are relevant to end-users and solve their problems | Task completion tests; user feedback on feature usefulness |
    | Ensure users feel that the technology is easy to use | Intuitive user interface | Users are able to immediately use the technology without experiencing confusion or unexpected results | Users can interact with the system easily | Usability testing; user feedback on interaction ease |
    | | Instructional guides or help features within the system | Users feel capable of teaching themselves how to use the technology to the fullest | Users improve their own ability to use the technology over time | Longitudinal user feedback on technology utility; user support requests data |

    In essence, TAM posits that a system is more likely to be accepted and used by individuals if they perceive it to be useful and easy to use.

    # The Science in and of Design Science

    Now that we have a deep understanding of how we develop design theory, let’s explore how we combine design with science to study what we make. In this section, you will learn how the scientific method is applied to test designs, review the design science research process, and see how this is put into practice via a hypothetical example.

    If you’re familiar with the philosophy, processes, and practices of science, the “science” of design science is actually quite straightforward. Design science simply employs the scientific method to validate design theory. In other words, it essentially follows the scientific cycle of making an observation, developing a hypothesis that explains that observation, empirically testing that hypothesis, then making more observations based on our tests. This is the fundamental rigour cycle of scientific knowledge. What distinguishes design science from the other sciences, however, is that it involves two additional cycles: a design cycle and a relevance cycle (figure 5). For our current purposes, the details of these cycles do not matter. Simply note that these interlocking cycles drive advancement in three places at once:

    1. Through the rigour cycle, we develop a better understanding of the world, particularly about the effects and natures of technology.
    2. Through the design cycle, we build better products, processes, and programs (design artifacts) and improve our ability to evaluate their efficacy.
    3. Through the relevance cycle, we improve our understanding of problems and opportunities and help to make progress on both.

    Figure 5. The Three-Cycle View of Design Science Research, adapted from Hevner, 2007.

    To conduct design science, Peffers et al. (2007) offer a robust Design Science Research Methodology (DSRM; figure 6). The DSRM is a commonly used structure for creating and evaluating constructs and artifacts within the Information Systems discipline. It consists of six key activities, which can be executed in a largely sequential fashion but also allow for iterative feedback loops:

    1. Problem Identification and Motivation: Define the research problem and substantiate the relevance of the solution. A comprehensive understanding of the problem is developed through landscape review, gap identification, and insight into why and how an existing problem affects stakeholders. (This is where a design theory’s problem/purpose, context, and users might be defined.)
    2. Objectives Definition for a Solution: Infer the objectives of a solution based on the identified problem. The solution’s requirements and effects are established, clarifying the success conditions for the resulting designed artifacts.
    3. Design and Development: Create an artifact for a specific problem. This is where design principles and their constructs, affordances, success conditions, and tests would be defined.
    4. Demonstration: Demonstrate the use of the artifact in a suitable context. This phase involves use-case scenarios, simulations, or detailed examples to explain how the designed artifact can be applied to solve the stated problem. This stage is our opportunity to see if our design results in artifacts that satisfy the success conditions using evaluation (the next step in the DSRM).
    5. Evaluation: Evaluate the artifact. The artifact’s performance in solving the problem is evaluated using suitable methods as specified in the design theory and principles, which may be observational, experimental, analytical, testing, or descriptive, depending upon the research context and the artifact itself.
    6. Communication: Communicate the problem, the artifact, its novelty, its usefulness, and its effectiveness to relevant audiences. This involves consolidating and disseminating research outcomes for their intended audiences, which could include academic peers as well as industry practitioners.

    Peffers et al. (2007) suggest that researchers may enter this process at different stages depending on the nature of the research, using what they call “entry points”. The research activities are not strictly sequential but typically recur iteratively as the research progresses, allowing the researcher to loop back to previous stages based on the findings at a current stage.

    Figure 6. The Design Science Research Methodology (DSRM) process model, from Peffers et al., 2007.

    # An example of the Design Science Research Methodology

    Here we explore an example of the DSRM: the following hypothetical project focused on a manufacturing company.

    1. Problem Identification and Motivation: Suppose that inefficiencies in resource allocation within a manufacturing company are causing increased costs, production delays, and waste. Changes to production plans and simple errors both cause schedules to shift unpredictably in mid-production. This means that the company must become better at dynamically adapting the manufacturing schedule.
    2. Objectives Definition for a Solution: The company needs to become better at adapting the manufacturing schedule in real time to keep up with changes and unexpected issues. A new design will be successful if changes can be made to the schedule in the middle of operations such that those changes and their consequences are appropriately propagated downstream to all aspects of production affected by the change.
    3. Design and Development: Given the above problem and objectives, the design team begins by exploring the basis for potential solutions to this challenge. They know that recent advancements in data analytics, machine learning, and AI have led to algorithms that are effective at modelling complex problems and, given new input data, can adapt the model to suit. So, the team develops design principles focused on a model-based adaptive scheduling algorithm that can draw on real-time data to maintain a model of the production schedule and adapt it to inputted changes, dynamically scheduling resources to minimize costs and delays.
    4. Demonstration: The AI-based scheduling tool is put to use in a simulated environment reflecting real-world conditions and constraints of a production factory. Various scenarios are run to assess the performance of the AI tool across a range of situations, including both regular and high-stress (rush orders, limited resource availability, etc.) production cycles.
    5. Evaluation: Using a test dataset reflecting both regular and outlier conditions, the performance of the AI tool is evaluated by comparing it against manual scheduling outcomes and outputs from traditional non-AI scheduling software. Common evaluation metrics could include fulfillment timeline, cost efficiency, and resource utilization rate. The results of these evaluations are used to validate our initial hypotheses. (A minimal sketch of such a comparison follows this list.)
    6. Communication: If the AI scheduling tool is successful in significantly improving resource allocation efficiency, the results would be disseminated in a stream of research papers. These would detail the problem, the tool design, the tests performed, the evaluation methodologies, and the resulting impact of the AI tool. This information would be invaluable to both academics studying AI applications and industry practitioners in manufacturing.
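
    To make the Evaluation activity concrete, here is a minimal sketch of such a comparison, assuming each scheduling approach yields a list of jobs annotated with completion time, cost, and machine hours. The metric names, records, and numbers are invented for illustration and are not part of Peffers et al.’s (2007) methodology.

    ```python
    # A minimal, hypothetical sketch of the Evaluation activity: compare two scheduling
    # approaches on the metrics named above. All job records and numbers are invented.
    from statistics import mean

    def evaluate(schedule: list[dict]) -> dict:
        """Summarize a schedule using the evaluation metrics discussed above."""
        return {
            "avg_fulfillment_days": mean(job["days_to_complete"] for job in schedule),
            "total_cost": sum(job["cost"] for job in schedule),
            "resource_utilization": sum(job["machine_hours_used"] for job in schedule)
                                    / sum(job["machine_hours_available"] for job in schedule),
        }

    manual_schedule = [
        {"days_to_complete": 9, "cost": 1200, "machine_hours_used": 60, "machine_hours_available": 80},
        {"days_to_complete": 7, "cost": 900, "machine_hours_used": 55, "machine_hours_available": 80},
    ]
    ai_schedule = [
        {"days_to_complete": 6, "cost": 1050, "machine_hours_used": 70, "machine_hours_available": 80},
        {"days_to_complete": 5, "cost": 850, "machine_hours_used": 68, "machine_hours_available": 80},
    ]

    for name, schedule in [("manual", manual_schedule), ("AI-based", ai_schedule)]:
        print(name, evaluate(schedule))
    ```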

    This hypothetical project demonstrates how the development of a design theory follows the DSRM process, ensuring a comprehensive and methodical approach to problem-solving that contributes to rigour, design, and relevance.

    # The impact and ethics of design in Management Information Systems

    There are professions more harmful than industrial design, but only a very few of them.

    • Victor Papanek (1923–1998), advocate for socially and ecologically responsible design (Papanek, 1997)

    Imagine a company developing an MIS to optimize its hiring processes. The system incorporates algorithms intended to screen candidates rapidly. However, the algorithm is inadvertently biased against candidates from certain demographics, leading to discriminatory hiring practices. This scenario unfolds silently, masked by the perceived impartiality of technology. The repercussions extend beyond individual missed opportunities; they perpetuate systemic inequalities and erode trust in technology as a tool for fairness. This hypothetical example illustrates the potential consequences of neglecting ethical considerations in MIS design—a failure that can reinforce societal disparities and undermine ethical standards.

    At this point, it is worth underscoring that every decision is a design decision. In Management Information Systems, that means that every decision should be guided by design to encourage the usability, accessibility, efficiency, and functionality of our information systems. The issue with this is that for many kinds of decisions, our design theories are implicit. That means that we have not put in the work to understand, articulate, and make tangible the problems we’re solving, who we’re solving them for, the context we’re working within, what success looks like, or what ideas underpin the success of our designs — let alone how we know if those ideas are being properly implemented. Unfortunately, the consequences of bad design are like an iceberg. Some of the challenges caused by bad design (for instance, a poorly-performing system that is hard to use) will be easy to see. However, as technology is increasingly and inextricably woven into the fabric of business and society, some of the most significant ramifications of poorly-designed systems are now hidden beneath the surface of these tools. These decisions dictate how systems interact with users, influence organizational processes, and potentially affect broader societal norms. Given the ubiquitous integration of MIS in daily operations and personal lives, these systems can shape behaviors, protect or endanger privacy, and foster or hinder inclusivity. The weight of these decisions places a moral imperative on designers and developers to recognize and wield their power responsibly. As a result, good design is not only paramount for making effective and efficient systems — it is also essential for making ethical systems.

    In the book Ruined by Design, Monteiro (2019) argues that designers are not neutral parties in the creation of technology; they are gatekeepers of ethics and morality. Design in MIS is not merely about creating efficient systems; it’s about making decisions that affect users’ lives, data privacy, and societal norms. Monteiro emphasizes that every design choice wields power and influence, embedding within it the potential for significant societal impact. This underscores a multifold responsibility: To not only fulfill business objectives, but also to safeguard user rights, to anticipate long-term systemic effects of changes to technology, and to advocate for positive change. Remember the key lesson at the beginning of this module: we are all designers. Thus, as professionals working with MIS, we must extend our purview beyond using data and systems to consider the ethical implications of our work. This includes respecting user privacy, ensuring data security, and actively resisting features that could exploit or harm users or other stakeholders.

    To navigate the ethical complexity of design, adopting a structured framework is essential. Monteiro (2019, chapter 1) advocates adopting the following designer’s code of ethics:

    • A designer is first and foremost a human being.
    • A designer is responsible for the work they put into the world.
    • A designer values impact over form.
    • A designer owes the people who hire them not just their labor, but their counsel.
    • A designer welcomes criticism.
    • A designer strives to know their audience.
    • A designer does not believe in edge cases.
    • A designer is part of a professional community.
    • A designer welcomes a diverse and competitive field.
    • A designer takes time for self-reflection.

    By adopting such a code of ethics, we are challenged to consider the potential ramifications of our designs and to make decisions that stretch beyond functionality or aesthetics to address issues of privacy, accessibility, sustainability, and safety. By encouraging criticism, user-centeredness, professionalism, diversity, and self-reflection, the code of ethics also gives us mechanisms and behaviours that support and protect this ethical orientation.

    Nonetheless, the challenge of implementing these ethical foundations in the real world is significant. It requires:

    • Education and Awareness: Professionals must be educated on the ethical implications of MIS design, including ongoing training and development.
    • Ethical Leadership: Organizations need leaders who prioritize ethical considerations in strategic decisions and who can serve as ethical role models.
    • Policies and Governance: Robust policies and governance structures should be established to enforce ethical practices in design of MIS.
    • User and Stakeholder Advocacy: Establishing roles or teams dedicated to human rights and ethical design should ensure that user and stakeholder welfare is a priority in system design and implementation.

    In an era where technology shapes almost every aspect of our lives, the ethical considerations of MIS design are paramount. It is clear that the task of creating ethical systems is complex but critically important. We, as professionals and designers in this field, have a profound responsibility. We are not only designing systems; we are designing the future experiences of everyone who interacts with or is affected by those systems.

    # Conclusion

    In this note, you explored the design of information systems, learning to develop design principles and design theories, to separate design from designed artifact, and to study the effectiveness of our designs via design science research methods.

    Design is fundamentally a process of decision-making, a means of knowing what holds importance and what does not. It is through the act of ‘designation’ that we make design decisions which lead to the creation of valuable artifacts. To design is an act of ‘deciding’ what matters. This point underscores the significance of MIS design decisions in shaping systems. Remember that every tool has an implicit design theory, and being explicit about these theories allows us to build, adopt, and use these tools more effectively, efficiently, and ethically.

    Design science builds on the idea of applying scientific methods to the realm of design, fostering a rigorous, evaluative, and iterative approach towards developing design theories and principles. The frameworks developed from existing theories guide the formation of novel and useful artifacts: solutions to recurrent problems within their specific domains.

    Do not forget about the importance of ethical considerations in design. Design is not just about determining how to make the most performant systems. Design is also about appreciating the complex ramifications of the choices we make during the design process. Design ethics challenge us to be accountable for our design decisions and to anticipate their consequences.

    # Key points to remember

    1. To design is to designate; to mark. Design is about deciding what is important (and what is not).
    2. Design science is the application of the scientific method to design theories, principles, and artifacts.
    3. All tools have design theories, though they may be implicit.
    4. You have and use design theories, though they may be implicit.
    5. Being explicit about our design theories helps us to build, adopt, and use tools more effectively.

    # References

    Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319–340. https://doi.org/10.2307/249008

    Dresch, A., Lacerda, D. P., & Antunes, J. A. V. (2015). Design science research. In A. Dresch, D. P. Lacerda, & J. A. V. Antunes Jr (Eds.), Design science research: A method for science and technology advancement (pp. 67–102). Springer International Publishing. https://doi.org/10.1007/978-3-319-07374-3_4

    Fishbein, M., & Ajzen, I. (1975). Belief, attitude, intention, and behavior: An introduction to theory and research. Addison-Wesley.

    Gregor, S., & Hevner, A. R. (2013). Positioning and presenting design science research for maximum impact. MIS Quarterly, 37(2), 337–A6. https://www.jstor.org/stable/43825912

    Gregor, S., & Jones, D. (2007). The anatomy of a design theory. Journal of the Association for Information Systems, 8(5), 25. https://aisel.aisnet.org/jais/vol8/iss5/19/

    Gregor, S., Kruse, L., & Seidel, S. (2020). Research perspectives: The anatomy of a design principle. Journal of the Association for Information Systems, 21(6), 1622–1652. https://doi.org/10.17705/1jais.00649

    Hevner, A. R. (2007). A three cycle view of design science research. Scandinavian Journal of Information Systems, 19(2), 87–92.

    Hevner, A. R., March, S. T., Park, J., & Ram, S. (2004). Design science in information systems research. MIS Quarterly, 28(1), 75–105. https://www.jstor.org/stable/25148625#metadata_info_tab_contents

    Markus, M. L., & Keil, M. (1994). If we build it, they will come: Designing information systems that people want to use. MIT Sloan Management Review, 35(4), 11–25. https://sloanreview.mit.edu/article/if-we-build-it-they-will-come-designing-information-systems-that-people-want-to-use

    Lukyanenko, R., & Parsons, J. (2013). Reconciling theories with design choices in design science research (pp. 165–180). Springer, Berlin, Heidelberg. https://link.springer.com/chapter/10.1007/978-3-642-38827-9_12

    Monteiro, M. (2019). Ruined by design: How designers destroyed the world, and what we can do to fix it (p. 221). Independently published.

    Papanek, V. J. (1997). Design for human scale. Van Nostrand Reinhold.

    Peffers, K., Tuunanen, T., Rothenberger, M. A., & Chatterjee, S. (2007). A design science research methodology for information systems research. Journal of Management Information Systems, 24(3), 45–77. https://www.tandfonline.com/doi/abs/10.2753/MIS0742-1222240302

    Walker, R. (2024). The guts of a new machine. The New York Times, 78. https://web.archive.org/web/20240204042142/https://www.nytimes.com/2003/11/30/magazine/the-guts-of-a-new-machine.html

  • Data, Information, Knowledge, Understanding, and Wisdom

    Last updated Sep 15, 2024 | Originally published Jan 18, 2024


    # Data, Information, Knowledge, Understanding, and Wisdom

    # Why does data matter?

    Before we discuss what data is and how it is used, we should strive to understand why data matters.

    The Data, Information, Knowledge, Understanding, and Wisdom hierarchy is a simple mental model useful in appreciating the role of data in management and innovation. This hierarchy was first formally developed by Russ Ackoff, a forerunner in the study of management information systems. Ackoff described the hierarchy in a 1988 address to the International Society for General Systems Research, and it was reproduced by the Journal of Applied Systems Analysis in 1989 (Ackoff, 1989). There have since been many interpretations and iterations of the hierarchy (usually in the form of “DIKW”). In this article, I present my own, striving to frame the hierarchy as pragmatically as possible. You will see how the DIKUW hierarchy is a fundamental paradigm underpinning how we manage and innovate with data.

    There are actually six levels to the hierarchy. It begins with phenomena: things that exist and are happening in the world. We turn phenomena into data by observing and capturing that observation. We turn data into information by adding context to it. We turn information into knowledge by applying the information to something. We turn knowledge into understanding by critiquing the knowledge: diagnosing problems with what we know and prescribing what we still need to learn. Finally, we turn understanding into wisdom by formalizing our understandings in the form of theories. In the sections that follow, we discuss each of these levels in more detail.

    # Phenomena

    Before we can have data, we must have something to represent with data. That something is phenomena: things that are happening in the world. In fact, the world itself is made of phenomena. Phenomena are the material, concrete stuff affecting and making up the world.1 When we observe phenomena, they become something we can think about: data. In other words, underlying all wisdom, understanding, knowledge, information, and data, are the phenomena we are trying to observe and capture.

    To make this more concrete, consider buying a coffee at a café. Almost certainly, that café has an information system of some kind supporting its business. When you purchase that coffee, the information system models at least one kind of phenomenon: money. So what kinds of data might be generated about that transaction? The value of the sale, the proportion going to taxes, and how the value of the sale adds to the café’s revenues for the day. If you have a membership or rewards card for the café, the information system also models a different phenomenon: you. It registers that you (or at least, someone who has your rewards card, or who knows your rewards number) made a purchase. It probably knows what that purchase was, and associates the kind of coffee you bought with its model of you. All of these phenomena are observed and potentially captured by the café.

    Reflect: what other kinds of phenomena are interacting with the café? (I.e., what else might people do while they are there?) Which of these might be interesting for the business of the café?

    # Data

    When we observe something about the world, and especially when we record that observation (in the form of a note, or an image, or a value in a spreadsheet or database), we capture phenomena in the form of data. When a café’s information system observes and captures records of you and your purchases, it is creating data to represent those phenomena. Data helps us ask the simplest of questions about phenomena, even if we didn’t observe them directly. What has happened? When and where did it happen? Who was involved? For instance, an analyst working for the café can use the records created by its information system to see how many coffees were purchased in the past month without having to be present to observe each purchase themselves. This is a simple and obvious idea, but it is incredibly powerful.

    You might note at this point that this observing and capturing need not depend on a computer system. Indeed, workers at the café could simply write down each purchase, its value, and the number on your rewards card. An analyst later could conceivably look back through written ledgers to re-observe your purchase. What computers do is make it much easier for someone to consume data: to use the data to do something.

    After all, as Palmer (2006) observes, “data is the new oil”. Technological and methodological innovations of the past several decades have turned data into an invaluable resource that many prize as the ultimate modern good. However, Palmer also notes data’s important caveat: it is not useful by itself. Just like crude oil, data needs to be refined in order to be useful. Computers help us to query (that is, to look through) and refine data in order to use it.

    # Information

    When we first review data, we add context. We relate pieces of data with one another, including metadata (data about data, such as how the data was captured) and our own perspective (such as why we’re reviewing the data to begin with, our assumptions about the data, or other things that we’re relating the data to in our minds).2 In doing so, we create information. Information is data imbued with meaning (Bellinger et al., 2004) — and information is useful.

    Information helps us ask basic questions about the present world. What is happening? When and where does it happen? Who is involved? Most importantly: what happens if we do this or that? As a result, we can combine and compare information to produce patterns (if that happens, then this happens) and to find outliers.

    Consider the café example again. A simple question the café might have is “what products are the least popular?” Since the café has been recording each purchase made, this question is answered with a simple query of the purchases data. An analyst using this data would be able to quickly return a ranked ordering of all products by volume. This information could be very useful for a café looking to simplify its operations or cut down on costs.
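
    To make this concrete, here is a minimal sketch in Python with pandas of the kind of query involved. The purchase records are invented; the point is simply that a small grouping query turns raw purchase data into a ranked ordering of products, that is, into information.

    ```python
    # A minimal sketch: turning purchase records (data) into a ranked ordering of
    # products by purchase volume (information). The records are invented.
    import pandas as pd

    purchases = pd.DataFrame([
        {"product": "latte", "price": 5.50},
        {"product": "latte", "price": 5.50},
        {"product": "drip coffee", "price": 3.00},
        {"product": "drip coffee", "price": 3.00},
        {"product": "drip coffee", "price": 3.00},
        {"product": "milk steamer", "price": 4.00},
    ])

    # Rank all products by how often they were purchased (least popular at the bottom).
    popularity = purchases.groupby("product").size().sort_values(ascending=False)
    print(popularity)
    ```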

    # Knowledge

    Knowledge is produced when we apply information, especially when we develop ways to apply information systematically (i.e., when we can create instructions for others to act on similar information in similar ways to produce similar results). For instance, when we find a pattern (if that happens, then this happens) we can say (if that happens again, we should expect this, and therefore do [something]). Or, if we do this, then that will happen. Knowledge therefore helps us to start asking the most important kinds of questions: questions about futures.3 What will happen? When and where will it happen? Who will be involved? What if we do this or that?

    With a ranked list of product purchases by volume, it would be easy for a café manager looking to cut costs to identify which products to remove from service. They can ask the question, “What would happen if we removed the three least popular products from our product lines?” and answer it with “It would have a negligible effect on revenues.”
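
    The manager’s what-if question can then be answered with an equally small calculation. Again, the product names and figures below are invented; the sketch simply estimates how much revenue the three least popular products account for.

    ```python
    # A minimal sketch of the manager's what-if question. Product names and figures
    # are invented: the point is to estimate the revenue share of the three least
    # popular products before deciding to cut them.
    import pandas as pd

    products = pd.DataFrame(
        {"purchases": [410, 330, 500, 95, 60, 22],
         "revenue":   [2255.0, 1815.0, 1500.0, 380.0, 250.0, 88.0]},
        index=["latte", "cappuccino", "drip coffee", "tea", "hot chocolate", "milk steamer"],
    )

    least_popular = products.nsmallest(3, "purchases")
    share = least_popular["revenue"].sum() / products["revenue"].sum()
    print(f"The three least popular products account for {share:.1%} of revenue.")
    ```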

    # Understanding

    Understanding is produced when we develop knowledge about knowledge. As we interact with knowledge, we begin to detect gaps, problems, and errors. For instance, if we expected this to happen, but that happened instead. Understanding is therefore diagnostic: it allows us to detect, identify, and investigate problems and errors. It is also prescriptive: it allows us to identify, specify, and theorize about what we now need to know (Ackoff, 1989).
    For instance, through understanding, we develop the best kinds of questions to ask about a given problem, the most valuable types of tests to try on a new idea before we implement it. We begin to know what we don’t know — and what we need to know.

    Understanding is therefore the difference between learning and memorizing. It is simple enough to memorize a set of instructions (i.e., to memorize knowledge). Some instructions are simple. How to tie a set of shoelaces is a good example. To solve the problem of tying your shoelaces, it is sufficient to have the knowledge of how to tie a set of shoelaces. You can then apply those exact same instructions to every set of laced shoes you will ever encounter. No further knowledge is necessary. Other instructions, however, are complicated: consider launching a rocket to geostationary orbit. The instructions to do so were quite tricky to figure out. As a civilization, we have gotten fairly good at launching rockets, but even now, some of our rocket launches nonetheless fail. In these complicated problems, memorization is insufficient. This is where the diagnostic and prescriptive role of understanding becomes important. To solve complicated problems, we must not only apply previously-memorized knowledge, but also learn how to seek new knowledge in order to make progress. Thus, we must be able to detect, identify, and investigate gaps and errors in our knowledge — and to specify and theorize about what kinds of knowledge we need to obtain in order to resolve those gaps and errors.

    To illustrate the value of understanding, return again to the hypothetical café. You may have thought that there are other kinds of questions to ask: who purchases those products? When, and why? Perhaps the least frequently purchased product is a milk steamer, bought by parents of young kids who are also buying a number of other products in each transaction. If the steamer were removed from the menu, maybe those parents would take all of their other purchases elsewhere. Thus, a system tracking not only products purchased but also who purchases them may be used by an analyst to generate higher-quality knowledge from more information. That analyst, however, needs to have an understanding of the phenomena of the domain — that is, the relationship between different kinds of customers and different kinds of products — and must use that understanding to critically question the knowledge they are seeking from the café’s information system.
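    A sketch of that richer analysis, assuming a hypothetical transaction-line table that also records a customer segment (all names and values here are illustrative):

    import pandas as pd

    # Hypothetical transaction lines: who bought what, in which transaction
    lines = pd.DataFrame({
        "transaction_id": [1, 1, 1, 2, 3, 3],
        "segment": ["parent", "parent", "parent", "student", "parent", "parent"],
        "product": ["milk steamer", "latte", "muffin", "espresso", "milk steamer", "latte"],
        "amount": [3.50, 5.25, 3.00, 3.25, 3.50, 5.25],
    })

    # For transactions that include the milk steamer, how much *other* spending rides along?
    steamer_txns = lines.loc[lines["product"] == "milk steamer", "transaction_id"].unique()
    attached = lines[lines["transaction_id"].isin(steamer_txns) & (lines["product"] != "milk steamer")]
    print("Spending attached to milk-steamer transactions:", attached["amount"].sum())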

    # Wisdom

    Ackoff (1989) notes that data, information, knowledge, and understanding help us increase efficiency. That is, given a particular predetermined goal, we use data, information, knowledge, and understanding to attain that goal, and to attain it more predictably and with fewer resources (time, money, or whatever). In other words, these are operational layers. With more phenomena, more data, more information, more knowledge, and more understanding, we can do more, better, faster, and more easily. However, how do we judge what we should be doing and how we should be doing it? Ackoff (1989) argues that effective judgment comes from wisdom: the development and application of values. He therefore places wisdom at the “top” of the DIKUW hierarchy. However, Ackoff (1989) does not describe where wisdom comes from nor how it is developed with respect to phenomena, data, information, knowledge, or understanding. For that, we turn to the discipline of design.

    To design is to designate; to mark or give status to something, in order to decide what is important about a given thing. In other words, deciding upon values is a design decision. Liz Sanders, a scholar of design and design research, argues that the transformation of data into information, knowledge, and understanding is an analytical process: that means it involves investigating and breaking down the things that we’ve observed (Sanders, 2015). Sanders explains that wisdom guides the analysis of data through the mechanism of theory: our higher-level explanations, predictions, and prescriptions of how the phenomena we are interested in work (Gregor, 2006). For instance, our theory of our café’s operations might include ideas like “more product lines mean more effort involved in making them” and “fewer purchases of a product indicate that a product is not a valuable offering”. Similarly, when we organize and make sense of data, information, knowledge, and understanding, we synthesize and generate new theory — and therefore build up our wisdom about the phenomena we are interested in. So, when the café’s managers apply the ideas mentioned immediately above to the information that milk steamers are low-frequency purchases, they may conclude with a prescription: cut milk steamers from the menu in order to reduce operational complexity and cut costs while having minimal impact on the café’s ability to provide value to its customers. So, that is where wisdom comes from: building up and formalizing our understandings about phenomena into theories that explain, predict, and prescribe those phenomena.

    # So why does data matter?

    Crucially, phenomena happen whether or not we observe them. A coffee company’s regular customers may start feeling less satisfied with their newly-purchased beans whether or not the company is seeking feedback on their latest roast. Similarly, data means nothing unless we add context to it. Worsening reviews of the coffee company’s latest roast become useful data when we combine them with the realization that something about the roasting process has changed. Likewise, information is useless unless we apply it. If the coffee company’s customers are liking the latest roast less, maybe their blend of coffee beans needs further fine-tuning. However, our knowledge might be wrong. Maybe it wasn’t the bean blend, but the roasting process, or a competitor’s latest light roast, or a change of season (coldbrew, anyone?). We must understand the problems we’re dealing with as deeply as possible — to recognize if they are even the right problems to solve. After all, if the coffee company’s regular customers like the latest roast less … but they are bringing in many new customers with the change in approach, maybe it is the right thing to do.

    The truth is that data matters, but only if we are collecting the right data. And even then, data doesn’t matter — not without information, knowledge, understanding, and wisdom.

    That, however, leads us to a more nuanced question: how do we know if we have the right data? In fact, in many contexts, it’s not a case of “right data” or “wrong data.” Instead, we think of data quality as a spectrum. Moreover, there is no one way of thinking about data quality. Therefore, we must consider different data qualities, or dimensions of data quality, and make judgements about which ones matter the most for whatever we’re trying to achieve.

    # From data quality to data qualities

    Wand and Wang (1996) surveyed a set of notable data quality dimensions, finding that the five most cited dimensions were accuracy, reliability, timeliness, relevance, and completeness. Data accuracy measures the degree to which data precisely captures the phenomena it is supposed to represent. For example, if a transaction processing system were to record a $4.95 transaction as $5, it would be imprecise. Reliability is related to accuracy — it refers to the degree to which data correctly represents the phenomena it was meant to capture. The $5 transaction previously mentioned is not necessarily incorrect, even if it is imprecise. Timeliness refers to the degree to which captured data is out of date. A count of a population — say, the number of polar bears living in the wild — is out of date as soon as a polar bear is born or dies. Relevance is relative to what the data is used for. A count of the population of polar bears living in captivity is largely irrelevant if a data user is wondering about the health of wild polar bear populations. Finally, completeness is the degree to which captured data fully represents a phenomenon. For instance, a transaction processing system may only capture money in and money out, but a more complete record of each transaction would include what was bought or sold, who bought it, the transaction type, and so on.
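    As a loose sketch of how a few of these dimensions might be checked in practice (the fields, values, and reference date below are all invented; real checks would depend entirely on the system at hand):

    import pandas as pd

    # Illustrative transaction records, with a "ground truth" column for comparison
    records = pd.DataFrame({
        "amount_recorded": [5.00, 3.25, 4.95],
        "amount_actual": [4.95, 3.25, 4.95],
        "recorded_at": pd.to_datetime(["2024-09-01", "2024-09-20", "2024-09-24"]),
        "buyer": ["alice", None, "carol"],
    })

    accuracy = (records["amount_recorded"] - records["amount_actual"]).abs().mean()   # mean size of error
    timeliness = (pd.Timestamp("2024-09-25") - records["recorded_at"]).dt.days.max()  # staleness, in days
    completeness = 1 - records["buyer"].isna().mean()                                 # fraction of buyers recorded
    print(accuracy, timeliness, completeness)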

    However, while these five dimensions are perhaps generally the most important for information systems design and use, Mahanti (2019) has demonstrated that there may be many more dimensions that matter to a given data producer or data consumer. She has identified a total of 30 dimensions of data quality (Mahanti, 2019, p. 7) that are worth considering for any information system. By definition, anyone using an information system will be collecting data and trying to use that data to produce information, knowledge, understanding, and possibly wisdom. So, they should ask themselves: which of these are the most important for my project? Then: how do I know if this data is good enough for my purposes? Finally: how might I make sure I improve my data on these dimensions in the future?

    # What about ChatGPT?

    The latest advancements in data capabilities — namely, ChatGPT and similar generative AI tools — are excellent examples underscoring the significance of the DIKUW hierarchy. These tools are perhaps the greatest example we have ever had of the power of data. Ask ChatGPT, Microsoft Copilot/Bing, or any of their competitors a question and you will generally get a sophisticated, conversational answer in response.

    These tools are probabilistic generators. This is how they work: given some input (a “prompt”), they return the most likely response to that input, based on the patterns found in their training data. Their responses are injected with a little bit of randomness (depending on their “temperature,” this can be more or less random) so that they rarely offer the exact same response to the exact same input. These tools are able to achieve this functionality because they are fundamentally built of data. Basically, researchers pointed very powerful computers at the vast amounts of data that now exist on the Internet and trained those computers to learn the patterns found in that data.
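    A toy sketch of what “temperature” does to that sampling step (the scores are made up, and real models work over vocabularies of tens of thousands of tokens, but the mechanism is the same in miniature):

    import numpy as np

    rng = np.random.default_rng(0)

    def sample_next_token(logits, temperature=1.0):
        """Sample a candidate index; lower temperature means more deterministic output."""
        scaled = np.asarray(logits, dtype=float) / max(temperature, 1e-8)
        probs = np.exp(scaled - scaled.max())
        probs /= probs.sum()
        return rng.choice(len(probs), p=probs)

    logits = [2.0, 1.5, 0.2]                           # made-up scores for three candidate tokens
    print(sample_next_token(logits, temperature=0.2))  # almost always picks token 0
    print(sample_next_token(logits, temperature=1.5))  # noticeably more random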

    Given a prompt, these tools can do wondrous things with data, transforming it into information and knowledge. However, these tools need understanding in the form of well-constructed prompts in order to produce the most useful information and knowledge. Moreover, they must be paired with a wise human in order to produce useful output, as they have no real concept of their own gaps and errors. Uncritical use of the output of these tools has had disastrous consequences for foolish (or at least unaware) users, as the tools will completely make up falsehoods while seeming absolutely confident that the output they’re producing is correct. (These fabrications are called “hallucinations.”)

    This is where the DIKUW hierarchy must be re-emphasized: the quality of output of these probabilistic generators varies greatly depending on the quality of input. Thus, users of ChatGPT and similar tools have learned that there are ways to artfully craft or even engineer prompts for these tools. We are now seeing the emergence of markets for high-quality prompts (e.g., https://promptbase.com/) and training for prompt engineering (e.g., https://github.blog/2023-07-17-prompt-engineering-guide-generative-ai-llms/). Working with “AI” has therefore become a modern skill that may grow in relevance and value as generative tools become more prominent.


    1. Note that this objectivity does not mean that things that are conventionally “subjective,” such as someone’s opinion about something, are not phenomena. Indeed, once someone has formed such an opinion, they have materialized it into a phenomenon that can be objectively observed (such as by a listener hearing the opinion) and captured as data. ↩︎

    2. This is particularly important in serendipity, as serendipitous discoveries are the result of both an observation and a valuable association of that observation with some other idea. ↩︎

    3. Bellinger, Castro, and Mills (2004) interpret Ackoff as saying that wisdom is the only dimension of the hierarchy concerned with the future (“Only the fifth category, wisdom, deals with the future because it incorporates vision and design”, para. 4). I disagree, however: Ackoff writes about how knowledge describes instruction, and instruction requires prediction, and therefore some expectation about how current actions will influence future results; similar for understanding. Moreover Bellinger et al. are inconsistent in this interpretation, as they later describe similar ideas about knowledge and prediction. ↩︎

  • 𖠫 The notion of data and its quality dimensions - Fox, Levitin, Redman - 1994

    Last updated Aug 16, 2024 | Originally published Aug 15, 2024

    # The notion of data and its quality dimensions - Fox, Levitin, Redman - 1994

    Fox, C., Levitin, A., & Redman, T. (1994). The notion of data and its quality dimensions. Information Processing & Management, 30(1), 9-19. https://doi.org/10.1016/0306-4573(94)90020-5

    Fox, Levitin, and Redman presented one of the earliest fundamental conceptualizations of data and data quality. They argued that existing definitions suffered from flaws in either linguistic or usefulness criteria, and instead defined data as follows: Data is any collection of data items (each a datum) that model the real world in terms of its entities, their attributes, and the values of those attributes, represented and recorded in some medium.

    They draw on Tsichritzis and Lochovsky’s work in Data Models, a 1982 book published by Prentice-Hall, where those authors defined datum “as a triple $\langle e, a, v \rangle$ where the value $v$ is selected from the domain of the attribute $a$ to represent that attribute’s value for the entity $e$” (Fox et al., 1994, p. 12, paragraph 6).
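    Rendered as a tiny data structure, the triple might look something like this (a sketch for illustration, not anything Fox et al. prescribe):

    from typing import NamedTuple

    class Datum(NamedTuple):
        """A datum as an <entity, attribute, value> triple, after Fox et al. (1994)."""
        entity: str
        attribute: str
        value: object

    # e.g., one recorded fact about a (hypothetical) café transaction
    d = Datum(entity="transaction-1042", attribute="amount", value=4.95)
    print(d)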

    Fox, Levitin, and Redman (1994) note that the definition allows us to examine three sets of quality issues: model quality, data quality, and representation/recording quality. The latter is mostly the concern of database design and maintenance, but the former two sets may apply to my work on serendipity, as adopting this perspective allows us to separate dimensions of data quality from dimensions of model quality. Moreover, as shown in their table 2, reproduced below, this allows us to separate measures of datum quality from measures of database quality, too.

    Table 2. Quality dimensions for data values. Reproduced from (Fox et al., 1994, p. 17).

    | Dimension | Target description | Typical datum measure | Typical database measure | Related notions |
    | --- | --- | --- | --- | --- |
    | Accuracy | Accurate or correct | Size of error | Fraction incorrect | Precision, reliability |
    | Currentness | Current | How far out-of-date | Fraction out-of-date | Age, timeliness |
    | Completeness | Complete | Y/N | Fraction incomplete | Duplication |
    | Consistency | Consistent | Y/N | Fraction inconsistent | Integrity |

  • ∎ The notion of data and its quality dimensions - Fox, Levitin, Redman - 1994 Reading Session 202408151117

    Published Aug 15, 2024

    Annotations of The notion of data and its quality dimensions - Fox, Levitin, Redman - 1994 from 20240815 at 11:17

    The rapid proliferation of computer-based information systems has introduced an army of new and unsophisticated users to computers. Because they are often less well-trained, such users tend to feed systems erroneous data. Furthermore, inexperienced users are less able to recognize and deal with erroneous data than experienced users, and therefore can be victimized by it. These problems drive system and process data quality improvements to reduce error rates in data input, data output, and data processing, and to make processes more tolerant of data errors. (p. 2)

    The authors here imply that there are dimensions of quality beyond data values, models, or representation: i.e., dataset qualities. Clearly this kind of quality — the authors offer “usefulness”, though they don’t define what that means — is an emergent, systemic property of data model, values, and representation qualities.

  • ∎ Data quality in information systems - Brodie - 1980 Reading Session 202408151100

    Published Aug 15, 2024

    Annotations of Data quality in information systems - Brodie - 1980 from 20240815 at 11:00

    Data quality is a measure of the extent to which a database accurately represents the essential properties of the intended application. Data quality has three distinct components: data reliability, logical (or semantic) integrity, and physical integrity. (Data quality in information systems - Brodie - 1980, p. 3)

  • ∎ Where Good Ideas Come From- The Natural History of Innovation - Johnson - 2011 Reading Session 202408141548

    Published Aug 14, 2024

    Annotations of Where Good Ideas Come From- The Natural History of Innovation - Johnson - 2011 from 20240814 at 15:48

    Johnson describes how Darwin’s fundamental theory of evolution did not appear all at once, but instead shows up in Darwin’s personal notebooks in different ways for years before he finally recognizes and crystallizes his thinking into a full theory:

    It is simply hard to pinpoint exactly when Darwin had the idea, because the idea didn’t arrive in a flash; it drifted into his consciousness over time, in waves (Johnson, 2011, p. 81)

    I recall experiencing similar “waves” as I developed a few of my successful contributions to scholarship. Ideas show up over and over again. There is no one point where an idea forms, except in the retrospective we tell ourselves about the idea.

    This suggests the importance of the recursive systemic view. Ideas really are made up of smaller ideas, and those made up of smaller ideas. Some ideas come directly from other thinkers, while others come from observation of the world, while others come from creative abduction, and still others come from editing and feedback. These components all interact in a system and any one small change produces perturbations in the form — but if an idea is powerful enough, it will emerge, one way or another (“good moves in a design space”).

    Ideas are powerful enough when they fit a niche in the broader idea ecosystem — that is, they can draw on the right untapped or uncompeted-for resources, and they solve the right problems.

    We can track the evolution of Darwin’s ideas with such precision because he adhered to a rigorous practice of maintaining notebooks where he quoted other sources, improvised new ideas, interrogated and dismissed false leads, drew diagrams, and generally let his mind roam on the page. We can see Darwin’s ideas evolve because on some basic level the notebook platform creates a cultivating space for his hunches; it is not that the notebook is a mere transcription of the ideas, which are happening offstage somewhere in Darwin’s mind. Darwin was constantly rereading his notes, discovering new implications. His ideas emerge as a kind of duet between the present-tense thinking brain and all those past observations recorded on paper. Somewhere in the middle of the Indian Ocean, a train of association compels him to revisit his notes on the fauna of the Galápagos archipelago from five months before. As he reads through his observations, a new thought begins to take shape in his mind, which provokes a whole new set of notes that will only make complete sense to Darwin two years later, after the Malthus episode. (Johnson, 2011, p. 83)

    This is a magnificent demonstration of a powerful knowledge innovation system and practice.

  • ∎ Where Good Ideas Come From- The Natural History of Innovation - Johnson - 2011 - Reading Session 202408121021

    Last updated Aug 12, 2024 | Originally published Aug 12, 2024

    Annotations of 𖠫 Where Good Ideas Come From- The Natural History of Innovation - Johnson - 2011 from 20240812 at 10:21

    Page 17

    both the city and the Web possess an undeniable track record at generating innovation.2 In the same way, the “myriad tiny architects” of Darwin’s coral reef create an environment where biological innovation can flourish. If we want to understand where good ideas come from, we have to put them in context. Darwin’s world-changing idea unfolded inside his brain, but think of all the environments and tools he needed to piece it together: a ship, an archipelago, a notebook, a library, a coral reef. Our thought shapes the spaces we inhabit, and our spaces return the favor. The argument of this book is that a series of shared properties and patterns recur again and again in unusually fertile environments. I have distilled them down into seven patterns, each one occupying a separate chapter. The more we embrace these patterns—in our private work habits and hobbies, in our office environments, in the design of new software tools—the better we will be at tapping our extraordinary capacity for innovative thinking.

    Do the city and the reef follow TAP? If so, does the same pattern apply to e.g. conduits collected by Darwin? What do TAP and Johnson’s book agree upon?

    Page 18

    In the language of complexity theory, these patterns of innovation and creativity are fractal: they reappear in recognizable form as you zoom in and out, from molecule to neuron to pixel to sidewalk. Whether you’re looking at the original innovations of carbon-based life, or the explosion of new software tools on the Web, the same shapes keep turning up. When life gets creative, it has a tendency to gravitate toward certain recurring patterns, whether those patterns are emergent and self-organizing, or whether they are deliberately crafted by human agents.

    Constructal flow must apply here as well.

    Page 20

    When we look at the history of innovation from the vantage point of the long zoom, what we find is that unusually generative environments display similar patterns of creativity at multiple scales simultaneously.

    So is there a “long zoom” view of data design?

    Page 102

    A recent experiment led by the German neuroscientist Ullrich Wagner demonstrates the potential for dream states to trigger new conceptual insights. In Wagner’s experiment, test subjects were assigned a tedious mathematical task that involved the repetitive transformation of eight digits into a different number. With practice, the test subjects grew steadily more efficient at completing the task. But Wagner’s puzzle had a hidden pattern to it, a rule that governed the numerical transformations. Once discovered, the pattern allowed the subjects to complete the test much faster, not unlike the surge of activity one gets at the end of a jigsaw puzzle when all the pieces suddenly fall into place. Wagner found that after an initial exposure to the numerical test, “sleeping on the problem” more than doubled the test subjects’ ability to discover the hidden rule. The mental recombinations of sleep helped them explore the full range of solutions to the puzzle, detecting patterns that they had failed to perceive in their initial training period.

    Page 105

    Thatcher and other researchers believe that the electric noise of the chaos mode allows the brain to experiment with new links between neurons that would otherwise fail to connect in more orderly settings. The phase-lock mode (the theory goes) is where the brain executes an established plan or habit. The chaos mode is where the brain assimilates new information, explores strategies for responding to a changed situation. In this sense, the chaos mode is a kind of background dreaming: a wash of noise that makes new connections possible. Even in our waking hours, it turns out, our brains gravitate toward the noise and chaos of dreams, 55 milliseconds at a time.

    Is the notion of phase locking aligned with the two systems of thinking fast and slow?

    Page 110

    The shower or stroll removes you from the task-based focus of modern life—paying bills, answering e-mail, helping kids with homework—and deposits you in a more associative state.

    One strategy for encouraging serendipitous ideas is to switch thinking modes. Again, constructal flow’s diffusion and infusion.

    Page 112

    While the creative walk can produce new serendipitous combinations of existing ideas in our heads, we can also cultivate serendipity in the way that we absorb new ideas from the outside world. Reading remains an unsurpassed vehicle for the transmission of interesting new ideas and perspectives. But those of us who aren’t scholars or involved in the publishing business are only able to block out time to read around the edges of our work schedule: listening to an audio book during the morning commute, or taking in a chapter after the kids are down. The problem with assimilating new ideas at the fringes of your daily routine is that the potential combinations are limited by the reach of your memory. If it takes you two weeks to finish a book, by the time you get to the next book, you’ve forgotten much of what was so interesting or provocative about the original one. You can immerse yourself in a single author’s perspective, but then it’s harder to create serendipitous collisions between the ideas of multiple authors.

    Here Johnson characterizes one of the key constraints on knowledge management and innovation in knowledge work.

    Page 118

    So far in the chapter, Johnson has discussed the necessity for disorder in the process of coming up with new, unexpected ideas that connect several seemingly unrelated concepts into a solution for a problem. He discusses a couple of the contributing factors for this process: (1) toggling between focused and unfocused states (e.g., by taking a break from a problem you’ve been working on), (2) collecting many ideas and keeping them available to one another (e.g., by storing them in PKM systems or by engaging in intense periods of conceptual diversity, such as a reading vacation [conferences are arguably a source of this]). At this point in the chapter he is shifting to questioning the role of the Internet in these factors of serendipity, reflecting on how the Web affects analog patterns of accidental discovery.

  • ∎ Where Good Ideas Come From- The Natural History of Innovation - Johnson - 2011 - Reading Session 202408121210

    Last updated Aug 12, 2024 | Originally published Aug 12, 2024

    Annotations of 𖠫 Where Good Ideas Come From- The Natural History of Innovation - Johnson - 2011 from 20240812 at 12:10

    One way to do this is to create an open database of hunches, the Web 2.0 version of the traditional suggestion box. A public hunch database makes every passing idea visible to everyone else in the organization, not just management. Other employees can comment or expand on those ideas, connecting them with their own hunches about new products or priorities

    Crowdsolving platforms are a modern version of this, but I’m not sure they’re proving the utility of the concept.

    Page 128

    Johnson concludes with two vaguely-described mechanisms for organizational serendipity: (1) a semantic similarity index of all the work everyone is doing, so that efforts on one project might show up to another tangentially-related but organizationally-distinct project; and (2) a crowdsolving platform for ideas. These are regrettably kind of weak. We now know much more about serendipity and its enablers and inhibitors, and they go far beyond this kind of shallow platform-based solution: instead they address culture and power.

    Page 221

    for the sake of clarity, let’s not blur the line between “individual” and “network” by admitting to the discussion the prior innovations that inspired or supported the new generation of ideas. Yes, it is important that Gutenberg borrowed the screw-press technology from the winemakers, but one cannot say that the printing press was a collective innovation the way, for example, the Internet clearly was. So Gutenberg and Berners-Lee get classified on the individual side of the spectrum.

    I don’t think I agree with this scoping of the data. It’s an asystemic approach — it requires a belief that later innovations can be separated from earlier ones, and it (perhaps more problematically) hero-worships the supposed inventors. This does not necessarily agree with e.g., ontological design, in which “good moves in design space” will likely be filled by some actor because of the needs and niches of the ecosystem.

    Page 231

    Why have so many good ideas flourished in the fourth quadrant, despite the lack of economic incentives?

    Another important piece missing from this analysis is the choice of “most significant inventions.” This relates to my earlier criticism: this view of the primacy and separability of a given invention erases all of the little requisite innovations that were necessary for one of these “major” innovations to be possible. This view is a bit reductive, then, because it fails to account for the feedback loops between innovations big and small.

    Page 232

    That deliberate inefficiency doesn’t exist in the fourth quadrant. No, these non-market, decentralized environments do not have immense paydays to motivate their participants. But their openness creates other, powerful opportunities for good ideas to flourish.

    It seems strange to conduct this analysis without considering the other motivating incentives that exist for these innovators/innovations. Many of the innovations listed are the work of scholars — and sure, maybe they weren’t trying to invent some doohickey for a patent that will give them a lifetime of royalties, but they needed to publish good ideas to be considered a prestigious scholar. It would be interesting to take this same analytical approach but consider this and other kinds of incentives as well.

    Page 241

    If nature has made any one thing less susceptible than all others of exclusive property, it is the action of the thinking power called an idea, which an individual may exclusively possess as long as he keeps it to himself; but the moment it is divulged, it forces itself into the possession of every one, and the receiver cannot dispossess himself of it. Its peculiar character, too, is that no one possesses the less, because every other possesses the whole of it. He who receives an idea from me, receives instruction himself without lessening mine; as he who lights his taper at mine, receives light without darkening me. That ideas should freely spread from one to another over the globe, for the moral and mutual instruction of man, and improvement of his condition, seems to have been peculiarly and benevolently designed by nature, when she made them, like fire, expansible over all space, without lessening their density in any point, and like the air in which we breathe, move, and have our physical being, incapable of confinement or exclusive appropriation. Inventions then cannot, in nature, be a subject of property.

    Page 246

    The patterns are simple, but followed together, they make for a whole that is wiser than the sum of its parts. Go for a walk; cultivate hunches; write everything down, but keep your folders messy; embrace serendipity; make generative mistakes; take on multiple hobbies; frequent coffeehouses and other liquid networks; follow the links; let others build on your ideas; borrow, recycle, reinvent. Build a tangled bank.

    It is remarkably telling that in this sentence — Johnson’s concluding call-to-action of this book — his advice for facilitating serendipity is simply to embrace it.

    It’s clear that we don’t know how to do this, yet. Not really.

  • ∎ Where Good Ideas Come From- The Natural History of Innovation - Johnson - 2011 - Reading Session 202408121058

    Last updated Aug 12, 2024 | Originally published Aug 12, 2024

    Annotations of 𖠫 Where Good Ideas Come From- The Natural History of Innovation - Johnson - 2011 from 20240812 at 10:58

    The secret to organizational inspiration is to build information networks that allow hunches to persist and disperse and recombine. Instead of cloistering your hunches in brainstorm sessions or R&D labs, create an environment where brainstorming is something that is constantly running in the background, throughout the organization, a collective version of the 20-percent-time concept that proved so successful for Google and 3M.

    A decent premise for a design principle.

    Page 127

    One way to do this is to create an open database of hunches, the Web 2.0 version of the traditional suggestion box. A public hunch database makes every passing idea visible to everyone else in the organization, not just management. Other employees can comment or expand on those ideas, connecting them with their own hunches about new products or priorities

    Crowdsolving platforms are a modern version of this, but I’m not sure they’re proving the utility of the concept.

    Page 128

    Johnson concludes with two vaguely-described mechanisms for organizational serendipity: (1) a semantic similarity index of all the work everyone is doing, so that efforts on one project might show up to another tangentially-related but organizationally-distinct project; and (2) a crowdsolving platform for ideas. These are regrettably kind of weak. We now know much more about serendipity and its enablers and inhibitors, and they go far beyond this kind of shallow platform-based solution: instead they address culture and power.

  • ∎ Where Good Ideas Come From- The Natural History of Innovation - Johnson - 2011 - Reading Session 202408121045

    Last updated Aug 12, 2024 | Originally published Aug 12, 2024

    Annotations of 𖠫 Where Good Ideas Come From- The Natural History of Innovation - Johnson - 2011 from 20240812 at 10:45

    Filters reduce serendipity

    This is too strong a claim. Certainly filters have some effect on the kinds of concepts someone can encounter, but that may simply mean that they are more available to other clouds of concepts. I don’t think anyone can say that there is a direct relationship between filtering and serendipity.

    Page 121

    it’s true that by the time you’ve entered something into the Google search box, you’re already invested in the topic. (This is why Web pioneer John Battelle calls it the “database of intentions.”)

    Thinking of database queries as declarations of intention is a really interesting framing.

    Page 123

    Google and Wikipedia give those passing hints something to attach to, a kind of information anchor that lets you settle down around a topic and explore the surrounding area. They turn hints and happy accidents into information. If the commonplace book tradition tells us that the best way to nurture hunches is to write everything down, the serendipity engine of the Web suggests a parallel directive: look everything up.

    Johnson argues that serendipity systems require that the serendipity-haver “look everything up”; chase every interest to fill out more connections.

  • Set up a hyperkey and globe key on iPadOS with a QMK keyboard

    Last updated Jul 30, 2024 | Originally published Jul 30, 2024

    I really wanted the experience this person is having, but with an iPad:

    So I recently picked up a Nuphy Air75 v2 keyboard (Wisteria switches) with the Nufolio case/stand.

    For a quick review of the experience: it’s pretty great! The keyboard itself sounds and feels excellent — much better than the iPad Pro’s Magic Keyboard, which I used begrudgingly with my previous iPad Pro. It has all kinds of great features that I’d consider basic at this point:

    • multi-device pairing (so I switch between using it with my phone and my iPad, a feature that has already been helpful when I needed to type a message on an app I don’t have installed on the iPad)
    • built-in kickstand feet at adjustable levels
    • F-row keys
    • customizable backlighting

    But most importantly, the iPad doesn’t need to be attached to it for it to work. Thus I can sketch and lay out diagrams with typing, touch, and stylus, all at once, while the iPad is in my hands or lying flat. It’s quite nice.

    However, the keyboard’s layout is not what I’m used to. On the Mac, I’ve conventionally customized my keyboard with BetterTouchTool and other third-party services. Unfortunately, the iPad is a “console computer” and iPadOS isn’t a real operating system. So, iPadOS doesn’t allow developers to develop cross-system customization tools, and iPadOS only offers a very limited set of keyboard customization options. You can use Settings → General → Keyboard to change a few modifier keys around, but that’s about it unless you want to turn on full “accessibility mode”-customization. (Accessibility → Keyboard and Typing → Hardware Keyboard → Full Keyboard Access, IIRC.) This gives the keyboard all kinds of power over what you can select and act on in the OS, but it’s a bit intrusive and not necessary for my use-cases. Regardless, even the “Full Keyboard Access” options fail to let you do the really fancy keyboard customizations I’ve come to rely upon, such as setting up capslock as a hyperkey: changing the capslock key to function as esc when tapped and shift+alt+ctrl+cmd when held.

    Fortunately, the Air75, like most modern mechanical keyboards, is a QMK keyboard. That means that the keyboard itself is customizable on a firmware level: you can use tools like the VIA keyboard customization console to fundamentally change what the keys on the keyboard do. So, that’s what I did! Now, even on iPadOS, tapping capslock enters esc, and holding capslock gives me the shift+ctrl+alt+cmd modifiers all at once. This functionality is now just how my particular keyboard works, so it will work the same way on any device I connect the keyboard to, no operating system-level customizations or third-party services (such as BetterTouchTool or Hyperkey) required.

    Here’s a quick guide for you to do it, too (provided without warranties, guarantees, or support):

    • Purchase a QMK VIA-compatible keyboard
    • Follow the manufacturer’s instructions to set up the keyboard to work with VIA, if necessary (Nuphy’s instructions are here)
    • Open VIA and connect your keyboard (again following the manufacturer’s/VIA’s/QMK’s instructions to make sure it’s connected and authorized properly)
    • In VIA’s configuration tab, select the capslock key
    • In the customization options available at the bottom of the screen, choose the “Special” tab, then select ANY
    • Enter MT(MOD_HYPR, KC_ESC).1 This translates to:
      • MT: Modifier-tap, as in “act as the modifiers I specify when held, enter the keycode I specify when tapped”
      • MOD_HYPR: shift+control+alt+GUI, where “GUI” means cmd on macOS and win on Windows and… something on Linux, presumably (I avoid the warren of rabbitholes that is Linux for reasons that should be obvious if you’re reading this post)
      • KC_ESC: the esc key.

    Now, what about the “Globe” key? It has become increasingly useful on iPadOS in recent years, but for complicated reasons, it is currently not possible to add a globe key to a keyboard via VIA/QMK. Fortunately, iPadOS allows you to modify your modifier keys (as mentioned before: General → Keyboard → Hardware Keyboard → Modifier Keys). We can remap capslock to the globe key! Never mind that you just got rid of the capslock key: just use VIA to place capslock elsewhere, e.g., on your now-redundant escape key. Then, change your capslock to globe on your iPad, and you now have a globe key on your keyboard, too.

    Here’s my final layout:


    1. I think you could actually use HYPR_T(KC_ESC) here, which is basically a shortcut to what I’ve done, but I got MT(MOD_HYPR, KC_ESC) working so I decided not to try to experiment further. ↩︎

  • Softerware - techniques as technology

    Last updated Sep 15, 2023 | Originally published Sep 8, 2023

    As we develop our techniques and practices in knowledge innovation, we tend to find certain workflow patterns that we do over and over again. Knowledge workers are increasingly finding ways to augment these patterns with automations, macros, shortcuts, and other such tool add-ons. In the process of developing and refining these patterns with automation, we are writing “softerware”. Softerware is the layer of practices and protocols between us — users — and the tools we are using to achieve our goals.

    For instance, my approach to literature review is basically a semi-systematic literature review (Okoli, 2015). When I have a research interest that I’ve not explored in detail before, I begin by searching the same several databases with the same search techniques, opening each potentially-interesting item in a new tab until I’ve reached some point of saturation (e.g., I am no longer finding interesting-looking items relevant to my interest at the time). Then, I’ll go through each tab, screening each article more critically. If an article passes my screen, I’ll then add it to my collection of literature on the subject by saving it (and its metadata) with Zotero, and finally I import the newly-collected items into Bookends.

    There’s a lot of hand-waving there, but the details don’t really matter. What I’ve highlighted in the above description of my workflow are the elements of softerware: the events where what I do depends especially on how I do it. These events are important because, over time, that causal relationship runs in both directions: I change how I do something, and that influences what I do.

    Another important feature of softerware is that it tends to be unpublished. These workflows are crafted in private, perhaps not explicitly or even intentionally by the worker. So the best softerware in the world may not be known to anyone but the person who made it!

  • Note-taking apps (and practices) do make us smarter

    Last updated Aug 26, 2023 | Originally published Aug 26, 2023

    Platformer’s Casey Newton published an interesting piece on note-taking apps yesterday: Why note-taking apps won’t make us smarter.

    It’s a fairly rich piece in that it clearly lays out a number of challenges experienced by people participating in knowledge management and innovation. These challenges — such as the fundamental idea that these tools should be designed to help us think, not just to collect things — are important.

    However, I have to disagree with the headline.

    Does a hammer1 make someone stronger?

    No.

    But you can do a lot more with a hammer than you can do with your fists.

    Same goes for note-taking apps.2

    These are simply tools3 that offer us different ways to work with the material of our thoughts — our notes! — to shape them into whatever we want.

    But, as many have described already, it’s how we use the tool that matters.

    What won’t make us smarter is to do what Casey describes in the article:

    I waited for the insights to come.

    And waited. And waited.

    Marginalia

    This was originally published in a Mac Power Users forum discussion.


    1. …There will come a day when my hammer metaphor is bent so far that it breaks, but today is not that day. ↩︎

    2. And the same goes for our note-taking practices, which are often overlooked in articles like the OP but, as many have already discussed here, are the thing that actually matters. Latour and co. had it right. It’s not the person, and it’s not the tool, it’s person + tool. Or: we shape our tools, and they shape us. ↩︎

    3. Or thinking environments, even. ↩︎

  • AI is the new plastic

    Last updated Jul 18, 2023 | Originally published Jul 18, 2023

    Data was the new oil, and now AI is the new plastic.

    From user yabones on HN, discussing media companies’ blatant strategies for using “AI”-based text generators to spam content for Google Search Engine Optimization.

  • Permissionless integration

    Last updated Jul 3, 2023 | Originally published Jul 3, 2023

    In a recent blog post, kepano writes about the power of files for enabling users to have access to and use of their data in the long-term:

    File over app is a philosophy: if you want to create digital artifacts that last, they must be files you can control, in formats that are easy to retrieve and read. Use tools that give you this freedom.

    File over app is an appeal to tool makers: accept that all software is ephemeral, and give people ownership over their data.

    This reminded me of a related insight I had about files over apps a few years ago: the friction-free power of permissionless integration.

    User data should be like a piece of wood on a workbench: you can pick up hammers, drills, screwdrivers, nails, paint, saws, and all kinds of other tools and materials and make that wood into what you want. No special access or permission is required to cut or sand or shape that block of wood. You just pick the right tool for the job and do what you want. It’s your wood, your workbench, and your tools.

    Digital tools should be built so that users can work with their data in the same fashion. Apps should be able to interact with one another to help users shape and learn from their data — their notes, models, drawings, spreadsheets, or whatever — without needing special interfaces to do so.

    This is possible with files. Using files (especially files with open, standard file formats) removes the need to develop special ways of working with user data.

    On the other hand, app-specific data structures create friction and lock-in. To read and change your data in one of these apps, you need to deal with exporting and importing, or only use tools that have been custom-designed to work nicely together (i.e., via an API). Use one of these tools to create and save your data and suddenly that data can only be shaped by a limited selection of other tools.

    I’m sorry, your subscription for this pen has expired. Please use another Ink Pro-compatible pen or resubscribe for just $3/month per pen (billed annually).

    This creates some ferocious friction. Imagine picking up a piece of paper with your latest grocery list on it. You go to add “Bananas” to that list … only you don’t have the pen you first wrote the list with, and none of your other pens will work with that sheet of paper. Then, when you find the original pen, your monthly subscription to it has expired, and so anything you wrote with that pen is now read-only.

    That’s a scary thing. Remember, we shape our tools, and our tools shape us.

    Being shaped by tools you haven’t shaped is not something anyone should want.

  • 𖠫 Plotting Your Scenarios - Ogilvy and Schwartz - 1998

    Published Jun 16, 2023

    # Plotting Your Scenarios - Ogilvy & Schwartz - 1998

    In this chapter of Learning from the Future, Ogilvy and Schwartz present a classic technique in scenario planning: using critical uncertainties.

    Ogilvy, J., & Schwartz, P. (1998). Plotting Your Scenarios. In L. Fahey & R. M. Randall (Eds.), Learning from the future: competitive foresight scenarios. Wiley. https://www.wiley.com/en-us/Learning+from+the+Future%3A+Competitive+Foresight+Scenarios+-p-9780471303527

  • 𖠫 Candy - 2013 - Time Machine Reverse Archaeology

    Published Jun 16, 2023

    # Candy - 2013 - Time Machine Reverse Archaeology

    Candy, S. (2013). Time Machine / Reverse Archaeology. In (pp. 28-30). PCA Press.

    # Annotations

    
    // Dataview JS: list this source's annotation notes and their status
    let summaryPages = dv.pages('"Notes"')
        .where(p => p.file.name.includes("∎ Candy - 2013 - Time Machine Reverse Archaeology"))
        .map(p => [p.file.link, p.annotation_status])

    dv.table(["Annotation Summary", "Status"], summaryPages)
    
  • ∎ The Serendipity of Streams - Reading Session 202306081350

    Last updated Jun 8, 2023 | Originally published Jun 8, 2023

    The Serendipity of Streams

    A neat article about the structure of (digital) streams of information and their propensity for serendipity and innovation.

    A stream is simply a life context formed by all the information flowing towards you via a set of trusted connections — to free people, ideas and resources — from multiple networks.

    What makes streams ideal contexts for open-ended innovation through tinkering is that they constantly present unrelated people, ideas and resources in unexpected juxtapositions. This happens because streams emerge as the intersection of multiple networks.

    This means each new piece of information in a stream is viewed against a backdrop of overlapping, non-exclusive contexts, and a plurality of unrelated goals. At the same time, your own actions are being viewed by others in multiple unrelated ways.

    As a result of such unexpected juxtapositions, you might “solve” problems you didn’t realize existed and do things that nobody realized were worth doing. For example, seeing a particular college friend and a particular coworker in the same stream might suggest a possibility for a high-value introduction: a small act of social bricolage. Because you are seen by many others from different perspectives, you might find people solving problems for you without any effort on your part. A common experience on Twitter, for example, is a Twitter-only friend tweeting an obscure but important news item, which you might otherwise have missed, just for your benefit.


    [In a stream, t]he most interesting place to be is usually the very edge, rather than the innermost sanctums.

    Not sure I agree with this. The author is binding a bunch of factors into “interesting,” but the truth is that there are different kinds of power here, and whether you want to be in the center or at the edge depends on what you’re trying to do.

  • On serendipity and knowledge

    Last updated May 30, 2023 | Originally published May 30, 2023

    A great debate in the philosophy of knowledge (where knowledge is defined as “justified true beliefs”) is known as the “Gettier problems.” The debate is this: if you think you know something, and that something turns out to be true, but not for the reasons you thought … does it count as knowledge?

    I tend to agree with the pragmatic view of Gettier problems. Basically, the only thing that matters is whether used knowledge is fruitful for the reasons that the knowledge was justified, true, and believed.

    This has implications for serendipity. In serendipitious observations, the knowledge we generate was not necessarily justified or believed a priori. Only in retrospect does the “knowledge” become useful.

  • A PopClip extension for highlighting text in Obsidian

    Last updated Mar 24, 2023 | Originally published Mar 24, 2023

    A simple extension for PopClip that will present an “insert highlight” icon when you select text in Obsidian.

    
    #popclip 
    name: Highlight
    required apps: [md.obsidian]
    requirements: [text, cut]
    actions:
    - title: Highlight # note: actions have a `title`, not a `name`
      icon: iconify:ant-design:highlight-twotone
      javascript: popclip.pasteText('==' + popclip.input.text + '==')
    
  • Theory of Systemic Change and Action

    Last updated Mar 7, 2023 | Originally published Mar 7, 2023

    Theories of Change are one of the fundamental tools of changemakers and program evaluation (Mackinnon, 2006). However, when addressing wicked problems (Rittel & Webber, 1973), theories of change are too reductive and linear to properly account for the systemic phenomena, structures, and dynamics that perpetuate the issues we’re trying to address (Murphy & Jones, 2020).

    Theories of Systemic Change and Action (ToSCA) are a systemic design tool that combine theories of change with systemic understanding. The result is a theory of change that is useful for understanding, communicating, and evaluating systemic change projects.

    Here’s a rough guide to develop a ToSCA:

    1. Model the system (e.g., with causal loop diagrams; Kim, 1992).
    2. Develop systemic strategies from the model.
    3. Reorganize the modelled phenomena. From left to right:
      1. Capability building and resource mobilization for the initiative (Inputs)
      2. Interventional activities the initiative will take on (Activities)
      3. Immediate outputs of those activities (Outputs)
      4. Results of those outputs on the overall system (Outcomes)
      5. Downstream effects of those outcomes on higher-system structures (Impacts)
    4. Reiterate on step 3 as necessary.

    The resulting diagram will look somewhat like an iceberg model (Stroh, 2015, p. 46) on its side: visible events and behaviour are on the left, while the actual patterns and structures in the system fall to the right.

    The ToSCA can then be simplified as necessary to suit different needs. For instance, if presenting the model briefly to a potential funder, you may want to collapse major feedback loops into one element on the model with a “loop” icon. This way you can still show inputs and outputs on that loop while obscuring the complexity within it for the purposes of the presentation.

  • A Case Study of Theories of Systemic Change and Action — The Ecotrust Canada Home-Lands initiative

    Last updated Mar 7, 2023 | Originally published Mar 7, 2023

    In this presentation, we reported on a case study of the Ecotrust Canada Home-Lands initiative. Lewis and I worked with Ecotrust Canada to understand the challenges they were addressing from a systemic design lens and, using that approach, to develop a theory of systemic change and action for the initiative.

    An interesting development in the work was the development of a novel systemic evaluation technique: resonance and dissonance tests. The tests were designed as a way of testing our understanding of the system without interrupting or intruding on the processes of the initiative. The general idea behind resonance and dissonance tests is to identify a set of phenomena in your understanding of the system and to search for disconfirming evidence that those phenomena are complete and accurate. So, for instance, if you think a key phenomenon in the system is “community distrust of bureaucracy”, look for examples of the community trusting bureaucracy. If you can’t find any, it increases the integrity of the theory you’ve created.

  • Systemic Evaluation

    Last updated Mar 7, 2023 | Originally published Mar 7, 2023

    Systemic evaluation is the developmental evaluation (Guijt et al., 2012) of systemic change.

    Techniques for systemic evaluation combine conventional principles and tools of developmental evaluation with concepts from systemic design. These techniques provide changemakers with the ability to assess the accuracy and completeness of their theories of systemic change and action (Murphy & Jones, 2020). They also allow evaluators to examine the progress of systemic strategies (Murphy et al., 2021).

  • Leverage theory

    Last updated Mar 7, 2023 | Originally published Feb 24, 2023

    We seek leverage to find the best ways of making change.

    Leverage points are places in systems where a little effort yields a big effect (Meadows, 1997). They are also ideas that help us grab on to strategic ways forward when we’re working in complexity (Klein & Wolf, 1998).

    Acting on leverage points may accelerate systemic change towards progress and reform, but acting on the wrong ones may instead accelerate systemic change towards regression and deformity. Well-designed leverage strategies may be catalyzing or even transformative, but poorly designed ones may merely be futile (figure 1).

    One way of finding leverage points is to think through your system with reference to Meadows’s (1997) 12 types:

    Table 1. Twelve types of leverage points, in order of increasing power (adapted from Meadows, 1997).

    | Twelve types of leverage points, in order of increasing power | Example |
    | --- | --- |
    | 12. Constants, parameters, numbers (such as subsidies, taxes, standards) | Wages, interest rates |
    | 11. The sizes of buffers and other stabilizing stocks, relative to their flows | Current levels of debt/assets |
    | 10. The structure of material stocks and flows (such as transport networks, population age structures) | An individual’s financial structure (e.g., fixed costs and incomes) |
    | 9. The lengths of delays, relative to the rate of system change | How long it takes to find a higher-paying job |
    | 8. The strength of negative feedback loops, relative to the impacts they are trying to correct against | Rising costs of living vs. fixed income |
    | 7. The gain around driving positive feedback loops | A recession causing reduced spending |
    | 6. The structure of information flows (who does and does not have access to what kinds of information) | How aware you are of impending recession/future rising costs |
    | 5. The rules of the system (such as incentives, punishments, constraints) | Who suffers as a result of poorly-managed recessions |
    | 4. The power to add, change, evolve, or self-organize system structure | Central banks, Ministries of Finance |
    | 3. The goals of the system | GDP growth |
    | 2. The mindset or paradigm out of which the system—its goals, structure, rules, delays, parameters—arises | Growth above all |
    | 1. The power to transcend paradigms | Sustainable development, flourishing |

    Another approach, which may be complementary to the above, is to model the system as a causal loop diagram (e.g., Kim, 1992) and then to conduct leverage analysis (Murphy & Jones, 2020) on the model.

    An understanding of leverage in a system allows us to generate systemic strategies (Murphy & Jones, 2020). These strategies can also be adapted into Theories of Systemic Change (Murphy & Jones, 2020).

    # Background

    Donella Meadows (1997) popularized the idea of leverage in systemic change with her essay “Leverage Points: Places to Intervene in a System.” She proposed a typology of phenomena in a system, suggesting that acting on certain types of phenomena is higher-leverage than acting on others.

    In an article published in the Contexts journal of systemic design, I challenged Meadows’s (1997) paradigm, proposing a few other possible ways of viewing leverage. My aim was to link the search for leverage directly to the design of powerful strategies for systemic change, and to propose a few ways forward in advancing our understanding of leverage in complex systems.

  • Using leverage analysis for systemic strategy

    Last updated Mar 7, 2023 | Originally published Jun 21, 2020

    A systems map represents your current mental model of how the system in question works.

    Leverage analysis examines the patterns of connection between phenomena (using algorithms adapted from social network analysis and graph theory) in order to present relative rankings of the phenomena of the system.
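
    As a rough illustration of what those relative rankings can look like, the sketch below ranks the phenomena of a toy map using two common network-centrality measures (degree and betweenness), in Python with networkx. The map is invented, and these particular measures stand in for, rather than reproduce, the exact procedure described in Murphy & Jones (2020).

    ```python
    # A minimal sketch: rank the phenomena of a toy systems map using two common
    # network-centrality measures. The map itself is invented for illustration.
    import networkx as nx

    systems_map = nx.DiGraph([
        ("interest rates", "cost of living"),
        ("cost of living", "household stress"),
        ("household stress", "job searching"),
        ("job searching", "income"),
        ("income", "household stress"),
        ("income", "cost of living"),
    ])

    # Degree centrality: how connected a phenomenon is overall.
    degree = nx.degree_centrality(systems_map)
    # Betweenness centrality: how often a phenomenon sits on paths between other
    # phenomena (a rough proxy for bottleneck-like phenomena).
    betweenness = nx.betweenness_centrality(systems_map)

    for name, scores in (("degree", degree), ("betweenness", betweenness)):
        ranked = sorted(scores.items(), key=lambda item: item[1], reverse=True)
        print(f"Top phenomena by {name} centrality:")
        for phenomenon, score in ranked[:3]:
            print(f"  {phenomenon}: {score:.2f}")
    ```

    In practice, rankings like these are prompts for the questions below, not answers in themselves.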

    These rankings depend entirely on the structure of the map: every phenomenon is treated as equal, and every connection is treated as equal. It is theoretically possible to encode the degree to which one phenomenon influences another in strict mathematical terms and formulae. In turn, we could represent the map as a system dynamics model and use it to simulate the behaviour of the system. However, this is usually impractical, especially with imprecisely understood or hard-to-quantify concepts (e.g., what exactly is the rate of change in wildlife due to climate change, or how exactly does culture influence conspicuous consumption?).

    For this reason, using leverage analysis is a fuzzy procedure: it depends on your intuition. Fortunately, the goal of leverage analysis is not to inductively estimate how the system will change, nor to deductively falsify hypotheses about the system. Instead, using leverage analysis for strategic planning involves abductive logic: the generation of creative, useful conclusions from a set of observations.

    The goal here is to look at the model as it is rendered and to think creatively about strategic opportunities. Broadly, this means asking several questions:

    • “What is missing?”
      • If there is a major gap in the logic of the model, it means that the associated phenomena haven’t been adequately discussed in this process. Why is that? What might it mean for strategic planning?
    • “What must be true?”
      • If this is how the system currently works, what must be true about how it should work?
    • “Where do we work?”
      • Based on your organization’s strategic capabilities and advantages, what phenomena do you hold influence over? How do the effects you have on the system relate to these phenomena?
    • “What do we aim to influence?”
      • In other words, what phenomena do you really want to change? In what way should they change?

    These questions can be answered via the following process.

    # Developing Systemic Theories of Change

    The systems map represents a kind of high-complexity theory of change: it describes how all of these phenomena interlock and respond to one another. We can therefore use leverage analysis to weave systemic theories of action:

    1. Identify the goal phenomena. What do we want to influence? What’s the ultimate impact we aim to have?
    2. Identify the opportunities within our control. What phenomena are we already influencing? What could we be influencing without developing a lot of new capacity?
    3. “Walk” the paths on the map between your chosen opportunities, any possible high-leverage phenomena, and your goals. As you do:
      1. Identify any key strategic options along the path. What kinds of activities or programs could you engage in to influence these phenomena in the right way?
      2. Identify any feedback loops. How do these paths grow, shrink, or maintain balance over time?

    The chains of phenomena (and any loops they connect with) that result from the three steps above are the seeds of systemic strategy. Use them to identify key intervention points for programming (e.g., how might you take advantage of high-leverage phenomena? how might you address bottlenecks?), to select signals for monitoring and evaluation, and to communicate your theory of change/theory of action to others.
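
    The “walking” step can also be supported computationally. The sketch below, assuming the same kind of networkx map as above, enumerates the simple paths from a hypothetical intervention phenomenon to a hypothetical goal phenomenon and flags the feedback loops each path touches. All of the node names are invented.

    ```python
    # A minimal sketch: enumerate candidate strategy paths from an intervention
    # phenomenon to a goal phenomenon, and flag feedback loops that touch them.
    # The map and all node names are invented for illustration.
    import networkx as nx

    systems_map = nx.DiGraph([
        ("job training", "skills"),
        ("skills", "income"),
        ("income", "savings"),
        ("savings", "financial resilience"),
        ("income", "skills"),                # loop: income funds further upskilling
        ("financial resilience", "income"),  # loop: resilience protects income
    ])

    intervention = "job training"        # something we can influence directly
    goal = "financial resilience"        # the ultimate impact we aim for

    loops = [set(cycle) for cycle in nx.simple_cycles(systems_map)]

    for path in nx.all_simple_paths(systems_map, intervention, goal):
        touched = [loop for loop in loops if loop & set(path)]
        print(" -> ".join(path))
        print(f"  feedback loops touched: {len(touched)}")
    ```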

  • Systemic strategy

    Last updated Mar 7, 2023 | Originally published Mar 6, 2023

    Systemic strategies use system phenomena, structure, and dynamics to help changemakers achieve their goals. These goals may simply be some specific outcome or objective, or they may include systemic change.

    One approach to designing systemic strategies is:

    1. map the system
    2. analyze the system for features of leverage, possibly using leverage analysis
    3. identify any “goal” phenomena: the events or behaviours you seek to change
    4. identify any “intervention” phenomena: things you have direct influence over
    5. “walk” from the interventions to your goals to identify a theory of change, incorporating the features of leverage you find along the way

    Each pathway you walk forms a strategy tree.

    Are there other interventions that lead to the same goal? These are different roots for the same overall strategy.

    Strategy trees can be combined into a strategy “forest”. A strategy forest is a collection of paths from interventions to goals in the system. Strategy forests can be assessed for different qualities to gauge which strategies an initiative should pursue.

    1. Different strategies that share common interventions may be the easiest to invest in and implement.
    2. Combinations of strategies that are self-perpetuating (e.g., that contain feedback loops that will innately drive their success) may be more valuable to pursue.
    3. These forests can also be tested (e.g., with wind-tunneling; van der Heijden, 1997, p. 23) to identify the best combinations of strategies to follow.

    Once strategies have been selected, identify the capabilities or resources that need to be invested in/mobilized in order to effectively target the chosen interventions, and set up systemic evaluation processes to continually test the completeness and accuracy of your strategic theories and to assess progress towards achieving the strategies’ goals.
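
    As a rough sketch of what assessing a strategy forest might look like, the code below collects candidate paths from each intervention to each goal and scores them on two of the qualities listed above: how many other paths share their intervention, and whether they touch a feedback loop (a simple stand-in for being self-perpetuating). The map, interventions, goals, and scoring are invented simplifications, not the full assessment process described here.

    ```python
    # A minimal sketch: assemble a strategy "forest" (paths from interventions to
    # goals) and compare strategies on shared interventions and feedback loops.
    # The map, interventions, and goals are invented for illustration.
    from collections import Counter

    import networkx as nx

    systems_map = nx.DiGraph([
        ("funding outreach", "program reach"),
        ("training volunteers", "program reach"),
        ("program reach", "community trust"),
        ("community trust", "program reach"),   # feedback loop
        ("program reach", "policy attention"),
        ("policy attention", "policy change"),
        ("community trust", "policy change"),
    ])

    interventions = ["funding outreach", "training volunteers"]
    goals = ["policy change"]
    loops = [set(cycle) for cycle in nx.simple_cycles(systems_map)]

    # Each strategy in the forest is a path from an intervention to a goal.
    forest = [
        (intervention, goal, path)
        for intervention in interventions
        for goal in goals
        for path in nx.all_simple_paths(systems_map, intervention, goal)
    ]

    # 1. Strategies sharing interventions may be cheaper to invest in together.
    shared = Counter(intervention for intervention, _, _ in forest)
    # 2. Strategies touching feedback loops may be self-perpetuating.
    for intervention, goal, path in forest:
        self_perpetuating = any(loop & set(path) for loop in loops)
        print(" -> ".join(path))
        print(f"  shares its intervention with {shared[intervention] - 1} other path(s)")
        print(f"  touches a feedback loop: {self_perpetuating}")
    ```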

  • Towards a theory of leverage for strategic systemic change

    Last updated Feb 24, 2023 | Originally published Feb 24, 2023

    My article “Leverage for Systemic Change” was recently published in the inaugural edition of Contexts, from the Systemic Design Association.

    The article ultimately proposes a few key directions for a research agenda on leverage in systemic design (see the table below).

    Table 1. A research agenda for leverage theory in systemic design

    | Research area | Research questions | Existing research | Possible studies | Possible contributions |
    | --- | --- | --- | --- | --- |
    | Dimensions of leverage | Is Meadows’s (1997) typology complete?<br>What other features of the “physics” of systemic change might matter? | System characteristics (Abson et al., 2017)<br>Conditions for systemic change (Kania, Kramer, & Senge, 2018)<br>Other types of phenomena (e.g., bottlenecks, signals; Murphy & Jones, 2020)<br>Relative leverage: chaining leverage points (Fischer & Riechers, 2019)<br>Relative leverage: the context of the changemaker (Klein & Wolf, 1998)<br>Recursive leverage | A systematic literature review (Okoli & Schabram, 2010) of leverage points, especially using forward citations (Haddaway et al., 2022) from Meadows (1997) | Understanding the nature of leverage and other mechanisms of change potential in systemic change |
    | Methods for leverage | What methodologies are best to identify and select leverage points?<br>What kinds of evidence will help validate leverage?<br>How might systemic designers design theories of change (Gregor & Jones, 2007) for leverage theories?<br>How might systemic designers limit indeterminism (Lukyanenko & Parsons, 2020) in leverage theories? | Meadows’s (1997) typology’s order of effectiveness<br>Leverage analysis (Murphy & Jones, 2020)<br>Assessing potential for change (Birney, 2021) | Surveying practitioners in systemic design on how they identify, assess, and address leverage points to identify common habits and best practices | How to identify phenomena useful for leverage<br>How to evaluate and compare possible leverage points in the analysis phase<br>How to evaluate the effectiveness of chosen leverage points with evidence gathered from implementations |
    | Strategy with leverage | How is leverage best used in developing strategic plans for systemic change?<br>How are leverage-based strategies best presented and communicated?<br>How are leverage-based strategies best evaluated and measured? | Systemic strategy (Murphy & Jones, 2021)<br>The epistemic benefits of a leverage points perspective (Fischer & Riechers, 2019)<br>Identifying conditions for systemic change (Kania et al., 2018)<br>Relative leverage: chaining leverage points (Fischer & Riechers, 2019)<br>Relative leverage: the context of the changemaker (Klein & Wolf, 1998) | “Systemic change labs” tracing and comparing the impact of interventions using different kinds of leverage | How to use leverage to develop better strategies for systemic change<br>How to account for relative context in the design of high-leverage strategies |
    | Execution on leverage | What are the best ways to target different kinds of leverage for systemic change? (E.g., how might we help actors in a system track all of the relevant paradigms?) | Fruitful friction as a tactic for transcending paradigms (Buckenmayer et al., 2021)<br>Systemic change happens via multiple dimensions of change (Mulder et al., 2022)<br>Design Journeys offers several chapters on taking action after identifying leverage points (Jones & Ael, 2022) | “Systemic change labs” tracing and comparing the impact of interventions using different kinds of leverage | How to design innovations for each type of leverage |

    Some other key takeaways:

    • The concept of “leverage points” dominates modern discussions of leverage, but as Meadows (1997) herself proposed, that is just one paradigm we can use to view the best ways to produce systemic change.
    • There are good and bad kinds of leverage points! See figure 1.
    • A few promising insights about leverage have been proposed recently, such as the notion of “chains” of leverage points (Fischer & Riechers, 2019) and the idea of assessing potential for change (Birney, 2021).

    Leverage points can be futile, catalyzing, or transformative, and they progressively reform or regressively deform our systems.

  • 𖠫 Murphy and Jones - 2020 - Leverage analysis: A method for locating points of influence in systemic design decisions

    Last updated Feb 16, 2023 | Originally published May 26, 2022

    In this paper, Peter and I show how centrality analysis and structural dominance analysis can be used to identify different types of leverage points in systems models.

  • How to learn the most from the Obsidian community

    Last updated Feb 14, 2023 | Originally published Feb 14, 2023

    1. Think of a curiosity or a question
    2. Search!
      a. … the web (e.g., via the unofficial Obsidian community search engine)
      b. … the Forum for information (use the forum’s Advanced Search capabilities)
      c. … the Discord server (learn more about power search tools in Discord here)
    3. Ask on the Discord or the forum! (Make sure you review the list of channels in Discord to find the best place to post.)
    4. Get more ideas and start again at (1) 😉

    If you learn to make the most of the different pools of knowledge, you can always find rich answers — and you’ll ask better questions, too.