
Themes and Conferences per Pacoid, Episode 6

Paco Nathan | 2019-02-04 | 38 min read


In Paco Nathan's latest column, he explores the role of curiosity in data science work as well as Rev 2, an upcoming summit for data science leaders.

Introduction

Welcome back to our monthly series about data science. This month, let’s explore two themes in detail: an in-depth look at Rev 2, plus how curiosity plays a vital, and perhaps non-intuitive, role in data science work. We’ll unpack curiosity as a core attribute of effective data science, look at how that informs process for data science (in contrast to Agile, etc.), and dig into details about where science meets rhetoric in data science. Spoiler alert: a research field called curiosity-driven learning is emerging at the nexus of experimental cognitive psychology and industry use cases for machine learning, particularly in gaming AI. That body of work has much to offer the practice of leading data science teams. Modernists and structuralists beware. We’ll also take a peek at industry survey analysis which describes where organizations get stalled in the data game. Overall, these topics are among the themes you can expect at the next Rev. See you there!

Rev 2

We’ve been putting together the keynotes and tracks for Rev 2 and I’m super excited about what’s in store. This edition of the conference will be held May 23–24, 2019 in NYC, where we’ll focus on data science as a team sport: leadership, best practices, and how teams work. For example, one of the questions that has guided us throughout our planning stages is: “What can data science teams learn from each other?”

For this second edition of Rev, the conference is doubling in size. We’ll have a Leadership track, a Practitioner track, plus AMAs (“ask me anything” with notable experts) and industry poster sessions. Most people who went through grad school have some inkling of what the phrase “poster session” implies. In other words, your talk didn’t quite stand out enough to put onstage, but you still get “publish or perish” credits for presenting. This is not that. As people who attended JupyterCon 2017–2018 can attest, an “industry poster session” includes an open bar, catered hors d’oeuvres, lots of mingling … to paraphrase feedback from JupyterCon, “As a tech person, would I get up extra early to meet strangers for coffee at 8:00 am? Yeah, not so much. But would I enjoy a brew or two at 6:00 pm while chatting with people interested in similar tech topics? Of course!!” Some vendors showcased their customer use cases. Some open source projects shared more details than they could in a session talk. Lots of people got wildly creative – like the 3-meter roll “poster” that took over the floor. Hotel staff had to nudge people out of the hall, long after that event was supposed to close. Join us in May in NYC for more experiences like that at Rev 2.

Our headlining keynote speaker will be Daniel Kahneman, Nobel laureate and author of Thinking, Fast and Slow, who has important matters to discuss with us about how groups make collaborative decisions, especially in the context of AI. For a sample of some of Dr. Kahneman’s recent work, check out this Harvard Business Review (HBR) article, “Noise: How to Overcome the High, Hidden Cost of Inconsistent Decision Making.” He examines bias and noise in decision making within a team context: “Where there is judgment, there is noise—and usually more of it than you think.” Lately there’s been lots of dialog about bias in machine learning models, though considering the noise part of the equation is an important corollary.

We’d love to hear your insights and perspectives about running data teams. Please join us and other teams to explore how data science teams work and work together!

Curiosity is more than a rover

At our “Data Science vs Engineering: Does it really have to be this way?” panel discussion last November, Amy Heineike from Primer AI described curiosity plus a learning mindset as core attributes for successful data scientists:

If you have a very expansive view of what you’re responsible for, if you’re curious about a lot of that puzzle, and at your point, if you have ownership over a lot of that puzzle…so if your end goal is “I want to build and ship something interesting,” that takes away some of the tension.

and also:

I feel like every six months I kind of take stock and realize there’s some enormous new thing that I have to learn that I have no idea how to do. And there’s normally like a key moment where you’re like, “okay. I got this.”

So very well said.

Recently, Eric Colson from Stitch Fix posted an update of his HBR article about the role of curiosity in data science: “Let Curiosity Drive: Fostering Innovation in Data Science”—plus a more nuanced update posted on LinkedIn. Note that Eric Colson presented “Differentiating By Data Science” at Rev in 2018 – an example of the kinds of premium quality talks you’ll hear at Rev this year!

In the HBR article, Eric describes curiosity as a powerful motivating force, driving exploration in data science—often in ways that could not have been imagined or directed through top-down management edicts:

…innovative capabilities aren’t so much designed or envisioned as they are discovered and revealed through curiosity-driven tinkering by the data scientists.

Plus the related point:

Are they wasting their time? No! Data science tinkering is typically accompanied by evidence for the merits of the exploration.

At this point I’m resisting an urge to quote and analyze nearly all of that HBR article.

You’ll need to read it :) That said, one important line to highlight:

The insights and resulting capabilities are often so unintuitive and/or esoteric that, without the evidence to support it, they don’t seem like a good use of time or resources.

Huh. Alarm bells should start going off once you notice how that statement is a reasonably good characterization of one of the perennial challenges in machine learning (e.g., hill climbing) (Clue #1). Also, it helps us differentiate between process that’s apt for data science work and process for software engineering, i.e., Agile, Kanban, and so on.
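
To make the hill-climbing aside concrete, here is a minimal sketch (my illustration, not from the HBR article) of why greedy optimization stalls: a purely local search parks on the nearest peak, while a bit of curiosity-style exploration (random restarts over a toy objective, both of which are illustrative assumptions on my part) tends to find the better one. Without measured evidence, those extra explorations would look like wasted effort.

```python
# A toy sketch (my illustration, not from the HBR article) of greedy hill
# climbing versus curiosity-style exploration via random restarts.
import random

def objective(x):
    # Two humps: a modest local peak near x = 1.0 and a higher peak near x = 4.0.
    if x < 2.5:
        return -0.5 * (x - 1.0) ** 2 + 1.0
    return -0.5 * (x - 4.0) ** 2 + 2.0

def hill_climb(x, step=0.1, iters=200):
    # Accept a neighboring point only when it improves the objective.
    for _ in range(iters):
        candidate = x + random.choice([-step, step])
        if objective(candidate) > objective(x):
            x = candidate
    return x

random.seed(42)
greedy = hill_climb(0.0)  # stalls on the nearer, lower peak
best_restart = max((hill_climb(random.uniform(0.0, 5.0)) for _ in range(10)), key=objective)
print(f"greedy:        x={greedy:.2f}, score={objective(greedy):.2f}")
print(f"with restarts: x={best_restart:.2f}, score={objective(best_restart):.2f}")
```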

Articulating process for data science

Eric’s article describes an approach to process for data science teams in stark contrast to the risk management practices of Agile process, such as timeboxing. As the article explains, data science is set apart from other business functions by two fundamental aspects:

  • Relatively low costs for exploration
  • The ability to measure results (risk-reducing evidence)

These two points provide a different kind of risk management mechanism which is effective for science, specifically data science.

Of course, some questions in business cannot be answered with historical data. Instead they require investment, tooling, and time for data collection. Given the two points above, that’s okay—there are good ways to direct data exploration toward ROI.
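
As a minimal sketch of how those two points combine into a risk management mechanism, consider a loop that gives each candidate exploration a small, cheap pilot and only funds the ideas whose measured lift clears a bar. The candidate ideas, effect sizes, sample size, and threshold below are illustrative assumptions on my part, not numbers from Eric’s article.

```python
# A toy sketch of "cheap exploration + measurable evidence" as risk management:
# every candidate idea gets a small pilot, and only ideas whose measured lift
# clears a bar earn further investment. Ideas, effect sizes, and thresholds
# here are illustrative assumptions, not numbers from the article.
import random

random.seed(7)

candidates = {                 # hypothetical explorations and their (unknown) true lift
    "geo clustering": 0.08,
    "new ranking feature": 0.02,
    "fancier model": 0.00,
}

def cheap_pilot(true_lift, n=500, base_rate=0.10):
    """Simulate a small A/B-style pilot and return the measured lift."""
    control = sum(random.random() < base_rate for _ in range(n)) / n
    variant = sum(random.random() < base_rate + true_lift for _ in range(n)) / n
    return variant - control

for idea, true_lift in candidates.items():
    measured = cheap_pilot(true_lift)
    decision = "invest further" if measured > 0.03 else "stop here"
    print(f"{idea:>20}: measured lift {measured:+.3f} -> {decision}")
```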

Even so, there’s an important caveat—and note how its effect within a team context bears a striking resemblance (Clue #2) to what we called the A* search algorithm in earlier days of theoretical AI:

With its low-cost exploration and risk-reducing evidence, data science makes it possible to try more things, leading to more innovation… But you can’t just declare as an organization that “we’ll do this too.”

To outline that caveat, I count four main points in the article describing necessary conditions which a successful organization must put into practice:

  1. Position data science as its own entity—make it its own department, reporting to the CEO.
  2. Equip data science teams with the technical resources they need to be autonomous: full access to data, elastic use of compute resources (read: cloud).
  3. Ensure a culture that supports a steady process of learning and experimentation.
  4. Provide the data science teams with a robust data platform.

Point #3 echoes learning as the companion to curiosity, as Amy Heineike described. That requires a team culture, fostered by effective management, mentoring, and so on. Arguably this is what data science leaders bring to the party. Or not.

Point #4 is where, IMO, many organizations tend to falter, then overcompensate by getting medieval via their data engineering teams—effectively erecting yet more silos. A best practice is to invest in building a robust data platform, staff that platform appropriately, then let your data science teams grow into the expectation of providing generalist roles. That’s so much better than handing out hyper-specialized titles, spending 5x as much budget, but yielding half the results in return. YMMV.

Silos are one of the reasons that data science emerged as a discipline. Silos top the list of reasons why enterprise organizations fail to leverage their data. The all-too-human reaction to complexity is to respond with an urge to compartmentalize, to categorize. That’s where data science, an inherently multi-disciplinary field, provides solutions that leverage the power of collaboration, science, computing power, data at scale, etc. Curiosity explored within a multi-disciplinary context is an antidote for the silo anti-pattern.

In my experience, hyper-specialization tends to seep into larger organizations in a special way… If a company is say, more than 10 years old, they probably began analytics work with a business intelligence team using a data warehouse. That approach probably created data silos between divisions, due to costs, budgets, accounting procedures, etc. Perhaps the company wanted to introduce more advanced techniques by the 2010s, so they added a data science team because their BI people were reluctant to upskill into NoSQL tools, machine learning, etc. Later, that team bifurcated into data science and data engineering teams without clear separation between the priorities of product managers on the one side and the operations team on the other side. Friction ensued. After about 2015, the firm followed the herd by introducing yet more sophisticated machine learning techniques; however, since the data science team was already swamped with reporting requirements, they introduced a new team of machine learning engineers. So far, I count four different teams and titles across the organization. In larger firms that approach will be replicated in each division such that there are several teams competing internally. Their directors become frenemies competing for resources, headcount, funding, key projects, big wins, and so on. At that point, adopting highly specialized names for each team becomes a defensive move, otherwise the CEO and CFO might step in and consolidate teams and budgets.

Hyper-specialization of data-related titles is a corporate CYA maneuver to defend silos and avoid the cost of upskilling. However, there’s generally more upside when you invest in those costs instead of avoiding them. Providing a robust data platform, pushing data science teams toward generalist roles, and upskilling create the cocktail antidote for the hyper-specialized titles anti-pattern.

Admittedly, I foresee some tensions arising from Point #2 between the difficulty of finding “full-stack” data science candidates, tendencies in enterprise to create silos, and the costs of coordinating across teams:

But before you jump in and implement this at your company, be aware that it will be hard if not impossible to implement at an older company.

In other words, leadership in business is hard. So is global competition. Point #1 is the nut that many organizations won’t be able to crack. However, look for effective case studies. Stitch Fix stands as an exemplar for how to run a highly effective data science practice. So please take that HBR article about Stitch Fix as a business case study, and as a companion to the rest of this discussion.

Anti-quelling device

I like so many things about Eric’s article:

  • Unpacking the role of curiosity in data science.
  • Curiosity as an antidote for a typical enterprise anti-pattern: in the absence of narrative, people make up stories (read: myths) from whatever partial data is at hand that serves their immediate interests (see chapter 2 in Understanding Context by Andrew Hinton, 2014).
  • Generalists as an antidote for another typical enterprise anti-pattern: the proliferation of silos.
  • Echoing Amy’s point about ownership as a driver for teams.
  • Resilience: “It’s hard to quell curiosity.”

That anti-quelling bit is brilliant:

By providing ownership of business goals and generalized roles, tinkering and exploration become a natural and fluid part of the role. In fact, it’s hard to quell curiosity. Even if one were to explicitly ban curiosity projects, data scientists would simply conduct their explorations surreptitiously. Itches must be scratched!

Years ago I led a data science team at a large ad network. We had proposed doing geospatial analysis of our clickstream data, to surface regional trends which might be useful. However, execs at the company responded with “No, don’t waste valuable time on geospatial work. We tried it before and the results weren’t meaningful.” Even so, our team’s curiosity could not be quelled. After lots of data prep, lat/lon, and PAM clustering on a mix of Hadoop and R, we produced large-ish clusters of similar interests (>10,000 people in each).
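
For the curious, here is a small from-scratch sketch of PAM-style (k-medoids) clustering on synthetic lat/lon points. It is a toy stand-in for the Hadoop + R pipeline described above; the data, the cluster count, and the implementation details are all illustrative assumptions on my part.

```python
# A toy, from-scratch sketch of PAM-style (k-medoids) clustering on synthetic
# lat/lon points, standing in for the Hadoop + R pipeline described above.
import numpy as np

rng = np.random.default_rng(0)
region_centers = np.array([[35.0, -94.0], [40.7, -74.0], [34.0, -118.2]])
points = np.vstack([c + rng.normal(scale=1.5, size=(200, 2)) for c in region_centers])

def k_medoids(X, k, iters=20):
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise distances
    medoids = rng.choice(len(X), size=k, replace=False)
    for _ in range(iters):
        labels = np.argmin(dist[:, medoids], axis=1)               # assign to nearest medoid
        new_medoids = medoids.copy()
        for j in range(k):
            members = np.where(labels == j)[0]
            if len(members):
                # swap step: pick the member minimizing total within-cluster distance
                costs = dist[np.ix_(members, members)].sum(axis=0)
                new_medoids[j] = members[np.argmin(costs)]
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    labels = np.argmin(dist[:, medoids], axis=1)
    return medoids, labels

medoids, labels = k_medoids(points, k=3)
for j, m in enumerate(medoids):
    print(f"cluster {j}: {np.sum(labels == j)} points, medoid near {points[m].round(2)}")
```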

I’ll never forget the biggest cluster which centered roughly in Arkansas/Oklahoma: males, age 50+, who shared online shopping preferences for hammock, flag pole, sea salt, mail-order steak, portable generator, and defibrillator. I double-dare you not to visualize that cohort! (And whatever you do, please don’t share this with Cambridge Analytica.) When execs at the company got their grubby paws on our analysis, two senior VPs spent an entire day poring over pages of data visualizations and analysis. Cackling, strategizing, and eventually thanking us for blatant acts of curiosity.

The road to a nation

Thinking more about curiosity…it’s an animal quality which has played such a vital role in the history of science. Initially it was also one of the drivers for engineering education—which perhaps emerged from some unexpected places? Check out a discussion by Genevieve Bell on The Next Billion Seconds about how engineering education was first established at École Polytechnique, and how that served as a pattern for creating West Point (full disclosure: I attended the latter and have fond memories of highly disciplined expositions about calculus in four colors of chalk at 6:30am, following the École pattern).

After the French Revolution, leaders realized that the priesthood had controlled the road system of 18th-century France and along with that the supply chain for feeding the French population. So they created École Polytechnique to train engineers, who could then manage roads and other infrastructure needed for national priorities. In the newly formed United States, West Point launched with instructors from École Polytechnique, borrowing its methods and many of its intents. MIT, CalTech, and other institutions used this pattern later to create their engineering programs.

Somewhere in the current tensions between data science and data engineering, I have a hunch that the curiosity of early engineering programs—ibid., that early West Point experience in the US—seems lacking. Even so, the more effective organizations have figured out how to cultivate curiosity w.r.t. data science and use it as a competitive advantage. Sort of the opposite of having a priesthood control a nation’s civil infrastructure. Given the talent crunch for staffing data science teams with experienced people, that focus on curiosity is also becoming a competitive differentiator. It creates a more sought-after work environment.

Consequently, I believe that “Agile” and “Lean” concepts are especially poor fits for managing data science teams. Those older approaches to software engineering process cause data teams to overlook opportunities, erode an organization’s competitive differentiation, and reinforce enterprise anti-patterns which we are trying to remove in the first place. Bear with me for a tangent while we build a foundation for this point.

Brace yourselves for impact

Bokay, I promise that this section is 100% about data science. First, the argument requires some framing and background, then we’ll have a foundation for exploring why the science in data science is different. In my experience, people outside of the field trip over this aspect in particular. Here are some arguments about why they do and how understanding the science aspect is so important in the workplace.

Let’s roll the clock back ~65 years. A series of debates in literature, editorials, lectures, magazine articles, film, etc., took place in the middle of the 20th century about media and rhetoric. The rising star of mid-century media studies, Marshall McLuhan, had introduced a controversial mosaic approach to writing which represented his view of a different, more complex kind of cognition arising through electronic media (aka, what we’d now call the Internet) in contrast to book publishing and pre-electronic literary culture. Without going into too much detail, alphabetic language leading into mechanistic print had filtered and shaped thought; tribal cultures which had relied on rich oral traditions got reshaped into nations, monetary economies, and so on. Then along came telegraph, radio, film, television, etc., which jumbled up learning and thought that had been based on centuries of print culture—especially television, which reintroduced some aspects of oral cultures. Another rising star in that field, Kenneth Burke, had introduced notions of new rhetoric in contrast to Aristotle’s rather old rhetoric. That theory was being argued circa the times of Mad Men, when the rules of mass persuasion through electronic media had changed dramatically. Meanwhile, the inventor of the “Western Civilization” program in American universities, Richard Lanham, was formulating ideas that led to a description of digital rhetoric. Even though those three—McLuhan, Burke, and Lanham—did not agree on much, taken together they did a fantastic job of envisioning and articulating challenges we’d face once the Internet became commonplace. As McLuhan described, he was quite worried about rushing headlong into the 21st century, while hindered by 19th century institutions, which are mired in 17th century thinking. In terms of decision-making in enterprise and government, that’s apt for where we stand today.

That brings us up to circa Mother of All Demos. For the previous five centuries, the dominant idea of media had been that books were written by authors, then discussed by critics, while the general public played a passive role as audience. From a modern/mid-century perspective, science—a slightly younger discipline, merely three centuries old at the time—was about determining facts. Authors of scientific works were supposed to refrain from using rhetoric in an Aristotelian sense as a tool of persuasion and instead stick to accumulating knowledge. Fine rules for 500 years, until they slammed into a brick wall. The introduction of 20th century media technologies helped power world wars, which led to nuclear weapons, not to mention a global debate about looming environmental crises. By mid-century, scientists were scrambling to leverage rhetoric to persuade the world to change course. Burke described a tension between science and rhetoric, especially in his 1966 book Language as Symbolic Action (see the “Terministic Screen” chapter). While science does indeed accumulate “facts” (read: ontology) the reality is that humans make sense of the world through stories. In other words, people learn through stories and use of new rhetoric as Burke described—ibid., a learning mindset mentioned above (Clue #3).

McLuhan tore down walls (read: silos) between authors, critics, audience—and meanwhile, reconfigured the “linear” nature of printed text. The best example of this is in his 1962 book The Gutenberg Galaxy: The Making of Typographic Man. Hypertext would have died on the vine without this masterpiece. The “mosaic” literary/design approach (plus ample experimentation, thanks DARPA) opened the doors for “multimedia” leading up to our contemporary notions of data storytelling, hands-on tutorials, collaborative frameworks, and so on. In aggregate, the development of Jupyter notebooks was as much informed by McLuhan as it was by Don Knuth or Stephen Wolfram.

Lanham came along later, arguably providing a synthesis of Burke and McLuhan, plus others. He put forward notions about digital rhetoric, as theory underpinning online media. Lanham’s essays in the 1995 collection The Electronic Word: Democracy, Technology, and the Arts are a good starting point. Brenda Laurel, Betti Marenko, and others have pushed the field well beyond Lanham.

If you come from a technical background in the US you probably didn’t get much formal education in rhetoric, poetics, aesthetics, ethics, or other “-ics” from the fuzzier studies. Or you were taught that ancient Greeks covered all that stuff, now safely tucked away somewhere in Wikipedia. Or something.

I took this tangent for two reasons. First, aspiring data scientists would do well to read up on media studies. Interactive media plays a much larger role in our work than merely the presentations or blog posts. How quickly would data science have caught traction without GitHub or YouTube? How long would it have taken universities to retool academic programs to include data science without the Internet? How could industry have hired small armies of people fluent in data wrangling in Python, R, and so on, within the space of a decade without interwebs to accelerate that? For that matter, the intellectual aspects of data science have much in common with McLuhan’s mosaic. So there are three unusual books suggested for your reading list.

Secondly, because stakeholders. Stakeholders increasingly depend on results from data science teams. While, as individuals, they may be industry experts in finance, insurance, pharma, automotive, or whatever vertical, many still operate with pre-1960 notions about science, rhetoric, media, etc., and apply 17th century thinking about data. I’d be willing to bet a large-ish hedge fund on that claim. Many question whether there’s any “science” involved in data science, and even if there is they’d balk at mingling analytics with tools for rhetoric. We call some of these people “executives” and others “board members” … at least in large enterprise. Now, suddenly they’re forced into uncomfortable positions of unbundling decision-making processes, sharing judgment with data science teams, and even sharing some of that judgment with automation, aka AI. As mentioned in “Episode 4,” that disconnect was identified by WEF for Davos 2019 as one of the most critical issues w.r.t. ethics in AI.

Another point that Eric Colson didn’t make explicitly (so put me on the blame list for this one): practices such as Agile, Lean, Kanban, etc. can be appropriate for managing construction workers or for managing software engineering teams as if they were construction workers. However, these practices aren’t effective for managing science. Particularly in this case, where we’re talking about an area of science which arose from industry use cases, not the halls of academe. Taking into consideration the works of McLuhan, Burke, Lanham, Laurel, Marenko, et al., mentioned above, data science emerged from a decidedly postmodern condition. For decision-makers (and company cultures) schooled in pre-1960 notions of modernity, that’ll most definitely freak out their #^@$!

Overall, the situation creates a slow-moving train wreck, where data science is in the conductor’s seat. Brace yourselves for impact.

Taking a pulse

Where will the points of impact tend to be in enterprise? Ben Lorica and I recently completed our third industry survey about “ABC” (AI, Big Data, Cloud) adoption in the enterprise. A free mini-book about the second survey, Evolving Data Infrastructure, was just published.

Sans spoilers, there’s one histogram from our third survey (not published yet) which I feel compelled to describe. What are the main bottlenecks, in order, that prevent enterprise firms from leveraging data and adopting AI technologies?

  1. Company culture does not yet recognize a need.
  2. Lack of data, or data quality issues (silos).
  3. Talent crunch for data science and data engineering roles.
  4. Difficulty identifying appropriate business use cases.
  5. Technical infrastructure challenges (tech debt).
  6. Legal concerns, risk, compliance.
  7. Problems with training ML models efficiently.
  8. Lack of reproducible workflows.

The first four items are the main hindrances in enterprise, where each received ~20% of the “most evil” rankings. The lower three items are in single-digit territory of relative evilness. Another answer, a subset of Item #1, should have been “BoD doesn’t get it”—though we didn’t learn that in time to include in the survey questions. See the foregoing section.

Oddly enough, finance tends to have less trouble staffing the required talent, while healthcare tends to have less trouble identifying the business use cases. Highly regulated environments probably create some of that warping.

One interesting take-away from the survey: mature practices (firms with 5+ years of experience deploying ML in production) have less difficulty with Item #1 (company culture) or Item #4 (identifying business use cases). Read in another way, a firm won’t get a chance to become a mature ML practice until after it resolves those two issues.

Overall, this list describes what prevents organizations from obtaining ROI on data science projects. It’s the direct opposite of a process based on curiosity and learning described earlier. Why does this matter? To cut to the chase, refer back to a table called “Enterprise AI adoption by segment” in “Episode 2” towards the end of the article.

When it comes to data practices, we can describe three segments of enterprise organizations:

  • Unicorns – with first-mover advantage, eroding other firms’ verticals.
  • Adopters – gaining experience and mature practices for ML in production.
  • Laggards – stuck on company culture, tech debt, etc.

It takes 2-3 years for a firm in the laggard segment to execute a turn-around for its data infrastructure alone. Add a few quarters for staffing teams and getting them in place. Add another 18 months or so for deploying ML apps in production that yield ROI. I don’t have a metric to estimate the time it takes to change company culture because that’s what we call a very small dataset. Even at a fast pace, it probably takes 5-8 years to check off the eight-point list above if your firm falls into the laggard segment today.

Without delving into economic forecast techniques such as J curves, GPTs, etc., evidence from several directions points to a horizon about 4-5 years out where the unicorns (plus adopters who rise to join them) become effective enough at the data game to overwhelm the laggards buried in tech debt and cognitive/cultural struggles. In other words, 5-8 years would be beyond the “game over” mark for many organizations.

Another surprise from our surveys: right now (FY 2019) the top two segments are investing surprisingly large portions of their IT budgets into even more “ABC” adoption. One interpretation is that they’ve identified a key area for competitive differentiation and they’re moving in for the kill. Within a brief time, many laggards may not have enough runway to execute turnarounds. This isn’t doom and gloom; it’s an eventuality, and it probably leads to better market efficiencies. But you don’t want to be on the wrong side of that history; you don’t want your firm’s epitaph to be the eight items listed above. Check out the surveys for more details.

Learning about learning

You may have noticed a few Clue #N points dropped along the way in this article. Finally, we pop the stack enough to reach my motivation for writing “Episode 6” in the first place. I was super-excited to read Eric Colson’s article about curiosity, especially since I’ve recently been studying curiosity-driven learning in AI. It turns out that infants appear to have intrinsic motivations for learning. This research in cognitive psychology has a deep bench of experimental evidence through work with infants. Now it’s getting an empirical push from AI use cases in gaming. Key papers to check out include work by Oudeyer and Twomey, discussed below.

There are entirely too many concepts to describe here, ranging from progress drive to free-energy principle to environmental complexity. Oudeyer and Twomey separately describe processes for curiosity-driven exploration which have parallels in A* search and heuristic optimizations for gradient descent. You’ll need to read the papers. In brief, infants seek challenges for learning new skills which are not too hard, and not too easy. Curiosity is the mechanism which drives that learning and regulates it. These are intrinsic processes where infants can struggle with challenges—prior to being able to get help through communication. Effectively, an infant constructs a gradient to maximize learning opportunities relative to an internal world model.
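
To illustrate the “not too hard, not too easy” idea, here is a minimal sketch of learning-progress-based task selection, in the spirit of that research rather than a faithful reproduction of any one paper. The toy tasks, error curves, and window size are my own illustrative assumptions: the agent simply keeps practicing whichever skill currently yields the fastest drop in prediction error.

```python
# A toy sketch of learning-progress-based task selection, in the spirit of
# curiosity-driven exploration. Task names, error curves, and the window size
# are illustrative assumptions, not details from the cited papers.
import random
from collections import deque

class Task:
    """A skill whose prediction error shrinks at a task-specific rate when practiced."""
    def __init__(self, name, start_error, learn_rate):
        self.name = name
        self.error = start_error
        self.learn_rate = learn_rate
        self.history = deque(maxlen=10)   # recent errors, for estimating progress
        self.practice_count = 0

    def practice(self):
        self.practice_count += 1
        self.error = max(0.0, self.error * (1 - self.learn_rate) + random.gauss(0, 0.01))
        self.history.append(self.error)

    def learning_progress(self):
        # Intrinsic reward: how much the error has dropped recently.
        if len(self.history) < 2:
            return float("inf")           # unexplored skills look maximally interesting
        return self.history[0] - self.history[-1]

random.seed(3)
tasks = [
    Task("too easy", start_error=0.05, learn_rate=0.5),     # little left to learn
    Task("just right", start_error=0.60, learn_rate=0.15),  # steady, visible progress
    Task("too hard", start_error=0.90, learn_rate=0.001),   # barely any progress
]

for _ in range(80):
    # Curiosity: practice whichever skill currently shows the most learning progress.
    max(tasks, key=lambda t: t.learning_progress()).practice()

for t in tasks:
    print(f"{t.name:>10}: practiced {t.practice_count:3d} times, error now {t.error:.3f}")
```

Run it and the “just right” task soaks up most of the practice, while the already-mastered task and the impossible task are largely ignored: roughly the Goldilocks behavior the infant studies describe.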

The practice of leading a data science team has much to learn from curiosity-driven learning. We often struggle with challenges where there is not much precedent, no outside communication source which can simply guide us toward a convenient solution. Instead, it’s a matter of applying science as a process. Eric’s risk management process described above is also sketched by multiple researchers in curiosity-driven learning.

There are other implications arising from research in curiosity-driven learning and its relationship to data science work. Some of that is more pedagogical in nature. For example, how scholasticism is probably a very bad approach for upskilling people in a multi-disciplinary practice that emerged from unmistakably postmodern origins. Even so, I can’t help but notice ginormous doses of scholastic method in some of the data science bootcamps, MOOCs, edtech, etc. I also have a hunch that humanism, which displaced scholasticism in universities, is a peculiarly bad frame of reference for teaching people about collaborative decision-making that includes AI and automation. Full disclosure: I come from a different bent and tend to cast wary sidelong looks at excessive use of humanism. For deets, check out “Rethinking the Animate, Re-Animating Thought” by Tim Ingold:

Yet all science depends on observation, and all observation depends on participation — that is, on a close coupling, in perception and action, between the observer and those aspects of the world that are the focus of attention. If science is to be a coherent knowledge practice, it must be rebuilt on the foundation of openness rather than closure, engagement rather than detachment. And this means regaining the sense of astonishment that is so conspicuous by its absence from contemporary scientific work. Knowing must be reconnected with being, epistemology with ontology, thought with life. Thus has our rethinking of indigenous animism led us to propose the re-animation of our own, so-called “western” tradition of thought.

Curiosity, science, learning, animism, cognition—all rolled into a beautiful post-structural expression. <3 <3 <3

So anywho, there are heuristic models for curiosity-driven learning which are now confirmed by experimentation. Plus, these models are being simulated through the use of neural networks. This work is also getting used to improve reinforcement learning in gaming so that models can generalize more effectively. Here’s a super-interesting intersection of cognitive psychology and machine learning, where one discipline can help augment the other. Plus, it points toward process for data science.

How cool is that?

Out of the ~7 fields which led to different areas of machine learning, the neural networks part originated from an intersection of biology and cognitive psychology. A grad student named Humberto Maturana was on the McCulloch-Pitts team at MIT that originated artificial neural network research. Maturana later became famous for introducing definitions (Clue #4) for levels of cognition in cybernetics and control systems, namely autopoiesis. IMO, so much of the second-order cybernetics work of the mid-20th century created blueprints for AI development which we’re just now approaching.

Assembling the Clue #N points above, one gets a strong sense that Eric Colson and Amy Heineike are sketching the outlines of practical AI applied as process for teams of people + machines. There’s the corollary to Agile which is needed for data science work: different kinds of risk management, feedback loops, intrinsic mechanisms for governance—“all the things” which Agile process provided for software engineering, albeit better suited for an unusually post-structural branch of science. Moreover, this sketch for emerging process is based on effective data science practices in the field.

Expect to hear more about convergence between experimental psychology and machine learning. Marshall McLuhan would’ve been ever so curious to learn more about this. Meanwhile, seek out opportunities to leverage curiosity and intrinsic motivations for data science teams. The opposite of that is probably the opposite of effective data science work, and the stakes are high.
