The Birth of the Situation Designer

Originally published on Medium

Imagine you are involved in a project to ensure workers in your company’s factory are wearing the correct Personal Protective Equipment (PPE), but the PPE they’ll be using has yet to be produced — it exists only as a design. You want to detect compliance with a connected camera outside the hazardous area that sends images to the cloud for inference (the resulting output of a Machine Learning, or “ML,” model). This has to function on day one — your workers’ safety is at stake.

Natural follow-up questions might include:

  • How might we train a model to recognize something in the physical world that currently exists purely as a design?

  • How might we define the responsibilities, skills, and considerations required to execute against this problem?

  • How does this sit within the overall design process?

This problem presents an opportunity to propose a new design discipline — one that designs situations for prior artifacts to exist within.

To understand the differentiation of disciplines and the evolution of design artifacts, one can look to Design Thinking — more specifically, to Professor Richard Buchanan’s 1992 paper “Wicked Problems in Design Thinking,” where he describes the “Four Orders of Design”:

  1. Signs & Symbols: typically the domain of a Graphic Designer within a two-dimensional space

  2. Objects: useful artifacts of a three-dimensional nature; usually described by an Industrial Designer

  3. Activities & Organized Services: experiential design that considers time and space; commonly served by Service, UX, and Interaction Designers

  4. Complex Systems or Environments: poly-dimensional design that encompasses all prior orders and considers the influence of change on the system; more likely to involve urban planners, organizational designers, and other systems designers.

Technology has reached a point where it is now possible to simulate complex Fourth Order design proposals with the artifacts produced by the prior orders. One can model a complex system or environment virtually to better understand its function: either to realize flaws or to transfer that understanding to a machine, which can — in turn — continually refine its own understanding. Two recent technologies provide great examples of Fourth Order design simulations:

Digital Twin

A digital twin is a virtual analog of a material object, system, process, or environment that allows one to simulate, modify, test, or monitor designs of the Fourth Order. When physical forces are simulated within the environment, this may also be referred to as a physical twin. An example might be a virtual representation of a factory line that is mechanically accurate and responds to simulated physics.

Machine Learning Model Training

Machine Learning (ML) models are computing systems that learn and adapt without explicit instruction, building up an understanding of their own. They encode mathematical algorithms that analyze input data and make inferences based on detectable patterns. A virtualized design of the Fourth Order can be used to generate labeled data for an ML model to understand systems and environments that lack sufficient real data. This insufficiency can surface in a few ways: the environment or system may not be observable enough to build a model from sensed data (e.g., photogrammetry, video, or image data), or the design may simply not yet exist, as in the scenario that begins this article.

For the sake of brevity, this article will focus on the ML example, but the interests and fundamental properties of both use cases are highly similar. Though it is an oversimplification, one could consider a Fourth Order design to be “paused” for the purposes of ML training and “playing” for the purposes of Digital Twins.

Considering the swim lanes for this theoretical PPE scenario, we can make a few reasonable assumptions about the roles and responsibilities in the process.

In short, a Design Researcher and a specialized Designer (in this case, an Industrial Designer) collaborate on the research and development of a prototype (for our scenario, PPE) up to the point that it needs to be simulated or visualized as influenced by a system prior to training. Neither of these roles is suited to the studio artist-type work required to produce synthetic image data, nor is a studio artist necessarily familiar enough with Machine Learning to produce usable data.

This gap suggests the need for a transitional role: a practitioner with a skill set dedicated to describing the factors that influence a design’s virtual representation for the purposes of training and simulation. From this, the Situation Designer (SD) is born.

Earlier Methods

Situation Design, at its core, is a visual approach to data synthesis. Most data synthesis relies on continuous data — measurements that can be varied indefinitely over time. Data synthesis for ML is not in itself a new approach, but synthetic image data generation requires photorealistic, parameterized, visual and spatial design tools — tools that have historically been far less accessible. Prior to the recent spike in tooling, synthetic image data was produced through simple 2D image manipulation.

If you’re familiar with Adobe Photoshop, imagine a batch process that applies various image manipulation techniques in a pseudo-random fashion. One can take a conceptual product design (e.g., a new soda bottle label) and manipulate the image in a variety of ways to train a model that can determine whether the design is present in many real-world environments. This is good but not great. To build a model with higher confidence for a situation that may not yet exist, the realism needs to be substantially improved.
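To make that earlier method concrete, here is a minimal sketch of such a pseudo-random 2D batch process, assuming Python with the Pillow library and a hypothetical designs/ folder of source renders (the transform ranges are illustrative, not recommendations):

```python
# Minimal sketch: pseudo-random 2D augmentation of design renders.
# Assumes Pillow (pip install Pillow) and a hypothetical designs/ folder.
import random
from pathlib import Path
from PIL import Image, ImageEnhance, ImageOps

def augment(img: Image.Image) -> Image.Image:
    """Apply a pseudo-random stack of simple 2D manipulations."""
    img = img.rotate(random.uniform(-25, 25), expand=True)  # random tilt
    if random.random() < 0.5:
        img = ImageOps.mirror(img)                           # random mirror
    img = ImageEnhance.Brightness(img).enhance(random.uniform(0.6, 1.4))
    img = ImageEnhance.Contrast(img).enhance(random.uniform(0.7, 1.3))
    scale = random.uniform(0.5, 1.0)                         # random downscale
    return img.resize((int(img.width * scale), int(img.height * scale)))

# Batch process: emit 50 variants per source image.
src, out = Path("designs"), Path("augmented")
out.mkdir(exist_ok=True)
for path in src.glob("*.png"):
    for i in range(50):
        augment(Image.open(path)).save(out / f"{path.stem}_{i:03d}.png")
```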

Advancements in Design Technology

We can make some simple inferences about the advancements in tooling by looking back at 2017, when the Amazon SageMaker service was launched as part of the AWS cloud offering. The service allows developers to build, train, and deploy ML models across many computing environments.

Curiously, Blender — the open-source 3D design suite — started to spike in popularity around the same time, following a series of substantial UI updates. This is strictly correlation and not necessarily causation, but it would be hard not to speculate about a trend. And, in fact, there is a trend.

Bespoke tools may be nascent, but they exist and are likely to become more popular and specialized. UnrealGT, a third-party synthetic image data plugin for Unreal Engine, was released in October 2019. In April 2021, spatial computing industry giant Unity somewhat quietly began promoting the use of its game engine for synthetic image data generation. NVIDIA released its enterprise Omniverse later that same year, enabling remote spatial collaboration for 3D design teams. These are the foundational tools an SD can use to bring the ‘perfect’ designs of any Order into a virtual representation of real-world circumstances.

Circumstantial Components of a Situation

The magic of this process is realized by the parameterization of environmental and situational factors that are otherwise very tedious to describe without procedural design: a rule-based system for describing 3D scenes. Situations may be influenced by (but are not limited to) the following factors; a minimal parameterization sketch follows the list:

  • Objects: objects of interest, structures, people, things

  • Materials: exterior material finishes, natural distress

  • Lighting: natural, synthetic, temperature, brightness, placement

  • Camera: lens type, focal length, aperture, depth-of-field, orientation

  • Occlusion: foreground and background elements that influence visibility of the objects of interest

  • Orientation: proximity, location, rotation, and scale of the objects

  • Fidelity: image dimensions, model complexity

  • Physics: gravity, weight, turbulence, rigidity, softness, fluidity
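As an engine-agnostic illustration, the sketch below samples one “situation” as a plain dictionary of parameters that a 3D tool (Blender, Unity, Omniverse, etc.) could apply before rendering a frame. All of the field names and ranges are illustrative assumptions, not values from any production pipeline:

```python
# Minimal sketch: sampling one parameterized "situation" per frame.
# Field names and ranges are illustrative assumptions.
import random

def sample_situation() -> dict:
    return {
        "lighting": {
            "type": random.choice(["natural", "synthetic"]),
            "temperature_K": random.uniform(2700, 6500),  # warm to daylight
            "brightness": random.uniform(0.2, 1.0),
        },
        "camera": {
            "focal_length_mm": random.choice([24, 35, 50]),
            "aperture_f": random.choice([1.8, 2.8, 4.0]),
            "orientation_deg": random.uniform(-15, 15),
        },
        "object": {  # the PPE of interest
            "distance_m": random.uniform(1.5, 8.0),
            "rotation_deg": random.uniform(0, 360),
            "wear_level": random.uniform(0.0, 0.6),       # natural distress
        },
        "occlusion_fraction": random.uniform(0.0, 0.4),   # portion hidden
    }

# Generate specs for 10,000 frames; a renderer would consume these.
situations = [sample_situation() for _ in range(10_000)]
```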

Reflecting on a Theoretical Situation

Consider the early prompt in this article: you need to use technology to intelligently detect the presence of PPE that is not yet in use. To do so, you need to take an inventory of the conditions that will influence the resulting image so that you can construct a 3D scene rooted in reality. You need to have your virtual worker in uniform, in a variety of shapes and sizes, in a virtual factory, with virtual things surrounding them. Then, it needs to be optically convincing. You are now acting as a very intentional and specific type of spatial studio artist — you are designing situations.

Output

When an SD decides to render their Situation, they’ll likely generate thousands of images that differ substantially from one frame to the next, though this depends on the frequency and amplitude of the parameters. If using a bespoke tool like Unity, the SD may also visualize the bounding boxes and label data in context.
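For a sense of what that label data looks like, a single rendered frame’s ground truth might be exported as a COCO-style record like the sketch below (the field names follow the COCO convention; the values and category are illustrative):

```python
# Sketch of a COCO-style annotation for one synthetic frame.
# Values and the "hard_hat" category are illustrative.
annotation = {
    "images": [
        {"id": 1, "file_name": "frame_000001.png", "width": 1920, "height": 1080}
    ],
    "annotations": [
        {
            "id": 1,
            "image_id": 1,
            "category_id": 1,
            "bbox": [742, 118, 96, 84],  # [x, y, width, height] in pixels
            "area": 8064,                # 96 * 84
            "iscrowd": 0,
        }
    ],
    "categories": [{"id": 1, "name": "hard_hat"}],
}
```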

Outcomes

Armed with a theoretical set of occurrences, captured frame by frame, machine learning engineers can use most of these images (some may be reserved for testing) to train a model that can understand a new reality. The applications range from the mundane, like ensuring the consumer packaged goods in the earlier example are placed according to their contracts with retailers, to the critical, like ensuring that aircraft inspections are being performed accurately.
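As a brief sketch, reserving that held-out testing portion is often a one-liner with a common utility like scikit-learn’s train_test_split (the frame list here is hypothetical):

```python
# Hold out 20% of the synthetic frames for testing (hypothetical file list).
from sklearn.model_selection import train_test_split

frames = [f"frame_{i:06d}.png" for i in range(10_000)]
train_files, test_files = train_test_split(frames, test_size=0.2, random_state=42)
print(len(train_files), len(test_files))  # 8000 2000
```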

Impact on Design

This may sound curiously similar to emerging tools for image generation like Stable Diffusion, DALL·E, or Midjourney. However, synthetic image data for training ML models needs to be consistent and specific — two requirements these tools struggle to meet.

This article proposes that Situation Design is an emerging branch of Design Technology. As in more common design scenarios, a Design Technologist (DT, sometimes referred to as a UI or UX Engineer) can translate a design into something more functional. For example, when building a digital product, a UX Designer may work with a DT to realize the proposed design as a prototype to gain a better understanding of its function before realizing it through engineering. This process of realization is just as true for a Situation Designer.

Designers who are interested in synthetic image data generation might consider expanding their skills into spatial design, machine learning, and procedural design. With the business interest in artificial intelligence in mind, becoming a Situation Designer might be a prescient move to consider sooner rather than later. Perhaps it’s time to design your next situation.


Caveats

This title and role description are a proposal for classification within the field of design, but there may be nuances and existing roles that overlap. Please feel free to mention them in the comments.

Additionally, it is worth noting that learning does not always require a high volume of data. Depending on the purpose of the model, methods like zero-shot, one-shot, and few-shot learning can achieve good results and would likely not require a Situation Designer. It’s also worth noting that models trained with fully synthetic image data may leave some precision to be desired and may not always produce acceptable confidence scores.

Again, this is just a proposal. I would love to know if this resonates, if I articulated something incorrectly, alternate or adjacent roles, etc. I would especially like to hear from readers who may already be doing this type of work. Thank you for reading.


© 2025 Ian Latchmansingh