So @Stephers alerted me to Joshua Epstein and his work in modeling artificial societies. The system they developed was called - oddly - Sugarscape. He wrote about it in “Growing Artificial Societies,” a book published by MIT Press with the Brookings Institution in 1996. It is now widely used to model social systems.

It is interesting to me that Joshua Epstein was embedded at Brookings - the same folks who are now promoting the interspecies currency that @leo brought to our attention. Note the Voronoi pattern on the coat of the giraffe.

JL: "Interspecies money is made possible today because of the collapse in the cost of gathering data in the wild thanks to low-cost sensors, drones, and other robots, near space observation, and especially eDNA sampling, coupled with huge strides in artificial intelligence that can interpret this data. Taken together, this combination permits us to base the allocation of interspecies money on actual conservation results. Face recognition of primates exceeds 95 percent accuracy, often advanced by algorithms used for recognizing individual pigs and sheep. A secure digital identity based on the distinctive markings of animals is well advanced as is the remote identification of whales at sea.

Once you can reliably identify individual species in their habitats, conservationists can self-organize around the new digital currency to greatly extend their knowledge. For example, the International Barcode of Life wants to identify a further 2 million species in the next decade. Observation of new species can be rewarded in L-marks exchangeable for U.S. dollars. Instead of mining numbers, as in Bitcoin, they will be mining life discovery."

For me, that appears to add further levels of sophistication to the project of making social interactions visible.

I also found a paper this morning on social science modeling that I was trying to link up with Ervin Laszlo’s assertions around hypercycles and new evolutionary phases. To me it seems they are trying to leverage simulation technologies and behavioral control / mind control to push the hive mind program.

“Voronoi patterns are used as a means to explain tangible natural phenomena and non-tangible social phenomena.”

Everything in this visible world is woven by invisible threads of geometric forces. This belief stirs curiosity in architects and planners, prompting a parametric enquiry into form-finding, governed by nature’s laws, which connect both the visible and the invisible.

The discovery of sacred geometry and the golden section gave way to a different approach to architecture and planning. These brought extra precision to mimicking the geometries of nature in pursuit of sustainable, eco-effective solutions, an approach called bionic architecture, which in the latest trend is called parametric architecture: everything is a parameter affecting the evolution of form, deriving not one possibility but many. One such observable geometric pattern in nature is the Voronoi pattern. It explains certain geometries observed in nature: in visible things, which include leaves, the wings of a dragonfly, the carapace of a turtle, honeycomb, skin, and so on; and in invisible things, which include personal space, service areas, economic areas of similarity, and so on. These observations increase the curiosity to use it as a tool to attain sustainability in architecture and planning.

Sustainability is an impact network. Today people are alert to resource depletion and energy usage, as they are conscious of their survival, individually and socially. So, when society becomes sustainable, the individual will too, and vice versa.

About the Voronoi Pattern

It is an algorithm used to divide a multi-dimensional space into sub-spaces around central nodes, with bisecting vectors defining the boundaries of the sub-spaces. It is used for representing natural phenomena through multidimensional geometric modelling, and has been applied to both structural and spatial derivations. The earliest dated use of a Voronoi diagram is from 1644, by René Descartes. In 1850, Lejeune Dirichlet used 2D and 3D Voronoi charts in his studies of quadratic forms. In 1854, the British doctor John Snow used one to represent the distribution of cholera deaths. Georgy Voronoi conducted research on n-dimensional shapes and formally defined them in 1908. The construction is also known as a Voronoi tessellation, Voronoi decomposition, or Dirichlet tessellation.
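The nearest-node rule described above can be sketched in a few lines of Python. This is a minimal discrete illustration, not anything from the source; the grid size and seed coordinates are my own arbitrary values:

```python
# Assign each cell of a grid to its nearest "seed" point, yielding a
# discrete Voronoi partition: the set of cells sharing an index forms
# that seed's region.
import math

def voronoi_partition(seeds, width, height):
    """Return a grid where each cell holds the index of its nearest seed."""
    grid = []
    for y in range(height):
        row = []
        for x in range(width):
            nearest = min(range(len(seeds)),
                          key=lambda i: math.dist((x, y), seeds[i]))
            row.append(nearest)
        grid.append(row)
    return grid

seeds = [(2, 2), (7, 3), (4, 8)]
grid = voronoi_partition(seeds, 10, 10)
# Each seed lies inside its own region (distance zero to itself).
for i, (sx, sy) in enumerate(seeds):
    assert grid[sy][sx] == i
```

With more seeds and a finer grid, the same rule produces the familiar cell-like mosaic seen on giraffe coats and cracked mud.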

It is an organizational phenomenon that sometimes named as ‘nature’s rule’.

The task of architecture is to create intelligent spaces. Voronoi patterns can be modelled by the use of parameters to build spatial relationships.

It is the invisible made visible.

1.3. NetLab by G_nome, London, 2006

This is a 16-month research project on parametric architecture, using algorithms to generate a variety of solution forms in real contexts and to redefine the architect’s role by integrating design, analysis, and production into a single formal process, one that is iterative, with feedback, adjustment, and optimization of the design. Voronoi algorithms are used to express different social systems, scales, and needs of the users.

So, this morning I woke up with this strong thought that I needed to revisit the Hanford nuclear site and the metabolic ecosystem work done there around fish (salmon) by Odum / ORNL. Also, PNW salmon were among the first genetically modified organisms.

" Understanding the Effects of Water Temperature at Hanford

Tracking radioactive contaminants wasn’t the only way DOE spurred the discipline of ecology. Like ORNL, the Hanford site combined a unique ecosystem and human influence. Its nine Cold War nuclear reactors released massive quantities of heated water into the Columbia River.

“We had the world’s greatest source of heated water going into any body of water,” said Charles Coutant, who worked as an ecologist at both ORNL and the Hanford site in Washington State (which evolved in part into PNNL).

Hanford scientists conducted comprehensive studies on how changes in temperature affect aquatic ecosystems. While the heated water didn’t appear to directly influence salmon spawning or mortality, scientists found affected salmon were more likely to be eaten by predators. These results demonstrated how changes to ecosystems often have complex and unpredictable effects."

On agent-based simulations and Voronoi patterns - and spatial analysis (as well as complexity and virtual environments) . . .

The characteristics of the emerging form, however, lie in the complexity of how shifted spaces and parts are fitted together, rather than in a radically emergent overall geometry. Spatially as well as structurally, the form moves from a simple, modular, repetitive system towards a more complex adaptive one, with interconnected parts which cannot stand alone but rather form an organic whole.

The design of architecture is the design of a highly complex, organised system. This research investigates the automation of various aspects of this process with the aid of machine learning and optimisation algorithms. The hypothesis is that knowledge gained through simulation of a system’s behaviour allows the computer to make the kinds of judgements or decisions that can be used to guide the design. Two situations are being studied: the manufacture of highly detailed space frame structures with specific load bearing or dynamic properties, and the planning of spaces to reflect given social relationships as quantified by space syntax techniques.

I previously revealed the work of mathematician and spatial analyst Dr. Hannah Fry at POM:

From my post in October 2021:

As noted in the quote above, the virtual pandemic was depicted by mathematician Dr. Hannah Fry in the 2018 documentary, “Contagion! The BBC Four Pandemic,” which can be viewed in its entirety here (1 hour, 14 minutes).

Essentially, the experiment was focused on building an interactive platform to monitor the “spread” of the “virus” throughout the village of Haslemere. This digital spatial dataset and data extraction came to be known as the “Haslemere social network,” that has subsequently been re-purposed as an interactive mathematical model pertaining to “SARS-CoV-2.”

What I find most striking about this COVID chicanery is what was revealed in this March 2020 article: “Haslemere is at the centre of the latest coronavirus outbreak in Britain – but some have teased that the affluent town should be ‘twinned with Wuhan’ (my emphasis).” POM readers may be familiar with my writing on digital twins, as well as the Sentient World Simulation (SWS). I remind readers that this Haslemere “coincidence” phenomenon was based on the narrative of a “digital experiment” that was conducted in Haslemere.

As computer algorithms increasingly control and decide our future, ‘Hello World’ is a reminder of a moment of dialogue between human and machine. Of an instant where the boundary between controller and controlled is virtually imperceptible. It marks the start of a partnership – a shared journey of possibilities, where one cannot exist without the other.* (my emphasis)

Is Hannah Fry a cryptographer decoding the very core basis of human behavior (via predictive modeling) – aiming to control humans – as neural networking bridges are engineered to sync humans with machines (with precision)? Hannah Fry | Mathematics | Innovation | Chartwell Speakers

So, I’m looking at this and wondering if these tessellated “crystalline” constructs are supposed to represent us as “agents” in the simulation. Digital twins. Is that what social media really is - tagging and refining “influence space?”

I am deeply appreciative for your kind sentiments, and for paying close attention here.

. . . Now, onto this generous offering of yours – RE: Joshua Epstein in March 2013 (in the context of this colorful “spike protein” involving digital simulation/predictive programming/computer modeling of pandemics/vaccine uptake):

We literally build artificial societies populated by cyber people on computers… We use these to project epidemics – infectious diseases like pandemic flu, and to design containment strategies like optimal vaccine distribution schemes or non-pharmaceutical interventions like school closures and other ways to contain these diseases… The modeling informs preparedness by identifying approaches that are effective, efficient, and equitable… You need the drugs where the bug will be, and that needs targeting and careful consideration of those communities at greatest risk… I think there’s been a widespread assumption that if a vaccine is available, everyone will dutifully take it. We know that’s not true from surveys of populations… I think…that trust is…very badly eroded and needs to be restored… Put the word public back into public trust… It’s important to get this back on the table, and to begin having the federal government provide antiviral drugs, careful communication of risks, and advice about preparedness.

In any case, Joshua Epstein is referenced often in the Swarm Intelligence book – for good reason. Here is a short bio: Joshua Epstein | Santa Fe Institute.

Joshua Epstein is Professor of Epidemiology in the NYU College of Global Public Health, and founding Director of the NYU Agent-Based Modeling Laboratory, with affiliated appointments at The Courant Institute of Mathematical Sciences and the College of Arts & Sciences. Prior to joining NYU, he was Professor of Emergency Medicine at Johns Hopkins, and Director of the Center for Advanced Modeling in the Social, Behavioral, and Health Sciences, with joint appointments in Economics, Applied Mathematics, International Health, and Biostatistics. Before that, he was Senior Fellow in Economic Studies at the Brookings Institution and Director of the Center on Social and Economic Dynamics. His research interest has been modeling complex social dynamics using mathematical and computational methods, notably the method of agent-based modeling, of which he is a recognized pioneer. For this transformative innovation, he was awarded the NIH Director’s Pioneer Award in 2008 and an Honorary Doctorate of Science from Amherst College in 2010, and was elected to Sigma Xi in 2018. He has applied this method to the study of infectious diseases (e.g., Ebola, pandemic influenza, and smallpox), vector-borne diseases (e.g., Zika), urban disaster preparedness, contagious violence, the evolution of norms, economic dynamics, computational archaeology, and the emergence of social classes, among many other topics. (my emphasis – I hope you can see the potential relevance to COVID and spike protein, and can place it in the context of my June 2020 post)

Scot listed about twenty fields in which Voronoi diagrams are in common use (although often not by that name).

Anthropology and Archeology – Identify the parts of a region under the influence of different neolithic clans, chiefdoms, ceremonial centers, or hill forts.

Astronomy – Identify clusters of stars and clusters of galaxies (Here we saw what may be the earliest picture of a Voronoi diagram, drawn by Descartes in 1644, where the regions described the gravitational influence of the sun and other stars.) @leo

Biology, Ecology, Forestry – Model and analyze plant competition ("Area potentially available to a tree", “Plant polygons”)

Cartography – Piece together satellite photographs into large “mosaic” maps

Crystallography and Chemistry – Study chemical properties of metallic sodium (“Wigner-Seitz regions”); Modelling alloy structures as sphere packings (“Domain of an atom”)

Finite Element Analysis – Generating finite element meshes which avoid small angles

Geography – Analyzing patterns of urban settlements

Geology – Estimation of ore reserves in a deposit using information obtained from bore holes; modelling crack patterns in basalt due to contraction on cooling

Geometric Modeling – Finding “good” triangulations of 3D surfaces

Marketing – Model market of US metropolitan areas; market area extending down to individual retail stores

Mathematics – Study of positive definite quadratic forms (“Dirichlet tessellation”, “Voronoi diagram”)

Metallurgy – Modelling “grain growth” in metal films

Meteorology – Estimate regional rainfall averages, given data at discrete rain gauges (“Thiessen polygons”)

Pattern Recognition – Find simple descriptors for shapes that extract 1D characterizations from 2D shapes (“Medial axis” or “skeleton” of a contour)

Physiology – Analysis of capillary distribution in cross-sections of muscle tissue to compute oxygen transport (“Capillary domains”)

Robotics – Path planning in the presence of obstacles

Statistics and Data Analysis – Analyze statistical clustering (“Natural neighbors” interpolation)

Zoology – Model and analyze the territories of animals

Abstract: First I will describe a class of networks with geometries based on a particular generalization of a Voronoi diagram. In [1710.04509] The cosmic spiderweb: equivalence of cosmic, architectural, and origami tessellations, we discussed a few networks in this class: the cosmic web (the large-scale spatial arrangement of matter in the universe); spiderwebs, structural-engineering networks that can be strung up to be either entirely in tension (like a spiderweb) or entirely in compression (like a tree); and origami tessellations. Other networks likely fall into this class, and I would love to hear what some SFI members think.

I had recalled seeing some distinct photos when I was investigating the ongoing (draining) Lake Mead story. Initially, I did not know what they actually signified. I now recognize that the story is very clearly encoding the Voronoi pattern:

Alexei Andreanov (Institute for Basic Science, Korea)

Abstract. Sphere packing is an old (optimization) problem at the intersection of mathematics, physics, and computer science. The formulation of the problem is very simple — what is the densest arrangement of spheres in a given Euclidean dimension? Despite that, the solution proved to be very hard to find, even approximately. The proofs for d=2,3 were only established in the 20th century, while in higher dimensions our knowledge remains limited despite over a hundred years of research. The complexity of the problem originates from its combinatorial optimization nature and the failure of our intuition in high dimensions. While there is a large body of results from (pure) mathematics, I am going to present some new results in lattice sphere packing based on a theory by Voronoi and methods of statistical mechanics. I will also discuss the decorrelation principle, a recent conjecture that has important implications for the problem of packing.
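For reference, the solved d=2 and d=3 cases the abstract mentions have known optimal densities (hexagonal packing in the plane, face-centered cubic packing in space), which can be computed directly. This is just a numeric side note, not part of the talk:

```python
# Optimal sphere-packing densities in two and three dimensions:
#   d = 2: hexagonal packing,        pi / (2 * sqrt(3))
#   d = 3: face-centered cubic,      pi / (3 * sqrt(2))
import math

density_2d = math.pi / (2 * math.sqrt(3))
density_3d = math.pi / (3 * math.sqrt(2))

print(round(density_2d, 4))  # 0.9069
print(round(density_3d, 4))  # 0.7405
```

So circles can fill about 90.7% of the plane, while spheres fill only about 74% of space; the fraction keeps falling as the dimension rises, which is part of why the high-dimensional problem is so hard.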

Purpose:
Research Collaboration SFI Host: Yoav Kallus

Until 2017, I was an Omidyar Fellow at the Santa Fe Institute in Santa Fe, New Mexico. My research was in statistical physics, disordered, nonlinear, and adaptive systems, discrete geometry, packing problems, and soft matter. As of 2017, I am a quantitative research associate at Susquehanna International Group.

The earliest crystal structure models (a perhaps generous description) explained growth in a rudimentary way. In the 17th century, Johannes Kepler in Austria [4] and Robert Hooke [5] in England imagined crystals as packings of identical spheres and showed how their polyhedral shapes might arise therefrom. Two centuries later, the French abbé R. J. Haüy [6] stacked congruent bricks in various ways to show why and how crystals of the same species could have different external forms. Pyrite crystals, for example, grow as cubes, as octahedra, and sometimes as irregular pentagonal dodecahedra or “pyritohedra”.

Voronoi diagrams also have many applications across disciplines. These applications are not limited to the algorithm itself: some examples take a particular part of the Voronoi diagram (e.g., the Voronoi pattern), while others extend beyond two-dimensional support (3-D cell modeling).

o In the natural science field, Voronoi diagrams are used to model biological structures including cells and atom architectures.

o Voronoi diagrams are utilized in ecology to study the growth of forests and to predict forest wildfires.

o Voronoi diagrams are used to generate adaptive smoothing zones on images and to add fluxes to each one. This helps to produce a constant signal-to-noise ratio across all images, and is mainly used in the field of astrophysics[3].

o Voronoi diagrams are used in computational physics to calculate profiles of an object with Shadowgraph and proton radiography in High energy density physics[3].

o In mining, Voronoi diagrams can be used to estimate the mining value and to plan drillholes, with the drillholes acting as the site points in the diagram.

o In networking, Voronoi diagrams can be used to derive the capacity of wireless networks[3].

o Voronoi diagrams can be used to study insect or animal territorial behaviors. Notably, the bark beetle is known to have a feeding pattern similar to Voronoi regions, which prevents intra-species competition for the same resource[1].

The applications listed above are only a glimpse of the Voronoi diagram’s capability. The algorithm has the potential to be applied in science, engineering, art, or even the humanities. With the help of computer modeling, we are able to take the Voronoi diagram into multiple dimensions and further expand its usage in the near future.

Although we have already discovered so many areas in which Voronoi diagrams can be used, there is still potential in this algorithm yet to be explored: the use of the Voronoi diagram in information science or even cryptology; the investigation of the efficiency of different construction methods and reverse-engineering processes; the reason why the Voronoi pattern is considered beautiful and is implemented in countless architectures. Overall, the Voronoi diagram is a simple yet complicated concept that excels in its versatility of usage, with enormous embedded value waiting to be unfolded.

Voro++ is written and maintained by Chris H. Rycroft, a visiting assistant professor in mathematics at UC Berkeley and the Lawrence Berkeley Laboratory. A preliminary version of the code was written at MIT during 2005–2006 to carry out local packing fraction computations during doctoral thesis work on flow in granular materials. After receiving feedback from a number of different research groups, the code was completely rewritten and extended during 2007–2008, to allow it to be publicly released during Summer 2008.

Just to backtrack ~ for some context on interrogating Voronoi patterns (in relation to Web3, simulations, black swan events, extreme weather events, parametric insurance, gaming/game theory, etc etc etc) . . .

Event Detection and Characterization in Big Data Streams

{Alternatively, a different approach may make use of an automated algorithm that constructs boundaries between clusters of points according to a clustering scheme, augmented by a Voronoi space partition algorithm based upon points at the centers of those clusters…}
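As a rough illustration of the bracketed idea, the cluster centers induce a Voronoi partition, and new points are classified by the nearest center. This is my own hedged sketch; the sample points, labels, and function names are illustrative and not taken from the patent:

```python
# Boundaries between clusters induced by a Voronoi partition around
# each cluster's centroid: a point belongs to whichever centroid is nearest.
import math

def centroids(points, labels):
    """Mean position of each labeled cluster."""
    sums, counts = {}, {}
    for (x, y), lab in zip(points, labels):
        sx, sy = sums.get(lab, (0.0, 0.0))
        sums[lab] = (sx + x, sy + y)
        counts[lab] = counts.get(lab, 0) + 1
    return {lab: (sx / counts[lab], sy / counts[lab])
            for lab, (sx, sy) in sums.items()}

def classify(point, centers):
    """Assign a point to the cluster whose center is nearest (Voronoi rule)."""
    return min(centers, key=lambda lab: math.dist(point, centers[lab]))

points = [(0, 0), (1, 0), (0, 1), (9, 9), (10, 9), (9, 10)]
labels = ["a", "a", "a", "b", "b", "b"]
centers = centroids(points, labels)
# A new observation near the origin falls in cluster "a"'s Voronoi cell.
print(classify((2, 1), centers))  # a
```

The boundary between the two regions is exactly the perpendicular bisector of the two centroids, which is what makes this a Voronoi space partition rather than an arbitrary clustering boundary.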

See additional excerpts below:

Embodiments of the invention significantly overcome such deficiencies, and provide mechanisms and techniques whereby signatures of different types of events are extracted and then applied to the determination of the types of ongoing behavior of a system, or multiple systems. The types of events that can be characterized include but are not limited to adverse events, anomalous events, vulnerabilities, failures of a system, and security breaches.

Each vector of the plurality of vectors may correspond to an individual in a population, each dimension of the plurality of dimensions may represent a corresponding health property of a plurality of health properties, and, for each reduced vector, the value of each reduced dimension may be a combined measure determined from two or more of the plurality of dimensions of the corresponding vector.

In another embodiment, the present disclosure provides a system including a computing device configured to obtain a vector space defined by a plurality of reduced vectors, each reduced vector being a dimensional reduction of a corresponding vector of a plurality of vectors, the corresponding vector containing data from a data stream. The system further includes a partitioning module installed on the computing device and configuring the computing device to partition the vector space into a plurality of regions, each region containing a subset of the reduced vectors and being associated with a characteristic determined from the data in the one or more vectors that correspond to the one or more reduced vectors in the subset of the region. The partitioning module may use a pre-set criterion or a pre-set algorithm to partition the vector space. The system may further include a dimensional reduction module installed on the computing device and configuring the computing device to map, with a dimensional reduction algorithm, each vector to a corresponding reduced vector of the plurality of reduced vectors to generate the vector space and a set of reduced dimensions for the vector space and the plurality of reduced vectors. The dimensional reduction algorithm may be selected from the group comprising: aggregation, correlation, multidimensional scaling, principal component analysis, Sammon map, clustering, projection onto a subspace, self-organizing map, and multiscale analysis.
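Of the dimensional reduction algorithms listed in the excerpt, principal component analysis is the easiest to sketch: high-dimensional vectors are projected onto the directions of greatest variance. The random data matrix below is purely illustrative, standing in for the patent's "plurality of vectors":

```python
# Principal component analysis via SVD: project rows of X onto the
# top-k directions of greatest variance.
import numpy as np

def pca_reduce(X, k):
    """Return the rows of X expressed in their top-k principal components."""
    Xc = X - X.mean(axis=0)                        # center the data
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                           # reduced coordinates

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))     # 100 "vectors", 10 "dimensions"
reduced = pca_reduce(X, 2)
print(reduced.shape)  # (100, 2)
```

Each 10-dimensional vector becomes a point in a 2-dimensional space, which could then be partitioned into regions, e.g. by the Voronoi scheme described earlier.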

The invention may be described herein in terms of functional and/or logical block components and various processing steps. It should be appreciated that such block components may be realized by any number of hardware, software, and/or firmware components configured to perform the specified functions. For example, an embodiment may employ various integrated circuit components, e.g., memory elements, digital signal processing elements, logic elements, diodes, look-up tables, etc., which may carry out a variety of functions under the control of one or more microprocessors or other control devices. Other embodiments may employ program code, or code in combination with other circuit components.

In accordance with the practices of persons skilled in the art of computer programming, the present disclosure may be described herein with reference to symbolic representations of operations that may be performed by various computing components, modules, or devices. Such operations may be referred to as being computer-executed, computerized, software-implemented, or computer-implemented. It will be appreciated that operations that can be symbolically represented include the manipulation by the various microprocessor devices of electrical signals representing data bits at memory locations in the system memory, as well as other processing of signals. The memory locations where data bits are maintained are physical locations that have particular electrical, magnetic, optical, or organic properties corresponding to the data bits.

As non-limiting examples unless specifically indicated, any database or data store described herein may comprise a local database, online database, desktop database, server-side database, relational database, hierarchical database, network database, object database, object-relational database, associative database, concept-oriented database, entity-attribute-value database, multi-dimensional database, semi-structured database, star schema database, XML database, file, collection of files, spreadsheet, or other means of data storage located on a computer, client, server, or any other storage device known in the art or developed in the future. File systems for file or database storage may be any file system, including without limitation disk or shared disk, flash, tape, database, transactional, and network file systems, using UNIX, Linux, Mac OS X, Windows FAT or NTFS, FreeBSD, or any other operating system.

The approach of the invention to determining signatures makes use of a high-level subdivision neuromorphic architecture to provide event detection and characterization. The system performing the detection of events is optimized to characterize the difference between types of events based upon dimensional reduction to partition the behavior of the system without a predetermined definition of those types, including such categories as normal and abnormal events. (In the prior art, we note, for example, that U.S. Pat. No. 6,177,885 makes essential use of an “established set” of parameters for normal and abnormal traffic patterns to detect anomalies in traffic, which the current approach does not rely upon as a general rule, though the present method may be augmented by special knowledge such as established parameters or expert knowledge where it is available and useful to do so.) Moreover, unlike the prior art of general unsupervised learning algorithms that partition pre-specified data sets, the present systems and methods partition the low-dimensional space itself, so as to enable characterization of events that take place in the future as well as intermediate cases between normal and adverse that enable characterizing vulnerability and provide information about how to change the system to prevent adverse events. In each case, characterization does not require prior events that are very similar to the new event.

The constructed Specific Methods are particularly useful, and may be optimized, for detecting large-scale events that affect the system as a whole. This approach recognizes that events that affect a small part of the system are not generally of interest to the function, health, and well-being of the system as a whole. In order for an adverse event to be of great importance, it must typically affect more than a small part of the system. The focus is therefore on identifying the collective behaviors of the system that reflect vulnerabilities and adverse events. This is not exclusive of the possibility of detection of smaller-scale events, but the approach is particularly well suited to detection of large-scale events.

In another embodiment of the invention, the data stream contains at least one of several types of data or metadata, including but not limited to internet-based server activity, computing device activity, health-related indicators of individuals, physician or hospital medical visits of multiple individuals, power transmission levels in the power grid, multiple infrastructure sensors, multiple sensors associated with an industrial process, multiple sensors connected to an urban environment, social media, telephone communications, and internet communications.

In a specific embodiment of the invention the Specific Method identifies the sentiment of messages among individuals or as part of social media communications.

Embodiments of the invention determine distinct attributes of human messages, including affective ones.

In another embodiment of the method, the classified messages are aggregated and studied in space and time, or other categories, to characterize the emotional attributes of a population.

Example Embodiment: Insurance

Embodiments of the presently described General Method and Specific Method include the analysis of insurance risks for underwriters. The quantification of risk and risk categories for individuals, corporations, and other entities, for various forms of harm, injury, accident, death, financial loss, default on debt, or other adverse events, can be considered to be determined by a large number of parameters. While traditionally indicators of risk are obtained from survey and other specific data items, interest in big data analysis of risk has led to approaches to characterize risk from other forms of data, including mobility data.

The Specific Method obtained from applying the General Method in this case can be a new insurance measure system that integrates many of the usual tests as well as new types of data into combined measures that may better reflect risk. An individual’s condition at a given time is represented as a point in high-dimensional space. Dimensional reduction analysis enables us to represent this high-dimensional data in a much smaller number of dimensions that best capture the variation across individuals and appointments. Where a point lies in the resultant low-dimensional space is determined by an individual’s combined measure values at a given time.

Different areas of the combined measures tend to have different levels of risk, and may be associated with different types of events. Dimensional reduction analysis is used to identify the combinations of measures that best capture the variation in the population and how individual properties co-vary in conditions of risks across the population and over time.

Historical data and human curation can be used to identify the regions of space associated with different levels of risk. Counter to the conventional approach of using correlations, the current analysis uses patterns of co-occurring variation in multiple data elements to identify risk or risk categories. Following the dynamics of the values in this space points to the existence of changes in risk associated with individuals, corporations, or other potential or insured entities.

As a complex systems practitioner, Yaneer uses an innovative discipline of subterranean mathematical calculation to explain and predict global trends, such as food scarcity, ethnic cleansing, evolutionary biology, election outcomes, and pandemics, including COVID-19.

“It’s the narrative of math. It’s all rooted in an understanding of multiscale analysis,” says Yaneer, who turned to Nobel Prize Laureate mathematician Ken Wilson’s 1970s principles of “renormalization group” to carry out his examination of complex systems.

“Everybody studies in school how to use math. You have a set of equations, or you have an assumption. What can you prove? It’s set theory geometry,” says Yaneer. “When you know math really well, that’s really easy. The challenging part is making sure you have the right assumptions. That’s the only thing that really matters. If you have the right assumption you can get the right conclusion.”

Redoubling his efforts to make a tangible difference through applied science, Yaneer founded the New England Complex Systems Institute (NECSI) in 1996. The institute is a multidisciplinary consortium of top academics who apply systems modeling and research to solve real-world problems.

Yaneer has attracted an illustrious community of international experts in complex science and related fields from Harvard, MIT, Yale and elsewhere working together to solve problems through applied science and modelling.

Concurrent with my interrogation of Yaneer and his associates at NECSI (their complexity work echoes the work at the Santa Fe Institute) – and identifying the Voronoi space partition algorithm (for use in predicting events and human behaviors, along with human sentiment analysis), Alison had inquired about the significance of the SEA URCHIN. Henry Muller - Research, X-Rays, Intentional Mutations - #7 by Stephers

Both of these inquiries – essentially, occurring simultaneously – led me to examining the Voronoi pattern as a gestalt potentially encompassing nearly every sector of society.

I get the sense that our exploration of the Voronoi and its applications to W0RLDBU1LD1NG is just beginning (paralleling our collaborative examination of MAGENTA which preceded this, as the two frequently seem to present in tandem).

In a Voronoi pattern, every point within a given region is closer to the “seed” inside that region than it is to any other point outside that region. Each point along a region’s edge is equidistant from the two nearest seeds. It’s seen in places ranging from cracked mud to giraffe skin to foamy bubbles. Voronoi patterns can help solve geometric problems like packing, strategic placements and patterns of growth.

The following article was one of my first introductory reads to Voronoi patterns, which I found to be very helpful:

Cholera outbreaks due to public water pumps. Suburbs serviced by hospitals. Formation of crystals. Coverage regions of phone towers. We can model or approximate all these phenomena and many, many more with a geometric structure called, among other names, a Voronoi tessellation.

The main other name for this object is the Dirichlet tessellation. Historically, Dirichlet beats Voronoi, but it seems wherever I look, the name Voronoi usually wins out, suggesting an example of Stigler’s law of eponymy. A notable exception is the R library spatstat that does actually call it a Dirichlet tessellation. Wikipedia calls it a Voronoi diagram. I’ve read that Descartes studied the object even earlier than Dirichlet, but Voronoi studied it in much more depth. At any rate, I will call it a Voronoi tessellation.

To form a Voronoi tessellation, consider a collection of points scattered on some space, like the plane, where it’s easier to picture things, especially when using a Euclidean metric. Now for each point in the collection, consider the surrounding region that is closer to that point than any other point in the collection. Each region forms a cell corresponding to the point. The union of all the sets covers the underlying space. That union of sets is the Voronoi tessellation.
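The defining property (each cell is the set of points nearest to its seed) can be demonstrated with a brute-force nearest-seed search. This is a toy sketch of the definition, not an efficient tessellation algorithm; the seed coordinates are arbitrary:

```python
import numpy as np

# Three seeds in the plane; each Voronoi cell is "everything
# closer to this seed than to any other" (Euclidean metric).
seeds = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])

def voronoi_cell(p, seeds):
    """Index of the seed whose cell contains point p."""
    distances = np.linalg.norm(seeds - p, axis=1)
    return int(np.argmin(distances))

print(voronoi_cell(np.array([0.1, 0.2]), seeds))   # 0: closest to (0, 0)
print(voronoi_cell(np.array([0.9, 0.1]), seeds))   # 1: closest to (1, 0)

# A point on a cell boundary is equidistant from the two nearest seeds:
p = np.array([0.5, 0.0])
print(np.linalg.norm(seeds[0] - p) == np.linalg.norm(seeds[1] - p))  # True
```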

The evolution of Voronoi cells, which start off as disks until they collide with each other. Source: Wikipedia.

Mathematicians have extensively studied Voronoi tessellations, particularly those based on Poisson point processes, forming a core subject in the field of stochastic geometry.

## Everyday Voronoi tessellations

Voronoi tessellations are not just interesting mathematical objects; they arise in everyday situations. This piece from the Scientific American website explains:

> Everyone uses Voronoi tessellations, even without realizing it. Individuals seeking the nearest café, urban planners determining service area for hospitals, and regional planners outlining school districts all consider Voronoi tessellations. Each café, school, or hospital is a site from which a Voronoi tessellation is generated. The cells represent ideal service areas for individual businesses, schools, or hospitals to minimize clientele transit time. Coffee drinkers, patients, or students inside a service area (that is, a cell) should live closer to their own café, hospital, or school (that is, their own cell’s site) than any other. Voronoi tessellations are ubiquitous, yet often invisible.

## Delaunay triangulation

A closely related object to the Voronoi tessellation is the Delaunay triangulation. For a given collection of points on some underlying mathematical space, a Delaunay triangulation is formed by connecting the points into triangles such that no point in the collection lies inside the circumcircle of any triangle. (A circumcircle is a circle that passes through all three vertices of a triangle.)
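The empty-circumcircle condition can be checked directly for a small point set. The following is an illustrative sketch (the helper functions are my own, not from any particular library):

```python
import numpy as np

def circumcircle(a, b, c):
    """Centre and radius of the circle through triangle vertices a, b, c."""
    ax, ay = a; bx, by = b; cx, cy = c
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    centre = np.array([ux, uy])
    return centre, np.linalg.norm(centre - np.array(a))

def is_delaunay(tri, points):
    """Empty-circumcircle test: no other point lies strictly inside."""
    centre, r = circumcircle(*tri)
    others = [p for p in points
              if not any(np.allclose(p, v) for v in tri)]
    return all(np.linalg.norm(np.array(p) - centre) >= r - 1e-9
               for p in others)

pts = [(0, 0), (2, 0), (1, 2), (1, 0.5)]
print(is_delaunay([(0, 0), (2, 0), (1, 2)], pts))    # False: (1, 0.5) inside
print(is_delaunay([(0, 0), (2, 0), (1, 0.5)], pts))  # True: circumcircle empty
```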

An example of Delaunay triangulation with the original points in black and centres (in red) of the corresponding circumcircles (in grey) of the Delaunay triangles. Source: Wikipedia.

The Delaunay triangulation and the Voronoi tessellation both form graphs, which turn out to be the dual graphs of each other.

A Delaunay triangulation (in black) and the corresponding Voronoi tessellation (in red) whose vertices are the centres of the circumcircles of the Delaunay triangles. Source: Wikipedia.

SugarScape is an interesting model developed by Joshua M. Epstein and Robert Axtell. It simulates the dynamics of multiple agents interacting with each other in a space with a limited resource (sugar). The simulation of the interactions between agents and the evolution of the individual agents are the main components in SugarScape and other similar systems. Many algorithms in artificial intelligence also have these simulation and evolution features.
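A drastically simplified Sugarscape-style step might look like the following Python sketch. To be clear, the grid size, agent counts, and rules here are my own toy assumptions, not Epstein and Axtell's actual rule set or parameters:

```python
import random

random.seed(1)
SIZE = 10

# Sugar landscape and agents; all numbers are illustrative.
sugar = [[random.randint(0, 4) for _ in range(SIZE)] for _ in range(SIZE)]
agents = [{"x": random.randrange(SIZE), "y": random.randrange(SIZE),
           "wealth": 5, "metabolism": random.randint(1, 3)}
          for _ in range(20)]

def step():
    """One tick: each agent moves to the sweetest neighbouring cell,
    harvests it, pays its metabolism; sugar regrows; the starving die."""
    global agents
    for a in agents:
        moves = [((a["x"] + dx) % SIZE, (a["y"] + dy) % SIZE)
                 for dx, dy in ((0, 1), (0, -1), (1, 0), (-1, 0))]
        a["x"], a["y"] = max(moves, key=lambda m: sugar[m[0]][m[1]])
        a["wealth"] += sugar[a["x"]][a["y"]] - a["metabolism"]
        sugar[a["x"]][a["y"]] = 0          # cell is harvested bare
    for row in sugar:                      # sugar regrows one unit per tick
        for i in range(SIZE):
            row[i] = min(row[i] + 1, 4)
    agents = [a for a in agents if a["wealth"] > 0]

for _ in range(30):
    step()
print(len(agents))   # surviving agents after 30 ticks
```

Even this toy version shows the basic Sugarscape dynamic: agents with low metabolism in sugar-rich areas accumulate wealth while others starve out.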

Given that Python coding is referenced at this site, it is worth noting that Python libraries can generate Voronoi tessellations:
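For example, assuming SciPy is available, its `scipy.spatial.Voronoi` class computes a tessellation from a set of seed points:

```python
from scipy.spatial import Voronoi

# Five seed points: the corners of a unit square plus its centre.
points = [[0, 0], [1, 0], [0, 1], [1, 1], [0.5, 0.5]]
vor = Voronoi(points)

print(len(vor.point_region))   # 5: one Voronoi region per seed point
print(vor.vertices)            # Voronoi vertices, where cells meet
```

SciPy also provides `scipy.spatial.voronoi_plot_2d` for visualising the resulting diagram.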

From that same Sugarscape site (also circling back to Alison’s initial frame of reference):

Conway’s game of life

This is a small project for the purpose of learning d3.js. Conway’s Game of Life is a simple simulation on a 2D grid which can demonstrate rich dynamics. Each grid point (or cell) has a status of live or dead. There are four intuitive rules to update the status of each cell:

1. Lonely cells will die (fewer than two live neighboring cells).
2. Crowded cells will die (more than three live neighbors).
3. Other live cells will live on to the next generation.
4. A dead cell will become alive if it has exactly three live neighbors.

The initial pattern constitutes the seed of the system. The first generation is created by applying the above rules simultaneously to every cell in the seed, live or dead; births and deaths occur simultaneously, and the discrete moment at which this happens is sometimes called a tick. Each generation is a pure function of the preceding one. The rules continue to be applied repeatedly to create further generations.

Stanislaw Ulam, while working at the Los Alamos National Laboratory in the 1940s, studied the growth of crystals, using a simple lattice network as his model.[7] At the same time, John von Neumann, Ulam’s colleague at Los Alamos, was working on the problem of self-replicating systems.[8]: 1 Von Neumann’s initial design was founded upon the notion of one robot building another robot. This design is known as the kinematic model.[9][10] As he developed this design, von Neumann came to realize the great difficulty of building a self-replicating robot, and the great cost of providing the robot with a “sea of parts” from which to build its replicant. Von Neumann wrote a paper entitled “The general and logical theory of automata” for the Hixon Symposium in 1948.[8]: 1

Ulam was the one who suggested using a discrete system for creating a reductionist model of self-replication.[8]: 3 [11]: xxix Ulam and von Neumann created a method for calculating liquid motion in the late 1950s. The driving concept of the method was to consider a liquid as a group of discrete units and calculate the motion of each based on its neighbors’ behaviors.[12]: 8 Thus was born the first system of cellular automata.

Like Ulam’s lattice network, von Neumann’s cellular automata are two-dimensional, with his self-replicator implemented algorithmically. The result was a universal copier and constructor working within a cellular automaton with a small neighborhood (only those cells that touch are neighbors; for von Neumann’s cellular automata, only orthogonal cells), and with 29 states per cell. Von Neumann gave an existence proof that a particular pattern would make endless copies of itself within the given cellular universe by designing a 200,000-cell configuration that could do so. This design is known as the tessellation model, and is called a von Neumann universal constructor.[13]

RE: Python coding and Voronoi → Marc Andreessen (via Ping Fu, who was married to Herbert Edelsbrunner - see below) – which begins to tell a much bigger story . . .

Franz Aurenhammer,
Voronoi diagrams - a study of a fundamental geometric data structure,
ACM Computing Surveys,
Volume 23, Number 3, pages 345-405, September 1991.

Herbert Edelsbrunner,
Geometry and Topology for Mesh Generation,
Cambridge, 2001,
QA377.E36,
ISBN 0-521-79309-2.

Joseph O’Rourke,
Computational Geometry,
Cambridge University Press,
Second Edition, 1998,
QA448.D38.

You may have never heard of Ping Fu. But chances are her work has touched you in some way. Fu has spent decades envisioning new uses for computers. Now she thinks she’s really on to something: a technology that can scan three-dimensional objects, re-creating them first virtually, and then in the real world.

Fu once led a research team at the National Center for Supercomputing Applications. One of the graduate students she hired was Marc Andreessen, who went on to create the Netscape Web browser. She also worked on the film Terminator 2, developing a way to bring to the screen the robot villain’s signature trick — melting into liquid metal and then morphing into a new shape.

On “digital shrink-wrapping”:

Fu came to America in 1982 and co-founded her company, Geomagic, in 1996. It is headquartered in a nondescript building in North Carolina’s Research Triangle Park. The company’s products enable designers and engineers to scan a 3-D object, capture the data from the scan and then use it to create highly accurate digital models.

Applications engineer Rob Black demonstrates how Geomagic’s technology — called digital shape sampling and processing — works. He picks up a small turbine blade from an aircraft engine, sets it on a circular plate, and rotates it under an optical scanner. Then he swivels his chair around and taps a few keys on a computer. An image pops onto the screen — a ghost-like image of the blade entirely made of dots, millions of points in virtual space. Next, Black connects the dots. With a few more keystrokes, he digitally “shrink wraps” the image, smoothing out the surface to generate an exact copy in cyberspace.

Steven Sol Skiena (born January 30, 1961) is a computer scientist and Distinguished Teaching Professor of Computer Science at Stony Brook University.[1] He is also director of the AI Institute at Stony Brook.

He was co-founder of General Sentiment, a social media and news analytics company, and served as Chief Science Officer from 2009 until it shut down in 2015.[2] His research interests include algorithm design and its applications to biology. Skiena is the author of several popular books in the fields of algorithms, programming, and mathematics. The Algorithm Design Manual is widely used as an undergraduate text in algorithms and within the tech industry for job interview preparation.[3] In 2001, Skiena was awarded the IEEE Computer Science and Engineering Undergraduate Teaching Award “for outstanding contributions to undergraduate education in the areas of algorithms and discrete mathematics and for influential textbook and software.”[4]

Skiena has worked on algorithmic problems in synthetic biology, and, in particular, issues of optimal gene design for a given protein under various constraints. In collaboration with virologist Eckard Wimmer, he has worked to computationally design synthetic viruses for use as attenuated vaccines.[5] Their Synthetic Attenuated Virus Engineering (SAVE) approach has been validated in flu[6] and experiments with other viruses are ongoing. A popular account of this work appears in Dennis Shasha and Cathy Lazare’s Natural Computing.[7]

Skiena played a role in the conception of the Apple iPad. In 1988, Skiena and his team won a competition run by Apple to design the Computer of the Year 2000.[8] Their design, a tablet featuring a touch screen, GPS, and wireless communications, was similar in many regards to the iPad as released by Apple in 2010.[9]
