Enterprise Software as a Service

We enable people and software systems to interact in the simplest possible way

Sometimes, as part of our work, we uncover software needs that are not well addressed by standard solutions from large vendors. In these scenarios, S23M offers the capability, capacity and software tools to make possible what others consider impossible. We create enterprise-grade SaaS solutions, at lower cost and higher quality than typical in-house solutions, and without proprietary lock-in.

All our SaaS products deliver a highly visual and intuitive user experience that respects human cognitive limits. Our solutions integrate with your enterprise systems and back-end services.

S23M's SaaS solutions are built on the open source Cell Platform, which allows us to rapidly formalise requirements and tacit domain knowledge in close collaboration with your team. The Cell Platform is unique in that, at its core, it offers a very small and clean implementation of the conceptual foundations of mathematics. The Cell Kernel depends only on the Java Virtual Machine, with no further dependencies on any other technologies. As a result, spurious technological complexity is minimised.
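To illustrate the flavour of this approach, consider the minimal sketch below. It is purely illustrative: the names (Cell, connectTo, and so on) are hypothetical and do not reproduce the Cell Platform's actual API. The point it demonstrates is that a semantic kernel can be expressed with plain JVM types and zero third-party dependencies.

```java
// Illustrative sketch only; all names are hypothetical, not the Cell Platform API.
// Demonstrates a dependency-free kernel: only java.util types, nothing else.
import java.util.ArrayList;
import java.util.List;
import java.util.UUID;

public final class Cell {
    private final UUID identity = UUID.randomUUID(); // stable identity, independent of any name
    private final String name;                       // human-readable label, free to change
    private final List<Cell> connections = new ArrayList<>();

    public Cell(String name) { this.name = name; }

    public void connectTo(Cell target) { connections.add(target); }

    public UUID identity() { return identity; }

    @Override
    public String toString() { return name + " -> " + connections.size() + " connection(s)"; }

    public static void main(String[] args) {
        Cell patient = new Cell("patient");
        Cell record = new Cell("health record");
        patient.connectTo(record);
        System.out.println(patient);
    }
}
```

Because a kernel of this kind relies only on the standard JVM library, there is no third-party surface area to audit, patch, or upgrade.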

The SaaS future of health informatics

While computers and the internet have provided us with new tools for exchanging data between healthcare professionals, timely access to all relevant information pertaining to a particular patient remains a challenge.

Patient history and treatment details tend to be scattered across several systems, and there is no clear-cut path for obtaining access to the latest and most trustworthy information. Data sharing must be seamless across organisation and system boundaries, and access privileges must be managed across multiple groups of stakeholders including patients, general practitioners, hospitals, and further organisations in the extended healthcare delivery ecosystem.

A federated approach to data sharing and a trustworthy computational environment are prerequisites for improvements in interoperability. When S23M's Cell Platform technology is used to integrate distributed systems, data quality challenges can be reduced significantly, and with them the number of mistakes in diagnostics and patient care.

Human scale computing

Large software systems routinely exceed 10,000,000 lines of code. Integrating systems beyond company boundaries via web services is becoming increasingly common, leading to ultra-large scale systems involving billions of lines of code, and to a quasi-infinite number of usage scenarios that are never tested.

If a single line of code contains one decision, 10 million lines of code contain 10 million decisions. In the most optimistic scenario, re-writing 10 million lines of legacy software resolves all existing errors. At the same time, it is an opportunity to introduce around 10 million new decisions, each of which can potentially be wrong. Given that 1 unintended error per 500 lines of code is considered good, that is effectively a guarantee of around 20,000 new sources of error in a haystack of 10 million lines.
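The arithmetic behind that estimate is worth making explicit. The sketch below simply works through the numbers from the paragraph above; the defect rate is the assumption stated there, not an empirical measurement.

```java
// Back-of-the-envelope estimate: expected new defects in a full re-write.
public final class DefectEstimate {
    public static void main(String[] args) {
        long linesOfCode = 10_000_000L;  // size of the legacy system being re-written
        long linesPerDefect = 500L;      // "1 unintended error per 500 lines" is a good outcome
        long expectedDefects = linesOfCode / linesPerDefect;
        System.out.println("Expected new defects: " + expectedDefects); // prints 20000
    }
}
```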

Human cognitive limits are increasingly becoming a primary concern in the design of human institutions and technologies, in much the same way that human scale physical dimensions and characteristics have shaped the discipline of ergonomics. Human scale computing can be understood as extending ergonomics to the cognitive characteristics of humans, with the objective of improving communication and collaboration across three categories of interactions:

  1. between humans,
  2. between humans and software systems,
  3. and between software systems.

Unless software systems are constructed from the ground up based on a new set of principles, the quality of software will not improve in any substantial way.

The problem with state-of-the-art software is that its readability and understandability by humans decrease very quickly over time, due to poor notations and inadequate mechanisms for modularising specifications. The complexity of our systems is steadily going up, and so are the high-impact risks.

The opportunity for improving software quality lies in techniques that allow software specifications to be simplified (modularised) and made easier to understand (via improved visual notations), both as part of maintenance activities and as part of gaining new insights from observing and analysing system run-time behaviour.

Cognitive blind spots

Construction of formal models is no longer the exclusive domain of mathematicians, physical scientists, and engineers. The growing use of digital communication and of mathematical techniques for data analysis has led to a set of cognitive blind spots within organisations and within human society:

  1. Blind use of mathematical formalisms — formalisms applied as magical rituals, without an understanding of algorithm convergence criteria and limits of applicability, leading to suboptimal results and invalid conclusions.
  2. Blind use of second-hand data — unvalidated inputs that open the door to poor measurements and questionable sampling techniques.
  3. Blind use of implicit assumptions — unvalidated assumptions that enable speculative causal relationships and simplistic assumptions about human nature, and that create a platform for ideological bias.
  4. Blind use of second-hand algorithms — unvalidated software that can produce invalid results, contradictions, and unexpected error conditions.
  5. Blind use of terminology — implicit semantic integration between mathematical formalisms, data, assumptions and software that facilitates further bias and spurious complexity.
  6. Blind use of numbers — without sanity checks, order of magnitude mistakes and obvious anomalies in data patterns can remain undetected; a minimal check is sketched below this list.
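As a purely illustrative example for item 6 (the thresholds, names and figures below are hypothetical, not part of any S23M product), a sanity check against order of magnitude mistakes can be as simple as:

```java
// Hypothetical sanity check: flag values that differ from a reference
// expectation by an order of magnitude or more.
public final class SanityCheck {
    /** Returns true when observed is within one order of magnitude of expected. */
    static boolean withinOneOrderOfMagnitude(double observed, double expected) {
        if (observed <= 0 || expected <= 0) return false; // assumes positive measurements
        double ratio = observed / expected;
        return ratio >= 0.1 && ratio <= 10.0;
    }

    public static void main(String[] args) {
        double expectedDailyAdmissions = 120.0; // hypothetical reference value
        System.out.println(withinOneOrderOfMagnitude(95.0, expectedDailyAdmissions));   // true
        System.out.println(withinOneOrderOfMagnitude(9500.0, expectedDailyAdmissions)); // false: likely a unit or keying error
    }
}
```

Checks of this kind cost almost nothing to run, yet they catch exactly the class of errors that blind use of numbers lets through.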

Large and fast flowing data streams from very large networks of devices and sensors have popularised the discipline of data science, which is practiced mostly within large organisations, under constraints dictated by business imperatives, and largely without external and independent supervision.

The most worrying aspect of corporate data science in the absence of adequate transparency is the power that organisations can wield over the interpretation of data, and the corresponding lack of power of those who produce and share the data.

Mathematical understanding and numerical literacy are becoming increasingly important. Transparency, in the form of open science, open data, and open source software, is emerging as an essential tool for independent oversight of cognitive blind spots.

Trustworthy computing

Collaboration, trustworthiness and reliability are amongst the biggest challenges in a digitally networked world, in particular in the healthcare sector. Trustworthiness can only increase if the difference between expectations and experience decreases. Information and communication channels must be reliable at all times.

Full transparency of data access and analytical algorithms is an important prerequisite. Patients need the ability to monitor and audit all access to their health data, and hospitals need to become an integral part of the digital health data ecosystem.
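A minimal sketch of what patient-facing auditability could look like follows. The types and names are hypothetical (assuming Java 16+ for records) and do not represent the actual implementation of any system mentioned here; the point is that every read of a patient's record becomes an immutable event the patient can inspect.

```java
// Hypothetical illustration of an auditable access trail.
import java.time.Instant;
import java.util.ArrayList;
import java.util.List;

public final class AccessAudit {
    // One immutable event per access to the patient's health data.
    record AccessEvent(Instant when, String accessor, String purpose) { }

    private final List<AccessEvent> trail = new ArrayList<>();

    public void recordAccess(String accessor, String purpose) {
        trail.add(new AccessEvent(Instant.now(), accessor, purpose));
    }

    public List<AccessEvent> eventsForPatientReview() {
        return List.copyOf(trail); // patients receive a read-only view of every access
    }

    public static void main(String[] args) {
        AccessAudit audit = new AccessAudit();
        audit.recordAccess("dr.smith@hospital.example", "treatment");
        audit.eventsForPatientReview().forEach(System.out::println);
    }
}
```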

Transparency must also extend to secondary use cases, so that individual rights and well-being can be prioritised over commercial third-party interests.
