We've partnered with Code Like a Girl
We’re thrilled to partner with Code Like a Girl to hire a new engineer and two interns.
Diversity is important in creating high-quality, representative AI. It just works better.
Coming to us through CLG, Katherine Dixey (our new engineer), Eleanore Kecskes-Judd and Wei Wang offer perspectives and skills from different backgrounds that will bring new eyes to the big problems we’re trying to solve.
Here’s an insight into our two interns:
Currently studying a Bachelor of Design at Deakin University, Eleanore is one of our UI/UX designers, helping refine solutions so our clients get better outcomes for their problems.
Code Like a Girl helped her get her foot in the door (specifically ours!) of the industry she wanted to work in. So far Eleanore has quickly learnt that being a UI/UX designer is not all about wireframes and designs.
There is so much more that goes into a client project, which will stand her in good stead for her career aspiration to build innovative tech and mobile solutions for her own clients.
In a unique skill set for a future designer, Eleanore is also a talented dancer, and will certainly keep our team light on their feet.
Having written her own natural language processing algorithm for word search, Wei Wang is a perfect fit for our Red Marble tech team.
With limited entry-level options in data science and machine learning, the field she studied at the University of Melbourne, Code Like a Girl gave Wei a golden opportunity to connect with potential jobs. With a goal of becoming a future tech lead, Wei is focused on becoming as technical as possible, and has already begun learning about industry coding standards and ways to improve her code quality from our tech lead Andrew Ong.
What we didn’t realise when we hired Wei was that we’d also be getting a package deal with her adorable chinchilla cat Kuku who likes to sit in on every meeting.
Welcome aboard!
Introducing Tanveer
Welcome to the Red Marble AI family to our new AI Engagement Manager Tanveer Bal!
Tanveer joined us last month and we are excited to have him on the team. He’s an experienced founder who has worked with AI-driven companies. He'll be leading the way on connecting the dots between business and technology teams for our projects.
“My role is to bridge the gap between the business and data science teams. I want to help organisations capitalise on the opportunities that lie hidden in their data, transforming organisational data into business value,”
says Tanveer, who founded his own AI consultancy and in previous roles worked on projects with the likes of Downer and Microsoft.
Tanveer has spent lockdown with his best mate, his gorgeous labrador Zac (welcome to the team too Zac!), and reading up on AI, having just finished one of our favourites: Competing in the Age of AI.
"I'll be chatting to companies about how we can partner and help them understand how AI in general, and Red Marble's experiment-driven approach in particular, can improve their productivity". You can find him on linkedin or reach out via email.
How a site conversation could change construction
Take a second to imagine the conversations on a construction site. The surprise? Those conversations could transform the construction sector and increase site margins.
Red Marble AI has been awarded a Victorian Government Technology Adoption and Innovation program grant. This grant will help fund our Construction Language Research Project.
Our first step was to hire Natural Language Processing specialist Haowen Tang. Haowen holds a Master of Science (Computer Science) from the University of Melbourne, specialising in Natural Language Processing. We’re also extending our collaboration with Melbourne Laureate Professor Tim Baldwin and the University of Melbourne, world experts in this field, as we grow our team and extend our capability.
About the project
The Construction Language Research Project aims to understand everyday site language within construction projects. It also aims to develop a construction language model that will support using that data to unlock value and increase project margins.
Algorithms that understand the meaning behind language, whether spoken on site or written in project documents, have huge potential to transform the construction sector. We believe using the data from those conversations will change the way the construction industry works.
Would you like to learn more about this project? Or curious about the potential of using artificial intelligence to leverage human language in your own projects? We would love to hear from you.
Construction in 2025: The AI Revolution is here
The AI revolution is here and the race is on to become the dominant force in construction.
Today, while the construction industry is awash with funding and fuelled by government spending, ultra-tight margins, difficulty retaining staff and cost overruns make it increasingly hard to be profitable.
Our whitepaper looks to the future: not big blue-sky wholesale change 50 years from now, but how the first steps to solve these problems and revolutionise the industry are already underway.
In fact, many of the fundamental tools that will begin this wholesale change are here. In other sectors, data and AI have changed how industries operate and established a blueprint for industry takeover.
It is now up to the industry to take up the baton and run with it.
In our interviews with industry leaders, we are seeing that pockets of excellence are already emerging. AI is being used effectively to drive improvements in several business areas, supported by data-centric technologies such as drones, Building Information Management (BIM), the Internet of Things (IoT) and digital twins.
Our Construction in 2025 whitepaper provides insights and the beginnings of a blueprint for change. Our 7 Steps to introducing AI are a guide to how construction industry leaders can get started.
We outline:
- Turbulent times: Construction’s big AI opportunity
- Getting race ready: Transformation lessons from industry titans
- Pockets of success: The sparks of an AI revolution
- A Blueprint for the industry: AI predictions for 2025
- 7 Steps to Introducing an AI experiment
One thing is clear: the prize for the firms that get this right is huge - increased margins and profits, improved staff retention and emergence as a dominant player. And that’s just the beginning of what’s possible.
The race is on! Enjoy our analysis, and let us know if you've implemented any of these technologies in the past 12 months.
Does the Budget put innovation in Australia behind?
“Innovation is the most important thing to Australia going forward.”
I wholeheartedly agree with SpeeDx’s Dr Alison Todd on the patent box announced in last week’s Federal Budget, and join her in urging that it be expanded beyond health and biotech.
The Budget has made both winners and losers of companies leading innovation in Australia.
The Government has lauded its own digital strategy and “investment” into the future for Australia. Prime Minister Scott Morrison said: “Australia has led the world with innovations like Wi-Fi, the bionic ear and a vaccine for cervical cancer. We want to see more innovation commercialised in Australia.”
The Opposition appears to agree, with Labor leader Anthony Albanese making innovation - in the form of support for training and a “Startup Year” for university students - a key part of his budget reply to encourage new businesses and innovative thinking.
They are right to think this way. Innovation is not just a buzzword for tech boffins; it has practical implications for jobs, businesses and economic growth. It is an important conversation that we need to have right now to continue to educate our leaders.
We are still underinvesting in AI
But the budget itself only goes part of the way towards addressing this. The US, while a far larger country, is set to invest $6 billion in AI this year alone; by comparison, this budget increases Australia’s spend to $124 million over six years. This is just half of what the industry is calling for.
We need access to overseas talent
As the world emerges from the pandemic, we still lack access to overseas talent. Our borders are unlikely to open until next year and with recruitment in technical roles already difficult, this will increase the fight for talent.
But ESS will help us retain talent
The changes to employee share schemes, where tax will no longer be payable on shares when an employee leaves the business, will help encourage take-up of these schemes.
This artificial taxing point in many cases forces the former employee to sell the shares to meet the tax liability and acts as a deterrent.
Patent and software write-downs will help
There are more positives for the tech industry. The new patent and continued software write-downs will go some way to encouraging uptake of innovation and allow companies more ability to experiment.
As accountants have advised, the current rules have become outdated and are not keeping pace with what happens in reality, especially with software.
At the moment, if you acquire a patent, you need to claim its cost over 20 years.
Under the new rules, you will be able to self-assess the actual effective life of that patent and claim the cost over those years instead.
The same rule will also apply to in-house software from July 2023: if the software is going to be obsolete in two years, you can claim the cost over those two years.
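To make the difference concrete, here is a quick, purely hypothetical calculation (the figures are ours, for illustration only - not tax advice): a $100,000 patent that will realistically only be useful for five years.

```python
# Hypothetical figures for illustration only - not tax advice.
cost = 100_000                       # cost of the acquired patent
statutory_life = 20                  # current rules: fixed 20-year life
self_assessed_life = 5               # new rules: self-assessed effective life

old_annual_deduction = cost / statutory_life      # $5,000 per year
new_annual_deduction = cost / self_assessed_life  # $20,000 per year

print(f"Current rules: ${old_annual_deduction:,.0f} per year over {statutory_life} years")
print(f"New rules:     ${new_annual_deduction:,.0f} per year over {self_assessed_life} years")
```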
What we really need
Kickstarting an “innovation boom” isn’t just a tech issue - it affects every industry, from construction and medicine to finance.
Red Marble works in depth with industries including construction and infrastructure, so seeing strong investment into these sectors in the Federal Budget is encouraging.
Australia’s politicians have been cautious of overtly supporting the emerging tech sector and the optics of creating tech billionaires.
Our concern is politics getting in the way of the opportunity created by the current pandemic and its recovery on a global scale.
In its budget reply, Labor criticised the Government for delivering a pandemic patch-up budget rather than a committed plan for Australia’s future.
Waiting years to invest in artificial intelligence, technology and supporting emerging companies in Australia may end up putting us further behind the rest of the world.
We need to open more conversations with our leaders so they can understand what is truly at stake in this budget and beyond. Aren’t we already far enough behind?
How to Start an AI Consulting Company from Scratch
Our CEO Dave Timm chats to Daniel Faggella from Emerj Artificial Intelligence Research for the launch of the AI Consulting podcast about Red Marble AI and the “wow factor” that pushed him to start a consulting company. Dave talks about developing your AI consulting business idea, finding your first clients, and landing your first projects. He also goes into how to find technical talent without hiring them full-time from the start. Do you want more keys to success in your own AI consulting career?
The podcast launched two weeks ago with episodes each day since then – the feedback so far has been excellent! Have a listen, we would love for you to share your experience!
What makes a good AI product? What elements do we look for to predict commercial success?
Human brains are amazing, capable of things that computers just can’t do. AI is amazing too. Combining AI with human smarts elevates human ability to new levels. Our work focuses on using artificial intelligence to augment human capabilities and make workforces more productive.
At Red Marble, one of our offerings is to help clients develop and commercialise AI-powered products. We also develop and license our own products, always with an overarching focus on how AI can enhance human performance and workforce productivity.
Embedding AI into a product allows us to solve one specific business problem extremely well. And as the software learns, the solution improves over time.
So what makes a good AI product? What elements do we look for to predict commercial success?
How we define an AI-enabled product
- A set of code and algorithms solving a specific, clearly defined, repeatable business problem in an area of business value.
- Has defined inputs and outputs and can be deployed as a service.
- Includes algorithms and machine learning with feedback methods to learn, and becomes more intelligent the more data it consumes.
- Is based on unique and specific data which is usually not freely available (and more data creates a barrier to entry).
- Can be deployed for sufficient time for the model to learn and fulfil its potential.
- Requires minimal (< 20%) configuration for a specific customer and doesn’t require a large consulting effort to implement.
- Harnesses the power of technology to augment human capabilities.
That last point is fundamental to what we do. We’re here to enhance human performance and elevate human productivity - not replace people with machines.
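To make that definition a little more tangible, here is a minimal sketch of what an AI-enabled product can look like in code. The names and structure are ours, purely for illustration - not a specific Red Marble product.

```python
from dataclasses import dataclass, field
from typing import Any, List, Tuple

@dataclass
class Prediction:
    label: str
    confidence: float

@dataclass
class AIProduct:
    model: Any                                           # any trained model exposing predict() and update()
    feedback_log: List[Tuple[dict, str]] = field(default_factory=list)

    def predict(self, features: dict) -> Prediction:
        # Defined input (a feature dict) in, defined output (a Prediction) out,
        # so the whole thing can sit behind a simple service endpoint.
        label, confidence = self.model.predict(features)
        return Prediction(label, confidence)

    def record_feedback(self, features: dict, correct_label: str) -> None:
        # Feedback method: corrections are captured so the model keeps learning.
        self.feedback_log.append((features, correct_label))

    def retrain(self) -> None:
        # The more data the product consumes, the more intelligent it becomes.
        self.model.update(self.feedback_log)
        self.feedback_log.clear()
```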
It’s important to note here that a lot of the work in developing AI-based solutions is not machine learning; around 75% of the work relates to software engineering, data cleansing, data engineering and similar tasks. The actual model development, although often the most valuable part, is relatively small (Bastiane Huang from OSARO in the US describes the issue nicely here). So it’s crucial that we factor in those other tasks when we think about creating a new product.
There are a couple of key considerations to contemplate up front.
Start with the business challenge, not the technology
There are two ‘non negotiables’ before we develop a product:
- It needs to focus on a clearly-identified and specific business problem
- We need to be able to generalise the model across multiple customers
If those aren’t in place, the work should probably be considered a software development exercise with intelligent algorithms as part of the solution - not a product offering.
We like a product to have a single ‘job to be done’, which keeps us focused on the business challenge, and how we solve it in a generalised way.
Work out how you will tackle the ‘long tail’
A big challenge of developing any product incorporating AI and machine learning is the ‘long tail’ - the large number of items which exist in small quantities, rather than the smaller number of popular items.
The concept is described nicely in this article:
“Supervised learning models tend to perform well on common inputs (i.e. the head of the distribution) but struggle where examples are sparse (the tail). Since the tail often makes up the majority of all inputs, ML developers end up in a loop – seemingly infinite, at times – collecting new data and retraining to account for edge cases.”
Many machine learning problems have a disproportionately long tail, and the cost of training a model to cover it can rise exponentially, often making the project unviable without deep pockets. That’s why we prefer models where the ML does the heavy lifting, managing the bulk of cases, and allowing human team members to manage the edge cases. That way we can still provide a great deal of value while keeping costs lower.
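As a rough sketch of that division of labour (the threshold and interfaces are illustrative assumptions, not a specific implementation), the pattern looks something like this:

```python
# The ML model handles the common (head) cases it is confident about, and routes
# sparse edge cases (the long tail) to a human reviewer. The threshold and
# interfaces here are illustrative assumptions only.
CONFIDENCE_THRESHOLD = 0.85

def triage(case, model, human_review_queue):
    label, confidence = model.predict(case)
    if confidence >= CONFIDENCE_THRESHOLD:
        return label                        # head of the distribution: automate it
    human_review_queue.append(case)         # the tail: hand it to a human expert
    return None
```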
Our product journey
We’re currently working on a number of products, where we focus on incorporating intelligence into software, but always with the human at the centre. We view every opportunity through a human-centric lens. Being so attached to people may sound odd for an AI company, but it helps us (and you) reach our core goal: liberating humans to achieve more.
Want to know more about our AI-enabled product development approach and see if and how it can help you? Get in touch and let’s discuss your biggest challenges.
COVID-normal, powered by AI technology: the perfect storm
Futurists have long predicted that AI technology will leave a lot of workers without a job - especially those with specialised skills. As we emerge from a global pandemic and settle into ‘COVID normal’, the big question is: can AI help people into work, rather than out of it?
The answer is yes - however there is a ‘but’. It’s summarised well by a quote in this Wired article:
“You’ll be paid in the future based on how well you work with robots.”
There are many examples of humans and machines complementing each other, but it’s important to remember that we excel at different types of skills. Humans should focus on skills that they uniquely enjoy exercising, while AI technology handles the mundane tasks that don’t require human skills of judgement, creativity or empathy.
Humans and AI technology working together
International tech investor and startup adviser Anupam Rastogi describes the relationship between AI and humans as ‘human-machine symbiosis’.
In 2016 he wrote about the difference between artificial intelligence and intelligence augmentation, and mentioned examples from manufacturing, transport logistics, healthcare and agriculture where companies were leveraging advances in machine learning to augment human capabilities, enhance productivity or optimise use of resources.
Fast forward to today and many AI technologies have developed to help humans thrive, such as:
- Weather predictions helping farmers make real-time decisions on when to pick, plant and harvest.
- ‘Robot’ vacuum cleaners that work away when you're not at home and free you up to do other things. This is a winner for me!
- The Australian-born SwarmFarm Robotics is empowering farmers to automate their operations, even down to driving the tractor.
- And coming soon, you will see predictive AI being used to help save lives by forecasting natural disaster events.
How much is AI technology worth to economies?
A recent article in the Australian Financial Review predicts that the COVID-19 pandemic could triple the value of AI, as businesses rush to digitise many of their processes.
Krishan Sharma, technology journalist writes:
“A government-sponsored road map from CSIRO published at the end of 2019 found that the AI sector would be worth $315 billion to the Australian economy by 2028 and $22 trillion to the global economy by 2030.
However, experts such as KPMG's Partner-in-charge, James Mabbott, tells the Financial Review that both these figures could be as much as “1.5 to 3 times greater” after taking into account the increased levels of investment driven by the disruption caused by the pandemic.”
There’s no doubt that the pandemic has helped push many businesses out of their comfort zone and into a place where they’re more likely to consider digital options and artificial intelligence. How the industry handles this increased interest - and spend - is crucial.
How AI technology is affecting jobs - now and in the future
One factor that will have a big influence on the success of AI’s broader adoption is the people currently employed to integrate it into workplaces. Right now AI is a growing industry - and it will only continue to grow. Having the right people leading AI programs is essential for ensuring that AI operates in harmony with the human workers in the business to generate sustainable positive results.
Infosys’ 2018 ‘Leadership in the Age of AI’ report revealed a possible expertise issue:
“Two thirds of Australian organisations are having difficulties in finding suitable staff to lead AI technology integration and 75% of IT decision makers felt that the executive team in their organization needs formal training on the implications of AI technologies.”
Looking forward, it seems inevitable that there will be some disruption to employment - but does the end justify the means?
The Adelaide University’s 2018 (yes, a little dated, but still relevant!) report “The Impact of AI on the Future of Work and Workers” concludes:
“Occupations that can be replaced by AI and robots will be vulnerable. This has been true for the last 200 years of technological innovation and is hardly a surprise. There is not much call for typists anymore. Nor horse husbandry. These jobs have been taken over by machines. No doubt AI will substantially replace some occupations. But we have also learned from history that despite ever increasing automation from machines such as engines and computers, the total amount of employment has increased and average wealth has increased remarkably.
The capacity of countries to adapt to greater automation has required retraining and investment in education and research on a mass scale so as to build the capacity of the workforce to make best use of the new technologies developed.”
People before profit
The driving force behind AI - as with most other trends or technologies - is money.
As interest in artificial intelligence grows, there is a strong focus on reducing the time and resources required to create the machine learning models that will generate opportunities and enable the workforces of the future.
It’s in this scenario that the AI industry and humans can really thrive, as long as we find a way to strike the right balance between AI doing our jobs for us, and helping us do our jobs better.
Then comes the next set of questions that need answers:
- Where will the accountability lie to reskill?
- Will it be up to individuals to head back to the classroom, or create working opportunities to develop the required capabilities?
- Or will industries lead the way by putting humans before short-term profits?
If you’d like to let us know your thoughts, or find out more about what AI technology could do for your business, we’d love to talk. Please get in touch.
How AI can augment prediction and human decision making
We spend much of our time here at Red Marble exploring ways that AI and machine learning can elevate human performance. One area of particular interest is how AI can augment prediction and human decision making.
But before we can design AI, we need to understand the differences between how software and human brains make decisions.
How do humans make decisions?
Humans make thousands of decisions every day, subconsciously combining inputs from multiple parts of the brain, blending real-time data with historical information from our memory, and mixing rational thought with emotional cues.
In “Thinking, Fast and Slow,” Daniel Kahneman (drawing on his work with Amos Tversky) explains how fast, instinctive and emotional decisions are blended with others that are made logically and deliberately - all with an assessment of risk, probability and judgement based on experience.
This is a pretty sound way to make decisions - however, it takes time, and the quality of the decisions can depend on outside factors (e.g. the person’s health or mental state).
How does machine learning make decisions?
Machine learning (ML) models aim to emulate elements of this decision making, but clearly some areas are more accessible than others.
ML is the ability of software to ‘fit’ a model to a set of data by learning specific weightings for different parts of the data (called ‘features’). It then applies those weightings to new data in order to predict future events.
A machine can process vast amounts of historical data to make its predictions, with a defined probability, based on past data - but it can’t apply an emotional lens (yet) to those decisions, and it struggles where context changes.
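As a toy illustration of "fitting a model by applying weightings to features" - a simple linear model on made-up numbers, nothing like a production system - the idea looks roughly like this:

```python
import numpy as np

# Historical data: each row is one observation's features, y is the outcome we saw.
X = np.array([[1.0, 0.5], [2.0, 1.5], [3.0, 2.5], [4.0, 3.0]])
y = np.array([3.0, 6.5, 10.0, 12.5])

# Fit: find the weightings that best explain the historical data (least squares).
weights, *_ = np.linalg.lstsq(X, y, rcond=None)

# Predict: apply the learned weightings to new, unseen feature values.
new_observation = np.array([5.0, 4.0])
prediction = new_observation @ weights
print(f"learned weights: {weights}, prediction: {prediction:.2f}")
```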
Where machine learning works well
A nice application of ML is to model the rational human decision making process and to make those decisions at scale. Let me share an example.
We recently worked with a client who was making predictions about stock levels of spare parts for machinery. Were they holding enough in stock? When spare parts were ordered, would they arrive on time? They needed to know they would have the parts when required.
Looking at the “health” of each material made the assessment fairly intuitive and simple for the human. A track record of late deliveries from suppliers, highly variable stock levels in the warehouse, parts being used for breakdown (rather than planned maintenance) all lead to a fairly simple judgement by the human worker.
The challenge is applying that process across 800,000 materials every day. Clearly not something a human can do.
Applying an ML model here does the heavy lifting superbly and creates a list of priorities that the human can work through and apply judgement to.
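A very simplified sketch of that pattern might look like the following. The field names and weightings are illustrative assumptions on our part - in practice a trained model learns the weightings from historical data rather than having them hard-coded.

```python
# Illustrative "health" score for a spare-part material, combining a few signals.
def risk_score(material):
    return (
        0.5 * material["late_delivery_rate"]         # suppliers' delivery track record
        + 0.3 * material["stock_level_variability"]  # volatility of warehouse stock levels
        + 0.2 * material["breakdown_usage_rate"]     # used for breakdowns vs planned maintenance
    )

def prioritise(materials, top_n=100):
    """Score every material and surface the riskiest for human judgement."""
    return sorted(materials, key=risk_score, reverse=True)[:top_n]
```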
We apply similar models in our AI-driven employee engagement and digital adoption work. The software can analyse many users and predict which information is most valuable to help each individual use their technology to its full potential and to succeed in their role. It models the human analysis, and applies it at scale.
Where do ML models fall short?
Machine learning takes historical data but can lack immediate context. Here’s another example.
We help one of our clients predict customer conversion for certain products and prices, helping them maximise margin. The model is trained on historical data, but within the context of COVID, historical data is not reflective of current buying behaviour. Companies need to adapt their models to make use of real-time context and the most recent data as it's coming in.
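One simple way to do that - an illustrative technique, not necessarily what this client uses - is to weight training examples by how recently they were observed, so that pre-pandemic behaviour counts for less than last week's:

```python
# Weight each training example by how recently it was observed. Purely illustrative.
def recency_weight(days_ago, half_life_days=90):
    """Exponential decay: an example 90 days old counts half as much as today's."""
    return 0.5 ** (days_ago / half_life_days)

# Many training APIs accept per-example weights; for instance, many scikit-learn
# estimators take model.fit(X, y, sample_weight=[recency_weight(d) for d in ages]).
```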
Machine learning can also fail to understand the behavioural aspects of decision making. Human decision making has lots of nuances: emotion, the time available, and the paradox of choice as outlined by Barry Schwartz, to name a few. AI models can be trained on aspects of these - and they will get better as the technology improves - but for now they struggle to put it all together into cohesive reasoning.
When is emotional decision making a problem?
There are areas where prediction models can expose harsh realities which the human mind might wish to overlook.
One of our areas of work is predicting the success of corporate projects. By tracking data throughout the project and applying a model trained on historical information, there are clear ways of predicting which projects will succeed, and which will fail to deliver the predicted business benefits. However, many projects are “pet projects” of a particular group or person, and the people involved may already have an ideal end state in mind that biases the decision making.
An ‘emotionless’ ML model will have no issues exposing the hard facts: “This project only has a 30% chance of delivering the desired benefits” quickly cuts to the reality and can save companies countless time and expense.
Human decision making + AI: what’s the best way to combine them?
Humans are amazing, and our ability to make complex decisions quickly is significantly ahead of our software’s abilities. However, add in the capability of AI to apply models rapidly, constantly and at huge scale alongside human decision making, and you have some very exciting possibilities.
What IS GPT-3? And why do you need to know?
At Red Marble, our core belief is that artificial intelligence will transform human performance.
And as part of our everyday work, we’re continually coming across (and creating) ways that AI is improving workforce productivity.
Our projects generally fall under five technical patterns of AI: prediction, recognition, hyper-personalisation, outlier detection and the one I’ll talk about here...
Conversation and Language AI
Broadly, conversation and language AI deals with language and speech. There are three aspects to this pattern:
- The ability to have a conversation with software, either via text or voice. Common examples of this include Alexa or Siri, but we’re seeing an increasing number of voice-based interfaces within enterprises.
- The ability to understand language and analyse it; for example, we recently worked on a project where we analyse text in work notes to understand if any contractual clauses may have been triggered.
- The ability to generate language - to create a natural language narrative based on input data, for example auto-generating a project status update narrative based on data collected.
There’s been a huge advance recently in natural language generation. It’s based on software called GPT-3 (Generative Pre-trained Transformer, version 3), developed by California-based AI research lab OpenAI.
This technology was flagged in a research paper in May 2020, and released for a private beta trial in July 2020.
What IS GPT-3?
GPT-3 is a ‘language model’, which means that it is a sophisticated text predictor.
A human ‘primes’ the model by giving it a chunk of text, and GPT-3 predicts the statistically most appropriate next piece of text. It then uses its output as the next round of input, and continues building upon itself, generating more text.
It’s special primarily because of its size. It’s the largest language model ever created, trained using around 175 billion variables (known as ‘parameters’ in this context). Essentially it’s been fed most of the internet to learn what text goes where in response to certain input primes.
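Conceptually, the core loop is very simple. Here’s a minimal sketch (the model and tokenizer interfaces are placeholders of our own - GPT-3 itself is only accessible through OpenAI’s hosted API):

```python
# A minimal sketch of the loop described above: prime with text, predict the
# statistically most likely next token, append it, and repeat.
def generate(model, tokenizer, prompt, max_new_tokens=50):
    tokens = tokenizer.encode(prompt)              # the human-supplied "prime"
    for _ in range(max_new_tokens):
        next_token = model.predict_next(tokens)    # most probable next piece of text
        tokens.append(next_token)                  # output becomes part of the next input
    return tokenizer.decode(tokens)
```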
What can GPT-3 do?
Some beta testers have marvelled at what it can do - medical diagnoses, generating software code, creating Excel functions on the fly, writing university essays and generating CVs, just to name a few.
Others have rejoiced in posting examples showing that GPT-3, though sophisticated, remains easy to fool. After all, it has no common sense!
https://twitter.com/raphamilliere/status/1287047986233708546
https://twitter.com/an_open_mind/status/1284487376312709120
https://twitter.com/sama/status/1284922296348454913
Why is GPT-3 important?
For now this is an interesting technical experiment. The language model cannot be fine-tuned - yet. But it’s only a matter of time before industry-specific variants emerge, trained to skilfully generate excellent quality text in a specific domain.
Any industry where text-based outputs or reports are generated - market research, web development, copywriting, medical diagnoses, property valuation, higher education, to name a few - could be impacted.
Is GPT-3 intelligent?
This is the big question for us and cuts to the essence of what we think makes for great AI.
In our view, GPT-3 is great at mathematically modelling and predicting what words a human would expect to see next. But it has no internal representation of what those words actually mean. It lacks the ability to reason within its writing; it lacks “common sense”.
But it’s a great predictor of what a human might deem to be acceptable language on a particular topic, and - we believe - that means that through all the hype, it’s a legitimate and credible model. We’ll be keeping a keen eye on it!