AI and ML 101
What Are AI and ML?
Many use the terms artificial intelligence (AI) and machine learning (ML) interchangeably, yet they are distinct in important ways. It is critical to understand how AI, ML, deep learning (DL), and the related discipline of data science overlap and interact.
Artificial Intelligence (AI). AI is the wide-ranging idea of machines simulating human reasoning, thinking, and behavior.
Machine Learning (ML). Machine learning is a subset of AI: all machine learning is AI, but not all AI is machine learning. In this area of AI, computer systems use data, whether from human programmatic input or from the environment, to improve their performance over time. ML allows computers to improve processes and experiences through repeated, continued learning.
What is Artificial Intelligence?
Artificial intelligence (AI) generally refers to the ability of a computer or machine to use algorithms and processes to imitate the human mind’s capabilities and simulate human intelligence. This might include mimicking cognitive functions such as acting, comprehending, learning, perceiving, planning, sensing, and problem solving, all with human-like intelligence.
Machine learning and deep learning (DL) are subsets of AI, and DL is itself a subset of ML. AI systems may imitate examples, learn from experience, make decisions, perceive environments, categorize objects, and solve problems. Together, these abilities allow humans to accomplish things like greeting a friend or operating a vehicle, and machines that can perform enough of them in tandem can achieve similar results.
Practical applications of AI include personal assistants such as Alexa and others that understand spoken language, recommendation engines such as those that serve up hits on Netflix or Spotify, self-driving vehicles, and web search engines.
There are four basic types or levels of AI. Two have already been achieved, while two are still theoretical.
How does Artificial Intelligence work?
There are four types of AI, from simplest to most advanced, and as a reminder the last two remain theoretical: reactive machines, limited memory, theory of mind, and self-awareness.
Reactive machines accept some form of input and perform basic operations based on it. No “learning” takes place at this level of AI. Instead, the system is purely reactive: it never deviates from the particular task or set of tasks it is trained to do, cannot function outside of that context, and does not improve over time (think of an automated chess-playing program or a basic spam filter).
These are technically less advanced forms of AI, but they still have impressive abilities. Examples of reactive machines include Google’s AlphaGo AI, arguably the world’s single best Go player; IBM’s Deep Blue chess AI; and most recommendation engines.
Limited memory AI systems represent the beginning of machine learning. These systems store data about the decisions they make and the actions they take, along with incoming data, to conduct analysis. The overall goal is to improve the system over time, which requires a limited amount of memory, hence the name.
These limited memory AIs that learn and improve over time include virtual voice assistants, self-driving vehicles, and chatbots, and are to date the most advanced AIs developed.
Of the two more advanced types of AI that remain theoretical for now, theory of mind will likely be the first to be achieved. The term “theory of mind” comes from psychology; in AI, it refers to systems that understand that human emotions and thoughts can affect an AI’s behavior.
At this level, AIs would start to meaningfully interact with humans, understanding our emotions and thoughts. In contrast to the more basic one-way relationship humans currently have with less advanced AIs, at the theory of mind level of AI, the human/AI relationship becomes reciprocal.
For many AI developers, AIs with self-awareness and human-level consciousness are the ultimate goal. For now, AIs that are self-aware and perceive themselves as beings similar to humans that exist in the world with similar emotions and desires are purely science fiction.
How is Artificial Intelligence programmed?
There are many possible languages for artificial intelligence programming projects. Here are some of the most common.
Python for AI and ML
Thanks to its simple syntax, Python is a runaway favorite among AI programming languages. Python is also an ideal choice for machine learning processes and customized AI solutions due to its easy integration with other languages, simple testing, rapid development, extensive open source community support, object-oriented programming, and ready-to-use libraries. If you want to spend more time on core structure and less time struggling, Python is a great option.
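As a quick illustration of that simplicity, here is a minimal sketch, assuming scikit-learn is installed, that trains and evaluates a working classifier in just a few lines:

```python
# A minimal sketch of Python's ready-to-use ML libraries: a few lines of
# scikit-learn load a labeled dataset, train a classifier, and evaluate it.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)                      # a small, built-in labeled dataset
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = KNeighborsClassifier().fit(X_train, y_train)   # train on the labeled examples
print("accuracy:", model.score(X_test, y_test))        # evaluate on held-out data
```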
LISP
LISP, the oldest language used for AI and a major factor behind much of the field’s early advancement, is known for its precision and its strength for logic-based solutions. However, LISP also has well-known drawbacks, including unfamiliar syntax and a much smaller modern ecosystem than newer languages.
R
R is a highly efficient programming language for statistical computing, and that efficiency sets it apart from other languages. Extension packages such as gmodels, OneR, RODBC, and tm, coupled with a wide range of libraries, offer robust support for machine learning and for solving complex problems.
Prolog
Prolog, or Programming in Logic, introduces its own separate world designed entirely around logic. Prolog designs AI solutions by determining the link between three important user factors: facts, rules, and the desired result. This offers a powerful yet flexible approach that is also efficient for data structuring and backtracking.
C++
C++’s major advantage is processing speed, which in a broad sense helps complex automated solutions run efficiently. Most of the TinyML field is built on C++, and hybrid coding approaches that apply C/C++ are often used in ML and other verticals.
JavaScript
More versatile than Java, JavaScript offers both frontend and backend use, continuous development, efficiency, and growth across multiple domains, along with TensorFlow.js, a JavaScript version of TensorFlow, a widely used framework for ML, including deep learning.
Java
Java is among the most popular programming languages worldwide for various processes, including AI processes. Java Virtual Machine (JVM) technology is among its greatest features: you compile the program once and can run it on any platform with a JVM, making implementation easier and saving the time and energy of repeatedly recompiling the program.
Haskell
Haskell is among the safest AI programming languages, well-known for catching errors at or even before compilation. Code reusability, built-in memory management, and other features free up more time for the planning process. However, a disadvantage of Haskell is that it is not widely adopted.
Julia
Julia is best known for numerical analysis. The dynamic type system is among Julia’s best features, making it flexible enough for almost any process. Other features include a built-in package manager, multiple dispatch support, just-in-time compilation, green threading, macro programming abilities, and the ability to call C functions directly. Like Haskell, Julia is not widely adopted.
What Is An AI Model?
An AI model is an algorithm or program that recognizes patterns using a data set. With enough accurate information, it can make a prediction or reach a conclusion. This is particularly useful for addressing complex issues with minimum costs and high accuracy using massive amounts of data.
Popular AI Models
Some popular AI models—many of which are also ML algorithms—include:
What is an Agent in Artificial Intelligence?
The study of artificial intelligence is focused on rational agents and their behavior. A rational agent might be anything that makes decisions, including a person, machine, or program. An agent makes those decisions, and carries out actions based on them, to achieve the best outcome after considering percepts from the past and present.
An AI system consists of an agent, or sometimes multiple agents, and the environment around the agents that they act on. An agent is anything that can perceive the environment through sensors and/or act on the environment through actuators; everything else is the environment.
What Is a Human Agent? A human agent has eyes, ears, and other sensory organs that act as sensors, and body parts such as hands and legs that act as actuators.
What Is a Robotic Agent? A robotic agent has various motors (actuators) and sensors (such as cameras) that enable it to perceive the environment, react to it, and perform actions.
What Is a Software Agent in Artificial Intelligence? A software agent is programmed to do various tasks, such as take inputs, display files, or store data.
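To make the perceive-then-act loop concrete, here is a minimal, hypothetical sketch of a software agent in Python; the thermostat scenario and every name in it are invented purely for illustration:

```python
# A minimal sketch of the agent loop: perceive the environment through a
# "sensor" reading, decide, then act through an "actuator". Hypothetical example.
class ThermostatAgent:
    def __init__(self, target_temp: float):
        self.target_temp = target_temp

    def perceive(self, temperature: float) -> float:
        # Sensor input: the current room temperature is the agent's percept.
        return temperature

    def act(self, percept: float) -> str:
        # Actuator output: a decision made on the basis of the percept.
        return "heater_on" if percept < self.target_temp else "heater_off"

agent = ThermostatAgent(target_temp=20.0)
for reading in [18.5, 19.9, 21.3]:          # simulated environment readings
    print(reading, "->", agent.act(agent.perceive(reading)))
```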
Benefits of Artificial Intelligence?
There are many artificial intelligence pros and cons, but we prefer to start with the benefits of artificial intelligence. AI can help businesses: reduce costs and time spent on repetitive tasks, accelerate revenue growth, glean more value from marketing technologies and more actionable insights from marketing data, shorten the sales cycle and generate greater ROI on campaigns, and predict consumer behavior and needs more accurately while creating better personalized consumer experiences at scale.
Automates Repetitive Tasks
AI excels at automating data-driven, repetitive tasks that consume energy and time humans could use more effectively in countless other ways. AI performs complex manufacturing tasks, responds to emails, and completes other repetitive tasks for humans.
Improves Accuracy of Predictions
AI and ML learning technologies recognize big data patterns at scale and use them to make predictions. For example, AI predicts which plant equipment is ready for maintenance or replacement, which products you might like on Amazon, which Google Maps route is fastest, and which of your leads is ready to buy.
Improves Pattern Detection
AI detects patterns in words, numbers, and images much more effectively than humans, enabling you to unlock your phone securely with your face, for example, or suggesting which words in a text will come next or how to correct that typo. It’s AI pattern detection that allows self-driving cars to identify and avoid real-world obstacles.
Increases GDP
AI delivers benefits beyond traditional software because it can improve on its own, offering compounding benefits over time. According to PwC, by 2030 AI will lift global GDP by 14%, contributing $15.7 trillion total to the world economy in the form of both increased consumption and increased productivity.
Enhances Worker Productivity, Business Value
AI enhances the value of individual workers and businesses. According to Gartner, AI augmentation will have generated approximately 6.2 billion hours of worker productivity and $2.9 trillion of business value around the world in 2021.
Disadvantages of Artificial Intelligence?
We’ve covered some of the advantages of artificial intelligence, but AI also has its disadvantages.
Demands High Amounts of Data
AI is only as good as its source data. To merely get started with AI, most companies need a minimum amount of clean, high-quality data. For some, this demands significant investment and work with internal data, which is among the great challenges of artificial intelligence for business. Some AI tools get around this by using third-party datasets, either from online sources or proprietary datasets owned or collected by the vendor, and applying proprietary algorithms to them.
Possible Failure
Bad AI is potentially really bad—so much so that the concept has inspired numerous books and movies. In reality, AI is not going to become self-aware and take over the human race. But AI can still make bad mistakes and poor decisions that have seriously negative effects.
AI systems are deployed across countless devices at scale. AI could hurt millions of people financially or physically if it starts making harmful, bad, or simply anomalous decisions. Self-driving cars are perhaps the best example of this, since a flaw in the AI of a line of self-driving cars could show up in millions of vehicles.
Explainability
Some AI systems offer little to no transparency for users into how the AI makes decisions. These “black box” systems are fine when they work well. But if an AI takes a damaging action, makes a poor prediction, or makes a mistake and there is no way to diagnose what went wrong, it becomes a far less useful tool. This is why explainability matters in AI tools.
Bias
What is bias in machine learning and AI? In statistics, bias refers to the difference between an estimator’s expected value and the true value of the quantity it estimates. Sometimes, in the context of AI, the concept of institutional bias arises, but this is a different concept. AI makes predictions and decisions based on data, and if that data contains institutional bias, whether conscious or unconscious, the AI system could discriminate against certain types or groups of people in its decisions.
Human Impact
AI will affect some jobs, although there is no way to accurately predict how many. Some people are concerned about long-term unemployment and widespread job loss. However, it also seems likely that AI will enhance and create more jobs than it eliminates.
Applications of Artificial Intelligence?
Artificial intelligence has an interesting array of applications even now. Examples of AI and ML include industrial robotics, personal assistant chatbots, and supply chain forecasting.
Industrial Robotics. Industrial robots can sense when they require maintenance to avoid costly downtime; monitor their own performance and accuracy; and typically can function in unknown environments.
Personal Assistants. Personal assistant tools such as Alexa by Amazon, Siri by Apple, Google Home by Google, and Cortana by Microsoft are interfaces for human-AI interaction. These personal assistants act like robotic concierges, answering questions and providing information, booking a hotel and adding the reservation and other events to calendars, sending emails or messages, scheduling meetings, and such.
Supply Chain Forecasting. Amazon and related e-commerce sites use data from your previous purchases, trends, information from other buyers of similar items, and additional details we just wouldn’t know to look for as humans to forecast what you might want to buy.
Artificial Intelligence for App Development?
AI is changing the face of mobile app development in several critical ways.
Personalized Content Strategy
Advanced AI algorithms play a significant role in suggesting personalized content that helps enterprises engage with their audience around their online offerings and services. AI accurately perceives target audience likes and dislikes, enabling a more engaging, relevant content strategy that is much more likely to receive improved traction. According to McKinsey, offering personalized content enables brands to drive an increase in sales.
AI-Powered Voice-Control
Among AI’s most notable impacts in the realm of mobile app development is voice assistant solutions such as Amazon Alexa, Apple Siri, Google Assistant, and others. AI-powered voice interactions and the merging of voice controls and mobile will continue to shape new user experiences and facilitate easier, more seamless interactions on mobile apps using everyday language.
Context-Aware, Precise Search Engine Output
Most mobile app users make use of voice and text inputs for searching content, and these input methods are being extended further with intelligent AI-powered predictions and suggestions. AI-powered visual search can also help optimize the scope of search engines.
AI-based Translation in Actual Time
Many translation apps exist, but most cannot function offline. AI can help handheld devices conduct real-time translation without internet connectivity much like a digital interpreter service.
More Powerful App Authentication
AI is poised to have a significant impact on user authentication and app security. The hallmark of successful cybercrime is penetrating app security before defenses can respond, so it is essential for mobile apps to assess security threats in advance and prevent them in real time. AI can help monitor for irregularities and anomalies in user behavior that typically signal vulnerabilities and security threats.
What is Machine Learning?
Machine learning is a subfield of AI, which is defined broadly as the ability of a machine to imitate human behavior and intelligence. Artificial intelligence systems perform complex tasks using computer models that allow the systems to function in ways similar to how humans learn and solve problems.
These machines can understand a text written in natural language, recognize a visual scene, or perform an action in the physical world. They are ready to interact with and change their real-world environment. And they are equipped to learn from their experience without explicitly being programmed.
Machine learning arose from the need to design computer systems that could master more complex tasks. Traditional computer programming based on software demands precise and detailed instructions to cover all possible outcomes. This works well for basic, repetitive tasks, but for things like recognizing faces among thousands of people, the approach is too resource-consuming or impossible.
Machine learning starts with data, which might include photos of people or products, numbers, text, repair records or bank transactions, sales reports, or time series data from sensors, among other information. The machine learning model requires training data, and the more training data there is, and the higher its quality, the better the program it can help produce.
Next, programmers select a machine learning model type and provide the data so the model can train itself to make predictions or find patterns. The human user can also help reinforce more accurate results over time, improving the model and changing its parameters.
How does Machine Learning work?
Machine learning systems can be descriptive, predictive, or prescriptive. Descriptive machine learning systems explain what happens using data; predictive systems predict what will happen using data; and prescriptive systems suggest action based on data.
There are a variety of machine learning algorithms, but the three major types of ML algorithms are supervised learning, unsupervised learning, and reinforcement learning. Each is important to understanding how machine learning works.
What is Supervised Machine Learning?
The most common type of machine learning today is supervised machine learning. Supervised machine learning prediction models grow more accurate over time through training on labeled data sets. For example, an algorithm would be trained with pictures of things labeled by humans, and the system would learn how to identify the things in the pictures on its own. Common applications of supervised machine learning include predictive analytics, media recommendation systems, and spam detection.
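Here is a minimal sketch of supervised learning, assuming scikit-learn is installed; the tiny labeled dataset is invented purely to illustrate a spam filter:

```python
# A small supervised-learning sketch: a spam filter trained on human-labeled examples.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts  = ["win a free prize now", "meeting at 10am tomorrow",
          "claim your free reward", "lunch with the project team"]
labels = ["spam", "ham", "spam", "ham"]                  # the human-provided labels

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)                                  # learn from the labeled data
print(model.predict(["free prize waiting for you"]))      # likely -> ['spam']
```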
What is Unsupervised Machine Learning?
Unsupervised machine learning algorithms detect trends or patterns in unlabeled data. For example, an unsupervised ML program could identify customer segments making purchases in online sales data. Other unsupervised machine learning applications include anomaly detection and medical imaging.
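A minimal sketch of the customer-segment example, again assuming scikit-learn; the spend and order counts below are invented for illustration:

```python
# A small unsupervised-learning sketch: clustering customers by annual spend and
# order count with no labels provided; the algorithm discovers the segments itself.
import numpy as np
from sklearn.cluster import KMeans

purchases = np.array([[200, 2], [220, 3], [2100, 25], [1950, 22], [240, 2]])
segments = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(purchases)
print(segments)   # e.g. [0 0 1 1 0] -- two segments found (label numbers are arbitrary)
```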
What is Reinforcement Learning?
Reinforcement machine learning trains machines through trial and error in an environment rather than from a fixed data set, using a reward system that reinforces the system for taking the best actions. Reinforcement learning helps the system learn over time which actions to take and when it makes the right decisions, maximizing positive rewards, for example when it automates stock trading, helps autonomous vehicles drive, creates self-improving industrial robots, or trains models to play games.
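As a toy sketch of the trial-and-error idea, here is tabular Q-learning on a hypothetical five-cell corridor, assuming NumPy is installed; every detail of the environment is invented for illustration:

```python
# A toy reinforcement-learning sketch: tabular Q-learning on a 5-cell corridor.
# The agent starts in cell 0 and receives a reward of 1 only for reaching cell 4.
# For brevity it explores by acting randomly; Q-learning is off-policy, so the
# Q-table still converges toward the value of acting optimally.
import random
import numpy as np

n_states, n_actions = 5, 2                  # actions: 0 = move left, 1 = move right
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.5, 0.9                     # learning rate and discount factor

for episode in range(300):
    state = 0
    while state != 4:                       # cell 4 is the rewarding, terminal cell
        action = random.randrange(n_actions)
        next_state = max(0, state - 1) if action == 0 else min(4, state + 1)
        reward = 1.0 if next_state == 4 else 0.0
        # Reinforce actions in proportion to the reward they eventually lead to.
        Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
        state = next_state

print(np.argmax(Q[:4], axis=1))             # learned policy: [1 1 1 1] -> always move right
```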
Natural Language Processing (NLP)
In natural language processing (NLP), instead of the data that normally forms the basis of computer programs, machines learn to understand the natural language that humans speak and write. This allows machines to recognize, understand, translate, and respond to language, including by creating original text. NLP makes Alexa and her friends possible.
Neural Networks
Neural networks are a specific class of commonly used machine learning algorithms. Modeled on the interconnected layers and processing nodes of the human brain, artificial neural networks are made up of connected nodes or cells, each of which processes its inputs and sends its output on to other neurons. Each node applies a function to the data it receives, and each piece of labeled data moves through the layers of cells, which assess the information to arrive at an output.
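Here is a minimal sketch of that forward flow in NumPy; the layer sizes are arbitrary and the weights are random rather than learned, purely to show how each layer's nodes process inputs and pass their outputs forward:

```python
# A minimal feed-forward sketch: each layer of "nodes" combines its inputs with
# weights and passes the result to the next layer. Weights here are random for
# illustration; training would adjust them to fit labeled data.
import numpy as np

def relu(x):
    return np.maximum(0, x)                     # a common node activation function

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)   # input layer (4 features) -> hidden layer (8 nodes)
W2, b2 = rng.normal(size=(8, 3)), np.zeros(3)   # hidden layer -> output layer (3 classes)

x = rng.normal(size=(1, 4))                     # one input example
hidden = relu(x @ W1 + b1)                      # each hidden node processes the inputs
scores = hidden @ W2 + b2                       # hidden outputs feed the output layer
print(scores)
```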
What is Deep Learning (DL)?
As touched on above, deep learning (DL) is a subset of machine learning that attempts to eliminate the need for pre-processed data by emulating human neural networks. Deep learning algorithms ingest and analyze massive quantities of unstructured data and learn with minimal human intervention.
Deep learning networks are multi-layered neural networks that process extensive amounts of data and assess the appropriate “weight” of each piece of data. Deep learning algorithms, like other types of machine learning, can improve over time. Like other neural networks, deep learning is modeled after the human brain. As such, it has many practical machine learning applications, including autonomous vehicles, computer vision, chatbots, facial recognition, medical diagnostics, and natural language processing.
Generative Adversarial Networks
Generative Adversarial Networks (GANs) pit two neural networks against each other to achieve a goal. For example, two networks can compete to generate an image of a human, either to duplicate a real image or create a new one. One generates a photo and the other critiques it, and the process repeats at speed until an image that is indistinguishable from a real one is created.
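Here is a toy sketch of the generator-versus-critic loop, assuming PyTorch is installed. Instead of images, the "real" data are simply numbers drawn from a normal distribution centered at 4; everything about the setup is invented for illustration:

```python
# A toy GAN sketch: a generator learns to produce samples that resemble the
# "real" data (numbers near 4), while a discriminator learns to tell real from fake.
import torch
import torch.nn as nn

torch.manual_seed(0)
G = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))                 # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())   # critic
loss_fn = nn.BCELoss()
opt_g = torch.optim.Adam(G.parameters(), lr=0.01)
opt_d = torch.optim.Adam(D.parameters(), lr=0.01)

for step in range(2000):
    real = torch.randn(32, 1) + 4.0           # "real" samples: a normal distribution around 4
    fake = G(torch.randn(32, 1))              # generated samples from random noise

    # Train the critic to label real samples 1 and generated samples 0.
    d_loss = loss_fn(D(real), torch.ones(32, 1)) + loss_fn(D(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Train the generator to fool the critic into labeling its output 1.
    g_loss = loss_fn(D(fake), torch.ones(32, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

print(G(torch.randn(1000, 1)).mean().item())  # should drift toward ~4 as the generator improves
```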
Machine Learning Algorithms and their Applications
Here are some of the most popular types of machine learning models.
Linear Regression
Linear regression is a simple algorithm common even to introductory statistics classes, and is used frequently by data scientists thanks to its simplicity and interpretability.
Simple linear regression models the linear relationship between two variables by fitting the data with a linear equation. The primary goal of machine learning regression models is to make predictions by determining the strength of correlation between variables.
Logistic regression is a form of linear regression used for classification. Used before the field of generalized linear modeling (GLM) was formalized, logistic regression converts the typical linear regression response variable into the desired binary output. In other words, the dependent variable y is a binary value rather than a continuous value such as “10 meters.” Logistic regression outputs these binary values using one of many link functions, such as the sigmoid or squashing function, logit, probit, and others.
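Here is a small sketch contrasting the two, assuming scikit-learn is installed; the study-hours data are invented for illustration:

```python
# Linear regression predicts a continuous value; logistic regression predicts a
# binary class by passing a linear combination through a sigmoid link function.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

hours  = np.array([[1], [2], [3], [4], [5], [6]])
score  = np.array([52, 55, 61, 64, 70, 74])     # continuous target -> linear regression
passed = np.array([0, 0, 0, 1, 1, 1])           # binary target     -> logistic regression

lin = LinearRegression().fit(hours, score)
log = LogisticRegression().fit(hours, passed)

print(lin.predict([[7]]))        # predicted score for 7 hours of study
print(log.predict_proba([[7]]))  # probability of fail/pass via the sigmoid
```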
Decision Trees
Although unsupervised decision tree models exist conceptually, in general decision trees are a supervised learning algorithm that bases its prediction of a target variable’s value on multiple input variables. Decision trees model decisions and their consequences and resemble flowcharts. They can represent algorithms that contain only conditional statements such as “if,” “then,” and “else.” Decision trees are among the most popular machine learning algorithms due to their simplicity and interpretability.
Classification and Regression Tree (CART) models come in two varieties. Classification trees use target variables with a discrete set of values, while regression trees use target variables with continuous values.
Although all decision trees follow a top-down structure, the “best” way to split a tree is measured by different metrics depending on the algorithm. Regression trees are typically assessed using variance reduction, while classification trees use metrics such as information gain, Gini impurity, and chi-square.
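For a concrete picture, here is a brief sketch, assuming scikit-learn is installed, of a CART classification tree split by Gini impurity and printed as the if/then/else rules described above:

```python
# A small decision-tree sketch: a CART classification tree on the iris dataset,
# with nodes split by Gini impurity and the fitted tree printed as nested rules.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(criterion="gini", max_depth=2, random_state=0).fit(X, y)
print(export_text(tree))   # the tree as readable if/then/else conditions
```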
The predictive performance of models can be enhanced by using ensemble methods to combine decision trees together. By grouping multiple weak learners to form a stronger learner using an ensemble model, you get a sort of hive mind analogy for ML.
Two common ensemble methods are boosting and bootstrap aggregating or “bagging.” Examples of boosting models are XGBoost and AdaBoost. An implementation of bagging is the random forest model.
Random forest models are a type of decision tree model, but they build “forests” of multiple trees by bootstrapping “random” samples. In machine learning and statistics, bootstrapping is a resampling technique that involves repeatedly drawing samples from the source data, typically to estimate or infer results or parameters. Because the sampling is done with replacement, the same data point may appear in a resampled dataset multiple times.
Bagging can ultimately reduce variance, but random forest takes the additional step of splitting the nodes on, and training the individual trees with, random subsets of features. Thus each tree is relatively independent compared to more basic bagging. Random forest models have a few advantages over simple decision trees, including higher predictive performance, an improved bias-variance trade-off, faster learning from feature subsets, and a degree of built-in feature selection as a result.
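Here is a brief random forest sketch, assuming scikit-learn is installed, that makes the bootstrap sampling and per-split feature subsets explicit:

```python
# A random-forest sketch: many trees, each trained on a bootstrap sample of the
# data, with a random subset of features considered at every split.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

forest = RandomForestClassifier(
    n_estimators=200,       # number of trees in the "forest"
    bootstrap=True,         # each tree sees a resampled (with replacement) training set
    max_features="sqrt",    # random subset of features considered at each split
    random_state=0,
).fit(X_train, y_train)
print("accuracy:", forest.score(X_test, y_test))
```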
Simpler, highly interpretable decision trees are best if the goal is both to make predictions and to know which features matter more than others. But for very large datasets where exact interpretability is less of an issue, and the increased computational resources and training times are acceptable, random forest models can be a good solution.
Gradient Boosting Machines
Gradient boosting machines (GBMs) are a supervised ML technique used for classification and regression. GBM is an ensemble method that uses boosting, building a stronger model from a collection of weaker individual models. Boosting can produce any type of ensemble, although ensembles of decision trees called “gradient boosted trees” are the most common.
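Here is a brief gradient boosting sketch, assuming scikit-learn is installed, in which shallow trees are added one after another, each correcting the ensemble built so far:

```python
# A gradient-boosting sketch: shallow "gradient boosted trees" added sequentially,
# each one fitting the errors left by the ensemble of trees before it.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

gbm = GradientBoostingClassifier(
    n_estimators=100,     # number of boosting stages (trees)
    learning_rate=0.1,    # how strongly each new tree corrects the ensemble
    max_depth=3,          # keep the individual trees weak and shallow
    random_state=0,
).fit(X_train, y_train)
print("accuracy:", gbm.score(X_test, y_test))
```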
Convolutional Neural Networks
Next up on the machine learning algorithms list: Convolutional Neural Networks (ConvNet or CNN). CNNs are a class of neural network used in computer vision and image recognition.
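As a rough sketch of the kind of architecture involved, here is a minimal CNN defined with Keras, assuming TensorFlow is installed; the layer sizes and the 28x28 image shape are arbitrary choices for illustration:

```python
# A minimal convolutional neural network sketch for image classification.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28, 1)),                      # 28x28 grayscale images
    tf.keras.layers.Conv2D(16, kernel_size=3, activation="relu"),  # learn local image features
    tf.keras.layers.MaxPooling2D(),                                # downsample the feature maps
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),               # scores for 10 classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
```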
What is Applied Machine Learning?
Applied machine learning refers to applying machine learning (ML) to various data-related challenges. It is similar to applied mathematics, which applies the theories of pure mathematics to practical problems.
Benefits of Machine Learning
There are a host of ML benefits, but here are some of the most common advantages of machine learning techniques:
Continuous, Incremental Improvement
Machine learning algorithms can learn from new data, immediately improving the model’s efficiency and accuracy to make better decisions with subsequent training. For massive enterprises such as Amazon, the overwhelming quantity of training data collected means high accuracy from the recommendation engine.
Automation
Machine learning automates a host of decision-making tasks. This frees up developer time for more productive uses. For example, social media sentiment analysis can detect negative tweets and other social activity and deploy a first-level customer support chatbot to reply instantly.
Identification of Trends and Patterns
Machine learning algorithms can be used for various regression and classification problems to identify a range of patterns and trends with tremendous amounts of data.
Broad Applicability
Machine learning can benefit organizations across industries, from education to finance and from medicine to defense, helping them analyze patterns and trends in past data, automate processes, cut costs, generate profits, and predict the future. Applications like email spam filtering, GPS traffic tracking, spell check and correction, and text prediction are applicable in just about every industry.
Machine Learning Challenges
Explainability is an issue for machine learning just as it is for AI, as explained above. And bias and unintended outcomes can be particularly trying for machine learning applications such as chatbots trained in natural language, because they learn racist, offensive language, too.
Machine learning algorithms can also create social problems or make them worse. For example, Facebook’s machine learning algorithms help show people content they will find more interesting and engaging based on their past behavior—but those same algorithms have also been shown to promote extreme content, conspiracy theories, and inaccurate content that leads to polarization.
Machine Learning Use Cases
Practically speaking, what is machine learning used for and what can machine learning do? There are so many use cases for machine learning, but here are some of the most obvious examples.
AI and ML in Banking and Finance
Machine learning algorithms can be used to enhance network security, enable financial monitoring that detects signs of fraud or money laundering, and help make investment predictions by surfacing insights into specific market changes sooner than traditional investment models can.
AI and ML in Manufacturing
In manufacturing, ML can be deployed for predictive maintenance, predictive quantity and yield, to create digital twins for instant diagnostics and other purposes, in smart manufacturing and generative design, in energy consumption forecasting, and in cognitive supply chain management.
AI and ML in Cyber Security
Machine learning techniques such as natural language processing can empower real-time email monitoring and anomaly detection and defuse attacks such as phishing attempts. Machine learning can also effectively fight bots, using supervised learning on signals such as message variability, temporal patterns, and response rate to identify and classify them. Convolutional Neural Networks (CNNs) can help control drive-by download attacks and reduce the adverse impact of malicious URLs.
AI and ML in Marketing
Machine learning techniques enhance customer journey optimization. ML algorithms help to develop data-driven, realistic recommendations by identifying and scoring each customer path and determining real-time points of interest along the customer journey.
Machine learning enables precision content curation, extracting content from online sources and customizing it based on customer preferences. Using natural language processing, deep learning, and clustering, among other ML techniques, teams can improve customer engagement and ROI.
AI and ML in Healthcare
Convolutional Neural Network (CNN) algorithms are deployed extensively in the healthcare sector to identify and classify images. CNNs have high accuracy rates of up to 95% in skin cancer detection, better than manual efforts and processes. Various machine learning and deep learning techniques can also predict and diagnose other medical conditions.
Machine learning techniques are useful in pandemic management. For example, ML algorithms may assist in timely prediction of Covid-19 mortality risk and related resource allocation and treatment. Support Vector Machine (SVM) algorithms can leverage both noninvasive clinical and invasive laboratory patient information for predictive modeling. Data such as age, blood oxygen levels, previous medical conditions, data from wearable devices, symptoms, and more can be given to the ML models to yield accurate predictions.
Electronic Health Records (EHR) can ease the administrative burden on healthcare workers, and collecting healthcare records in electronic form is easier with NLP tools. NLP tools can categorize words and phrases automatically to include in notes and EHRs after a patient visit. The tools can also generate graphs and other visual charts for physicians.
AI and ML in Retail
We’ve already mentioned recommendation engines at length, but there are many other uses of ML in retail. For example, online retailers use ML algorithms to determine dynamic pricing for products and services. Machine learning techniques can also effectively forecast demand for more efficient stocking.
These are a few of the more common machine learning application examples.
Relationship between AI and ML
Artificial intelligence is a field, and machine learning is a subset of AI. Deep learning is a subset of ML. But what are the practical similarities and differences between AI and ML? In other words, where are the points of overlap?
AI and ML tools and frameworks include: AutoML, Caffe, CNTK, Google ML Kit, H2O (an open source AI platform), Keras, MXNet, OpenNN, PyTorch, scikit-learn, TensorFlow, and Theano.
Importance of AI and ML
Data is an increasingly important business asset, and the amount of data generated and stored globally is increasing exponentially. Massive quantities of data are unmanageable and unusable without automated systems.
Artificial intelligence, machine learning, and deep learning allow organizations to extract insights and enhanced value from their data, delivering more advanced system capabilities and automating business tasks. AI/ML can help organizations of all sizes achieve measurable outcomes with the potential to transform all aspects of a business, including:
Leveraging AI and ML
Machine learning is at the heart of many modern business models, such as Google’s search engine and Netflix’s suggestion algorithm. Other businesses are still working out how best to use machine learning.
Some of the best business uses of AI and ML right now include:
AI and ML Services by Provoke Solutions
We are specialists in using data analytics to enhance the modern worker experience. When legacy apps and outdated data models get in the way of agility and competitiveness, Provoke finds the simplest and most cost-effective way to drive change and enable managers to make data-driven decisions leveraging the power of AI and ML.
Some examples of the practices we use to gain insights are: