The rule used to be that data architectures were designed independently of technologies and products: first design the data architecture, then select the right products. This was achievable because many products were reasonably interchangeable. But is that still possible? In recent years we have been confronted with an unremitting stream of technologies for processing, analyzing and storing data, such as Hadoop, NoSQL, NewSQL, GPU databases, Spark and Kafka. These technologies have a major impact on data processing architectures, such as data warehouses and streaming applications. Most importantly, many of these products have unique internal architectures and directly enforce certain data architectures. So, can we still develop a technology-independent data architecture? This session explains the potential influence of these new technologies on data architectures.
Organizations worldwide are facing the challenge of effectively analyzing their exponentially growing data stores. Most data warehouses were designed before the big data explosion and struggle to support modern workloads. To make do, many companies are cutting down on their data pipelines, severely limiting the productivity of data professionals.
This session will explore the case studies of three organizations that used the power of GPUs to broaden their queries, analyze significantly more data, and extract previously unobtainable insights.
Have you ever been disappointed with the results of traditional data requirements gathering, especially for BI and data analytics? Ever wished you could ‘cut to the chase’ and somehow model the data directly with the people who know it and want to use it? However, that’s not a realistic alternative, is it? Business people don’t do data modeling! But what if that wasn’t the case?
In this lively session Lawrence Corr shares his favourite collaborative modeling techniques – popularized in books such as ‘Business Model Generation’ and ‘Agile Data Warehouse Design’ – for successfully engaging stakeholders using BEAM (Business Event Analysis and Modeling) and the Business Model Canvas for value-driven BI requirements gathering and star schema design. Learn how visual thinking, narrative, 7Ws and lots of Post-it™ notes can get your stakeholders thinking dimensionally and capturing their own data requirements with agility.
This session will cover:
Cloud-based services, in-memory databases, and massively parallel (ML) database applications dominate the BI marketing hype nowadays.
But what has really changed over the last decade in the DBMS products on offer? Which players are riding the technology curve successfully? Should we worry about the impact of new hardware such as GPUs and non-volatile memory? And should we rely on programmers to reinvent the wheel for each and every database interaction? What hot technologies are brewing in the kitchens of database companies?
A few topics we will cover in more detail:
More enterprises are seeking to transform themselves into data-driven, digitally based organisations. Many have recognised that this will not be solely achieved by acquiring new technologies and tools. Instead they are aware that becoming data-driven requires a holistic transformation of existing business models, involving culture change, process redesign and re-engineering, and a step change in data management capabilities.
To deliver this holistic transformation, creating and delivering a coherent and overarching data strategy is essential. Becoming data-driven requires a plan which spells out what an organisation must do to achieve its data transformation goals. A data strategy can be critical in answering questions such as: How ready are we to become data-driven? What data do we need to focus on, now and in the future? What problems and opportunities should we tackle first, and why? What part do business intelligence and data warehousing have to play in a data strategy? How do we assess a data strategy’s success?
This session will outline how to produce a data strategy and supporting roadmap, and how to ensure that it becomes a living and agile blueprint for change rather than a statement of aspiration.
This session will cover:
Most analytic modelers wait until after they’ve built a model to consider deployment. Doing so practically ensures project failure. Their motivations are typically sincere but misplaced. In many cases, analysts want to first ensure that there is something worth deploying. However, there are very specific design issues that must be resolved before meaningful data exploration, data preparation and modeling can begin. The most obvious of many considerations to address ahead of modeling is whether senior management truly desires a deployed model. Perhaps the perceived purpose of the model is insight and not deployment at all. There is a myth that a model that manages to provide insight will also have the characteristics desirable in a deployed model. It is simply not true. No one benefits from this lack of foresight and communication. This session will convey the imperative preparatory considerations for arriving at accountable, deployable and adoptable projects, and Keith will share carefully chosen project design case studies showing how deployment is a critical design consideration.
Jan Veldsink (Lead Artificial Intelligence and Cognitive Technologies at Rabobank) will explain how to get the organization right for Machine Learning projects. In large organizations, access to and use of all the right and relevant data can be challenging. In this presentation Jan will explain how to overcome the problems that arise and how to organize the development cycle, from development to test to deployment, and beyond Agile. He will also show how he has used BigML and how the audience can fit BigML into their strategy. As humans we also learn from examples, so in this talk he will show some showcases of real projects in the financial crime area.
Most people will agree that data warehousing and business intelligence projects take too long to deliver tangible results. Often by the time a solution is in place, the business needs have changed. With all the talk about Agile development methods like Scrum and Extreme Programming, the question arises as to how these approaches can be used to deliver data warehouse and business intelligence projects faster. This presentation will look at the 12 principles behind the Agile Manifesto and see how they might be applied in the context of a data warehouse project. The goal is to determine a method or methods to achieve more rapid (2-4 week) delivery of portions of an enterprise data warehouse architecture. Real-world examples with metrics will be discussed.
Many who work within organizations that are in the early stages of their digital transformation are surprised when an accurate model — built with good intentions and capable of producing measurable benefit to the organization — faces organizational resistance. No veteran modeler is surprised by this, because all projects face organizational resistance to some degree. This predictable and eminently manageable problem simply requires attention during the project’s design phase. Proper design will minimize resistance, and most projects will proceed to their natural conclusion: deployed models that provide measurable and purposeful benefit to the organization. Keith will share carefully chosen case studies based upon real-world projects that reveal why organizational resistance was a problem and how it was addressed.
Business teams are raising the bar on Business Intelligence and Data Warehouse support. BI competence centers and data managers have to respond to expanding requirements: offer more data, more insight, maximal quality and accuracy, appropriate governance, and more – all to provide guidance for enhancing the business. The promise of new technologies such as Artificial Intelligence is attracting increased business interest and stimulates data-driven innovation and the accelerated development of smarter applications. Data science teams are growing and can take over the lead from BI competence centers.
– How should such developments, which make sense from a business improvement perspective, be supported by data management activity?
– How to control privacy and create an effective data governance strategy?
– How to design appropriate data warehouses, BI functionality and data access control when business interests change frequently and application development evolves rapidly, driven by “AI initiatives”?
This session will review techniques and technology for effective (meta)data management and smarter BI for widening data landscapes. We will elaborate on the details of an appropriate governance approach supporting advanced Business Intelligence and Insight Exploration functions.
The world of data warehousing has changed! With the advent of Big Data, Streaming Data, IoT, and the Cloud, what is a modern data management professional to do? It may seem to be a very different world with different concepts, terms, and techniques. Or is it? Lots of people still talk about having a data warehouse or several data marts across their organization. But what does that really mean today? How about the Corporate Information Factory (CIF), the Data Vault, an Operational Data Store (ODS), or just star schemas? Where do they fit now (or do they)? And now we have the Extended Data Warehouse (XDW) as well. How do all these things help us bring value and data-based decisions to our organizations? Where do Big Data and the Cloud fit? Is there a coherent architecture we can define? This talk will endeavor to cut through the hype and the buzzword bingo to help you figure out which parts of this are helpful. I will discuss what I have seen in the real world (working and not working!) and a bit of where I think we are going, and need to go, today and beyond.
For years, the world of Business Intelligence (BI) has consisted of building reports and dashboards. But the BI world around us is changing fast. (Statistical) analytics is being deployed more and more, every student receives thorough training in R, and the use of data is shifting from IT to the business. But are we actually ready for this new way of working? Are we able to share the newly gained insights? And can we really change management’s gut feeling?
During this presentation we will explore this changing world. We will discuss how the data-driven storytelling process can be applied within BI projects and which roles this requires, and you will be given practical tools to communicate newly gained insights through storytelling.
• Insight into the data-driven storytelling process
• Visual data exploration
• Organizational changes
• Communicating through infographics
• Combining data, visualization and a story.
The close links between data quality and business intelligence & data warehousing (BI/DW) have long been recognised. Their relationship is symbiotic. Robust data quality is a keystone for successful BI/DW; BI/DW can highlight data shortcomings and drive the need for better data quality. A key driver for the invention of data warehouses was that they would improve the integrity of the data they store and process.
Despite this close bond between these data disciplines, their marriage has not always been a successful one. Our industry is littered with failed BI/DW projects, with an inability to tackle and resolve underlying data quality issues often cited as a primary reason for failure. Today many analytics and data science projects are also failing to meet their goals for the same reason.
Why has the history of BI/DW been plagued with an inability to build and sustain the solid data quality foundation it needs? This presentation tackles these issues and suggests how BI/DW and data quality can and must support each other. The Ancient Greeks understood this. We must do the same.
This session will address:
With the advent of cloud computing it has become possible to process data faster and to let infrastructure scale along with the required storage and CPU capacity. Where batch computing used to be the norm and large data warehouses were developed, we now see a transition to data lakes and real-time processing. First came the lambda architecture, which added a streaming processing layer alongside batch processing. And since 2014 we have seen the kappa architecture drop the batch processing layer of the lambda architecture altogether.
This presentation examines kappa architectures. What are the advantages and disadvantages of processing data with this architecture compared to old-style batch processing or the intermediate lambda architecture? This question will be addressed on the basis of experiences at KPN with a product based on a kappa architecture: the Data Services Hub.
Central to the answer are the traits nowadays attributed to innovative technologies: homogenization and decoupling, modularity, connectivity, programmability, and the ability to benefit from ‘user’ traces.
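To make the contrast concrete, here is a minimal sketch of the kappa idea in Python: a single append-only event log is the only source of truth, a streaming fold maintains the serving view, and reprocessing simply means replaying the log from the start – there is no separate batch layer. The event shape and names are invented for illustration and do not reflect the internals of KPN’s Data Services Hub.

```python
from collections import defaultdict

# Hypothetical event log standing in for a Kafka topic: in a kappa
# architecture this append-only log is the single source of truth.
event_log = [
    {"offset": 0, "customer": "A", "bytes_used": 120},
    {"offset": 1, "customer": "B", "bytes_used": 80},
    {"offset": 2, "customer": "A", "bytes_used": 40},
]

def build_view(log, from_offset=0):
    """Fold the stream into a materialized view (total usage per customer).

    There is no batch layer: reprocessing means replaying the log
    from offset 0 with new (or corrected) logic.
    """
    view = defaultdict(int)
    for event in log:
        if event["offset"] >= from_offset:
            view[event["customer"]] += event["bytes_used"]
    return dict(view)

# Normal operation: consume incrementally; recovery or logic change: replay all.
print(build_view(event_log))                 # {'A': 160, 'B': 80}
print(build_view(event_log, from_offset=2))  # incremental tail only
```

In a production setting the log would typically be a Kafka topic and the fold a stream processing job, but the principle stays the same.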
Following up on its successful predecessor, we are happy to announce the release of Quipu 4.0. We are taking things a step further by introducing the next level in data management automation, using patterns as the guiding principle and making data warehouse automation, data migration, big data applications and similar projects much faster and easier. Together with customers we can develop and add new building blocks fast, putting customer requirements first. In this presentation we highlight our vision and invite you to be part of our development initiative.
We have known public data marketplaces for a long time. These are environments that provide all kinds of data products that can be purchased or used. In recent years, organizations have started to develop their own data marketplace: the enterprise data marketplace. An EDM is developed by its own organization and supplies data products to internal and external data consumers. Examples of data products are reports, data services, data streams, batch files, etcetera. The essential difference between an enterprise data warehouse and an enterprise data marketplace is that with the former users are asked what they need and with the latter it is assumed that the marketplace owners know what the users need. Or in other words, we go from demand-driven to supply-driven. This all sounds easy, but it isn’t at all. In this session, the challenges of developing your own enterprise data marketplace are discussed.
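As a concrete, entirely hypothetical illustration of that supply-driven idea, the sketch below models a minimal catalog for an enterprise data marketplace in Python: the owners register data products up front, and consumers subscribe to what is already on offer rather than commissioning it. All product kinds and field names are invented.

```python
from dataclasses import dataclass, field

@dataclass
class DataProduct:
    """A product published in a hypothetical enterprise data marketplace."""
    name: str
    kind: str          # e.g. "report", "data service", "data stream", "batch file"
    owner: str
    description: str
    consumers: list = field(default_factory=list)

class Marketplace:
    def __init__(self):
        self.catalog = {}

    def publish(self, product: DataProduct):
        # Supply-driven: owners decide what to offer before anyone asks.
        self.catalog[product.name] = product

    def subscribe(self, product_name: str, consumer: str):
        # Demand shows up as subscriptions to the existing offering.
        self.catalog[product_name].consumers.append(consumer)

edm = Marketplace()
edm.publish(DataProduct("daily_churn_scores", "data stream", "analytics team",
                        "Per-customer churn propensity, refreshed daily"))
edm.subscribe("daily_churn_scores", "marketing")
print(edm.catalog["daily_churn_scores"])
```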
Agile techniques emphasise the early and frequent delivery of working software, stakeholder collaboration, responsiveness to change and waste elimination. They have revolutionised application development and are increasingly being adopted by DW/BI teams. This course provides practical tools and techniques for applying agility to the design of DW/BI database schemas – the earliest needed and most important working software for BI.
The course contrasts agile and non-agile DW/BI development and highlights the inherent failings of traditional BI requirements analysis and data modeling. Via classroom sessions and team exercises, attendees will discover how modelstorming (modeling + brainstorming) data requirements directly with BI stakeholders overcomes these limitations.
Learning objectives
You will learn how to:
Who Should Attend
You receive a free copy of the book Agile Data Warehouse Design by Lawrence Corr.
Agile Dimensional Modeling Fundamentals
Dimensional Modelstorming Tools
Star Schema Design (see the sketch after this outline)
How Much/How Many: Designing facts, measures and KPIs (Key Performance Indicators)
Who & What dimension patterns: customers, employees, products and services
When & Where dimension patterns: dates, times and locations
Why & How dimension patterns: cause and effect
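As a taste of where the outline above leads, here is a small, hypothetical star schema sketch in Python using sqlite3. It maps some of the 7Ws onto dimensions (who, what, when) and onto the measures of a fact table (how many, how much); all table and column names are invented for illustration and are not taken from the course material.

```python
import sqlite3

# Illustrative star schema for a "customer buys product" business event.
ddl = """
CREATE TABLE dim_customer (customer_key INTEGER PRIMARY KEY, name TEXT, segment TEXT);
CREATE TABLE dim_product  (product_key  INTEGER PRIMARY KEY, name TEXT, category TEXT);
CREATE TABLE dim_date     (date_key     INTEGER PRIMARY KEY, full_date TEXT, month TEXT);
CREATE TABLE fact_sales (
    customer_key INTEGER REFERENCES dim_customer(customer_key),  -- who
    product_key  INTEGER REFERENCES dim_product(product_key),    -- what
    date_key     INTEGER REFERENCES dim_date(date_key),          -- when
    quantity     INTEGER,   -- how many
    revenue      REAL       -- how much
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(ddl)

# A typical dimensional query: slice a measure by dimension attributes.
rows = conn.execute("""
    SELECT d.month, c.segment, SUM(f.revenue)
    FROM fact_sales f
    JOIN dim_date d     ON f.date_key = d.date_key
    JOIN dim_customer c ON f.customer_key = c.customer_key
    GROUP BY d.month, c.segment
""").fetchall()
print(rows)  # empty until facts are loaded, but the query shape is the point
```

The design choice worth noticing is that every fact row is just foreign keys plus measures; all descriptive context lives in the dimensions, which is what makes the schema easy for stakeholders to reason about.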
Supervised learning solves modern analytics challenges and drives informed organizational decisions. Although the predictive power of machine learning models can be very impressive, there is no benefit unless they inform value-focused actions. Models must be deployed in an automated fashion to continually support decision making for residual impact. And while unsupervised methods open powerful analytic opportunities, they do not come with a clear path to deployment. This course will clarify when each approach best fits the business need and show you how to derive value from both approaches.
Regression, decision trees, neural networks – along with many other supervised learning techniques – provide powerful predictive insights when historical outcome data is available. Once built, supervised learning models produce a propensity score which can be used to support or automate decision making throughout the organization. We will explore how these moving parts fit together strategically.
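As an illustration of that deployment pattern, the following minimal sketch (using scikit-learn on synthetic data) trains on historical outcomes and then turns each new record’s propensity score into an action. The features, the 0.7 threshold and the action labels are all invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic historical data: three features and a known binary outcome.
rng = np.random.default_rng(0)
X_hist = rng.normal(size=(500, 3))                       # e.g. tenure, usage, complaints
y_hist = (X_hist[:, 0] + X_hist[:, 1] > 0).astype(int)   # e.g. churned or not

model = LogisticRegression().fit(X_hist, y_hist)

# Deployment: score new records and route high-propensity cases to action.
X_new = rng.normal(size=(5, 3))
propensity = model.predict_proba(X_new)[:, 1]            # P(outcome = 1)
for score in propensity:
    action = "offer retention deal" if score > 0.7 else "no action"
    print(f"propensity={score:.2f} -> {action}")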
Unsupervised methods like cluster analysis, anomaly detection, and association rules are exploratory in nature and don’t generate a propensity score in the same way that supervised learning methods do. So how do you take these models and automate them in support of organizational decision-making? This course will show you how.
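One common answer, sketched below on synthetic data, is to freeze a fitted clustering, have analysts attach a business rule to each cluster once, and then assign new records to clusters at scoring time. This is a generic pattern rather than necessarily the specific approach taught in the course; the cluster rules are invented.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
X_hist = rng.normal(size=(300, 2))

# Fit once on historical data, then freeze the model for deployment.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=1).fit(X_hist)

# Human-in-the-loop step: interpret each cluster once, then automate.
cluster_rule = {0: "standard handling",
                1: "manual fraud review",
                2: "fast-track approval"}

X_new = rng.normal(size=(4, 2))
for cluster in kmeans.predict(X_new):   # deployment-time cluster assignment
    print(f"cluster={cluster} -> {cluster_rule[int(cluster)]}")
```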
This course will demonstrate a variety of examples starting with the exploration and interpretation of candidate models and their applications. Options for acting on results will be explored. You will also observe how a mixture of models including business rules, supervised models, and unsupervised models are used together in real world situations for various problems like insurance and fraud detection.
Analytic Practitioners, Data Scientists, IT Professionals, Technology Planners, Consultants, Business Analysts, Analytic Project Leaders.
1. Model Development Introduction
Current Trends in AI, Machine Learning and Predictive Analytics
2. Strategic and Tactical Considerations in Binary Classification
3. Data Preparation for Supervised Models
4. The Tasks of the Model Phase
5. What is Unsupervised Learning?
6. Wrap-up and Next Steps
Limited time?
Can you only attend one day? You can attend only the first or only the second conference day, or of course the full conference. The presentations by our speakers have been selected in such a way that they can stand on their own. This enables you to attend the second conference day even if you did not attend the first (or the other way around).