Sessions Found: 21
The constraint on everything in development and testing is database size. Storage is a bottleneck, and while faster SSD/flash is available, it is often too expensive given the size of many databases, especially non-production copies. More importantly, it takes a lot of time to push terabytes around, and time itself is expensive. The old joke about "good, fast, and cheap -- pick any two" is very true with data.

Because providing a full database for each developer on each project seems unrealistic when each copy might require terabytes of storage, for decades teams have limited themselves to working in shared non-production environments that are refreshed only every few months at best. Conflicts occur, quality suffers, and things move slowly.

Come learn why data virtualization is the solution to a problem everyone knows.

Good, fast, and cheap -- have all three with data virtualization. Clone things easily and quickly, and remove the biggest constraint on development and testing.
Speaker:

Session Type:
Regular Session (60 minutes)

Track:
Database administration and performance management

Level: Intermediate

Session Code:

Date: June 02

Time: 11:00 AM - 12:00 PM

Room: Montreal A - 516

With the vast amount of change that occurs in our daily business environments, it becomes more and more difficult to achieve our corporate goals without some 'lighthouse' to guide our way. Data mining (while not a panacea to resolve or 'control' the effects of these changes) can provide us with statistical trends by analyzing our data and highlighting probable outcomes.

In this hands-on beginners' presentation we will look at Microsoft SQL Server's data mining capabilities and will discuss:

1)  Defining what questions we want answered and how to go about this in an effective and efficient manner.

2)  Creating the data model.

3)  How to gather the necessary data, discussing the training and testing aspect.

4)  Processing the model.

5)  Extracting information from our finished model, discussing the implications of this information.
Speaker:

Session Type:
Regular Session (60 minutes)

Track:
Data warehouse and BI delivery

Level: Intermediate

Session Code:

Date: June 02

Time: 11:00 AM - 12:00 PM

Room: Laval - 510

Graph databases have been around in various forms for a long time, in a wide range of data analysis from science and technology to business scenarios. However, the pool of T-SQL experts with graph experience is still small, especially when it comes to the Gremlin traversal query language. That is fully understandable. This session is not aimed at encouraging users to stop using T-SQL, but rather at giving them a new blade on their data analysis Swiss Army knife, so to speak. The session will start with a quick overview of the new Azure Cosmos DB Graph API options. It will then cover the main comparisons between T-SQL and Gremlin, and the reasons why graph may be more beneficial. It will then give a basic introduction to Gremlin traversal topology in a real context. Finally, a demo will cover how to create a new Azure Cosmos DB Graph API project from Visual Studio Code:
1. Generating vertices and edges through a .NET Core 2.0 console application.
2. Consuming Graph API data queries in a .NET Core 2.0 web client.
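As a taste of the T-SQL vs. Gremlin comparison the session draws, here is a hedged sketch of a traversal over a hypothetical social graph; the 'person' label, 'knows' edge, and property names are assumptions for illustration, not from the session:

```gremlin
// Find the names of everyone that 'alice' knows.
// Vertex label 'person' and edge label 'knows' are illustrative only.
g.V().hasLabel('person').
  has('name', 'alice').
  out('knows').
  values('name')
```

In T-SQL terms, this plays the role of a self-join through a relationship table, expressed instead as a walk along edges.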
Speaker:

Accompanying Materials:

Session Type:
Regular Session (60 minutes)

Track:
Database Development and TSQL

Level: Advanced

Session Code:

Date: June 02

Time: 9:50 AM - 10:50 AM

Room: St-Laurent - 511

The power of cloud storage and compute power has made data warehousing possible for businesses of all sizes. What was once a large capital expenditure and multi-year implementation can now be deployed and ready to use within minutes and allow any organization to collect, query and discover insights from their structured data sources. 

With a full T-SQL interface and compatibility with the rest of the Microsoft data stack, Azure SQL Data Warehouse can fit transparently into your business data strategy and leverage existing, familiar development and management skills.
 
In this session we will look at the main concepts of the Azure SQL Data Warehouse service, how it differs from SQL Server, and the advantages it provides over an on-premises solution.
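To make the "differs from SQL Server" point concrete, here is a minimal sketch of warehouse DDL; the table and column names are hypothetical, but the DISTRIBUTION clause is the kind of option that has no on-premises SQL Server equivalent:

```sql
-- Fact table distributed across compute nodes by hash of SaleKey;
-- clustered columnstore is the typical storage choice for warehousing.
CREATE TABLE dbo.FactSales
(
    SaleKey int   NOT NULL,
    Amount  money NOT NULL
)
WITH
(
    DISTRIBUTION = HASH (SaleKey),
    CLUSTERED COLUMNSTORE INDEX
);
```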
Speaker:

Session Type:
Regular Session (60 minutes)

Track:
Data warehouse and BI delivery

Level: Beginner

Session Code:

Date: June 02

Time: 1:15 PM - 2:15 PM

Room: St-Laurent - 511

Many companies start off with a simple data mart for reporting. As the company grows, users become dependent on the data mart for monitoring and making decisions on Key Performance Indicators (KPI).

Unexpected information growth in your data mart may degrade reporting performance. In short, your users will be lining up at your cube for their daily reports.

How do you reduce the size of your data mart and speed up data retrieval?

This presentation will review the following techniques to fix your woes.

Coverage:

1 – What is horizontal partitioning?
2 – Database sharding for daily information.
3 – Working with files and file groups.
4 – Partitioned views for performance.
5 – Table and index partitions.
6 – Row data compression.
7 – Page data compression.
8 – Programming a sliding window.
9 – What is different in Azure SQL Database?
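As a hedged sketch of the table partitioning and page compression techniques from the list above, assuming a hypothetical dbo.Sales fact table (names and boundary values are illustrative only):

```sql
-- Monthly partition function and scheme; RANGE RIGHT puts each boundary
-- date into the partition to its right.
CREATE PARTITION FUNCTION pfMonthly (date)
AS RANGE RIGHT FOR VALUES ('2018-01-01', '2018-02-01', '2018-03-01');

CREATE PARTITION SCHEME psMonthly
AS PARTITION pfMonthly ALL TO ([PRIMARY]);

-- The fact table is created directly on the partition scheme.
CREATE TABLE dbo.Sales
(
    SaleDate date  NOT NULL,
    Amount   money NOT NULL
) ON psMonthly (SaleDate);

-- Page compression applied to every partition.
ALTER TABLE dbo.Sales REBUILD PARTITION = ALL
    WITH (DATA_COMPRESSION = PAGE);
```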
Speaker:

Session Type:
Regular Session (60 minutes)

Track:
Data warehouse and BI delivery

Level: Advanced

Session Code:

Date: June 02

Time: 11:00 AM - 12:00 PM

Room: St-Laurent - 511

Would you like to use Azure SQL Database (PaaS) and keep its data in sync with your datacenter? Then this session is for you! We will walk through the database setup and how to use Data Sync to make sure data between your datacenter and Azure stays synchronized. Data Sync allows for bidirectional transfer, meaning you can move data from your local DB to the cloud and vice versa.
Speaker:

Accompanying Materials:

Session Type:
Regular Session (60 minutes)

Track:
Database administration and performance management

Level: Intermediate

Session Code:

Date: June 02

Time: 9:50 AM - 10:50 AM

Room: Laval - 510

Artificial Intelligence (AI) is bringing big changes to the way people and businesses relate to technology.
Like the arrival of the personal computer, cloud computing, and smartphones, AI is a technology that gets you where you are going much faster, more intuitively, and more intelligently.
In this session we will discuss the basics of AI and how we can apply it in our business using Azure Cognitive Services.
Speaker:

Session Type:
Regular Session (60 minutes)

Track:
Data warehouse and BI delivery

Level: Beginner

Session Code:

Date: June 02

Time: 8:40 AM - 9:40 AM

Room: Montreal B - 517

Azure Databricks is a fast, easy, and collaborative Apache® Spark™-based analytics platform optimized for Azure. Designed in collaboration with the founders of Apache Spark, Azure Databricks combines the best of Databricks and Azure to help you accelerate innovation with one-click setup, streamlined workflows, and an interactive workspace that enables collaboration between data scientists, data engineers, and business analysts.
As a first-class Azure service, you automatically benefit from native integration with other Azure services such as Power BI, SQL Data Warehouse, and Cosmos DB, as well as from enterprise-grade Azure security, including Active Directory integration, compliance, and enterprise-grade SLAs.
Azure Databricks is well designed for modernizing your existing data warehouse solutions for querying and reporting on data, capitalizing on Spark-based analytics for advanced analytics workloads, and running real-time analytics on streaming data.
During this presentation, you will discover what A
Speaker:

Accompanying Materials:

Session Type:
Regular Session (60 minutes)

Track:
Data warehouse and BI delivery

Level: Beginner

Session Code:

Date: June 02

Time: 8:40 AM - 9:40 AM

Room: Montreal A - 516

Recently, following a hardware failure, I had to migrate a publisher's databases and the distribution database to another server. Since these databases are very large and I did not have a failover cluster available on the distributor, I did not want to have to reinitialize transactional replication, in order to avoid service interruptions.
In this session, I will explain how I carried out this migration. If you are not familiar with replication, this session will help you understand how it works and what the different agents do. If you have mastered replication, you may discover a new migration method.
Speaker:

Session Type:
Regular Session (60 minutes)

Track:
Database administration and performance management

Level: Advanced

Session Code:

Date: June 02

Time: 9:50 AM - 10:50 AM

Room: Montreal A - 516

SQL Server, the best-selling database by number of licenses, remains relatively easy to administer while offering first-rate performance and excellent reliability. How many "outages" have you had over the last 5 or 10 years? Careful, I am not saying you can do without high-availability systems, FCIs, or availability groups. You simply need to determine the RPO and RTO, and put a price on server downtime, to decide whether a disaster recovery strategy is enough or whether the SQL resources must be made highly available.
In the end, the software is reliable and only very rarely causes errors or "crashes".
On the other hand, day to day, that does not mean there will be no problems on the instance, mainly around connections, permissions, or performance.
In this session, we will cover the basics of SQL Server diagnostics and troubleshooting.
Speaker:

Session Type:
Regular Session (60 minutes)

Track:
Database administration and performance management

Level: Intermediate

Session Code:

Date: June 02

Time: 1:15 PM - 2:15 PM

Room: Montreal A - 516

Leverage Open Source DB as a Service from Microsoft: Introducing MySQL and PostgreSQL on Azure

PostgreSQL and MySQL are the most popular open source relational database engines and are being widely embraced by developers. The Azure cloud is a first-class platform for open source technologies that lets you bring the tools you love and the skills you already have, and deploy any application. Learn how Azure Database for PostgreSQL and Azure Database for MySQL can help you achieve high availability, security, and scale on the fly.
Speaker:

Accompanying Materials:


Session Type:
Regular Session (60 minutes)

Track:
Database Development and TSQL

Level: Beginner

Session Code:

Date: June 02

Time: 2:25 PM - 3:25 PM

Room: Montreal B - 517

Platform as a Service offerings now enable developers to roll out their database infrastructure in minutes with minimal management overhead. However, this flexibility also comes with the challenges of picking the right tool, on the right provider, and with the proper expectations. All of these are new challenges entering the field of responsibility of the SQL DBA professional.

In this session, we'll map cloud offerings, mainly Azure, to the features SQL Server DBAs currently use, cover use cases for each one based on our client projects, and explain how they differ from the SQL Server features we are familiar with. The goal is to get SQL Server DBAs up to speed on the latest Azure offerings and the Cortana Intelligence ecosystem. If time allows, we will also show some offerings from Google Cloud.
Speaker:

Session Type:
Regular Session (60 minutes)

Track:
Database administration and performance management

Level: Intermediate

Session Code:

Date: June 02

Time: 2:25 PM - 3:25 PM

Room: Montreal A - 516

You created a wonderful Power BI report, but when you open it you wait too long. Changing a slicer selection is also slow. Where should you start analyzing the problem? What can you do to optimize performance?
This session will guide you in analyzing the possible reasons for a slow Power BI report. By using Task Manager and DAX Studio, you will be able to determine whether you should change the report layout, or if there is something in DAX formulas or in the data model that is responsible for the slow response.
At the end of this session, you will understand how to locate a performance bottleneck in a Power BI report, so you will focus your attention on the biggest issue.
Speaker:

Session Type:
Regular Session (60 minutes)

Track:
Data warehouse and BI delivery

Level: Intermediate

Session Code:

Date: June 02

Time: 11:00 AM - 12:00 PM

Room: Montreal B - 517

In this session, we will survey the full range of possibilities offered by Power BI, from importing or connecting to source data, to consuming reports on the web service or on a local server (Power BI Report Server). More specifically, we will break down the options for:

 - importing and connecting to source data
 - transforming data
 - modeling data
 - exploring data
 - visualizing data
 - publishing and sharing reports and dashboards
 - consuming reports and dashboards
 - analyzing shared data

Along the way, we will also illustrate certain concepts through demos, so that this one-hour session delivers high-value content while remaining dynamic and entertaining.
Speaker:

Accompanying Materials:

Session Type:
Regular Session (60 minutes)

Track:
Data warehouse and BI delivery

Level: Beginner

Session Code:

Date: June 02

Time: 2:25 PM - 3:25 PM

Room: St-Laurent - 511

Power Query is a great tool for extracting, transforming, and loading data. It has an intuitive interface that allows you to create queries without having to worry about writing code. Under the covers, Power Query creates the M code that gets executed. In this session we will pull back the covers to reveal and understand the M code that is being created. This will give you greater insight into how to debug your queries. In addition, we will look at creating advanced queries that go beyond what is available in the Power Query user interface.
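As a small taste of what that generated M looks like, here is a hedged sketch assuming a hypothetical Sales.csv file; the path and column names are illustrative, not from the session:

```m
// Load a CSV, promote headers, type the Amount column, then filter rows --
// the same steps the Power Query UI would record for you.
let
    Source   = Csv.Document(File.Contents("C:\Data\Sales.csv"),
                            [Delimiter = ",", Encoding = 65001]),
    Promoted = Table.PromoteHeaders(Source, [PromoteAllScalars = true]),
    Typed    = Table.TransformColumnTypes(Promoted, {{"Amount", type number}}),
    Filtered = Table.SelectRows(Typed, each [Amount] > 100)
in
    Filtered
```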
Speaker:

Accompanying Materials:

Session Type:
Regular Session (60 minutes)

Track:
Data warehouse and BI delivery

Level: Intermediate

Session Code:

Date: June 02

Time: 3:35 PM - 4:35 PM

Room: Montreal B - 517

How often have you been told that an application is running "too slow"?  This statement is the cause of a great deal of investigation, frustration, and dead-ends for database professionals.

The problem won't always be a bad query, but when it is, knowing how to dive in, diagnose its performance, and resolve the situation efficiently will turn a potentially frustrating situation into a fun one!  Using that knowledge in development to prevent future performance issues will improve script quality and application design, while making your future self less burdened by performance emergencies.

This is an opportunity to identify common query mistakes and learn a variety of ways in which we can solve them.  This discussion will include query rewrites, indexing, statistics, database design, monitoring, execution plans, and more!

Demos of poor-performing queries will be provided to illustrate key optimization techniques, design considerations, and the tools you need to fix them.  Fast.
Speaker:

Session Type:
Regular Session (60 minutes)

Track:
Database administration and performance management

Level: Intermediate

Session Code:

Date: June 02

Time: 3:35 PM - 4:35 PM

Room: Montreal A - 516

DevOps is the new hot topic for IT, but only answers part of the problem. This session and demo will discuss why data creates continual friction in the DevOps environment and how it must be incorporated into the solution.
Why this solution is required:
- Data is getting bigger and more complex.
- Security concerns around critical data become more evident every day.
- Data is created in silos across many sources, yet consumed in just as many locations, including on-premises and in the cloud.

We'll discuss the tech, the politics and the challenges of bringing data into DevOps and how to do so more successfully with culture changes, tools, scripting, and virtualization.

Takeaways from this session:
1. Learn the five principles of DataOps.
2. See how embracing a dynamic data platform can eliminate challenges and provide automation.
3. Learn the difference between containers, packages, and data pods.
4. Learn how to bridge the gap between data and people, taking culture clashes out of the picture.
Speaker:

Accompanying Materials:


Session Type:
Regular Session (60 minutes)

Track:
Database administration and performance management

Level: Beginner

Session Code:

Date: June 02

Time: 3:35 PM - 4:35 PM

Room: St-Laurent - 511

Before any significant data analysis can take place, the data often needs to be transformed, aggregated, and combined. This is often referred to as the ETL (Extract, Transform, and Load) process. Power Query is an excellent tool in the Microsoft self-service BI stack that allows business users to discover, combine, and refine data before loading it into a Power Pivot model for further analysis. In addition, Microsoft has made connecting to a wide variety of sources, including relational, structured, and semi-structured data, a consistent, intuitive experience. This session guides you through using Power Query to extract, transform, and load data from various sources into a Power Pivot model. In addition, we will look at the M language created by the tool and at some advanced queries you can create using M.
Speaker:

Accompanying Materials:

Session Type:
Regular Session (60 minutes)

Track:
Data warehouse and BI delivery

Level: Beginner

Session Code:

Date: June 02

Time: 1:15 PM - 2:15 PM

Room: Montreal B - 517

Reporting requests that are required 'yesterday or sooner' often necessitate working more efficiently and effectively. We have all been through this at one time or another.
In this hands-on presentation we will look at some of the more challenging techniques for extracting data from our multidimensional and tabular models, in addition to our data mining models.
We will use OPENQUERY() and linked servers as a means to extract data, and look at the way these two concepts can help us extract data using MDX, DMX, and DAX expressions, while maintaining the flexibility and ability to use all those wonderful techniques available in T-SQL.
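The OPENQUERY() pattern described above can be sketched as follows; the linked server name SSAS_TAB and the model it queries are hypothetical:

```sql
-- Run a DAX query against a tabular model through a linked server and
-- consume the result as an ordinary T-SQL rowset. Note the doubled single
-- quotes around 'Date' inside the string literal.
SELECT *
FROM OPENQUERY(SSAS_TAB,
    'EVALUATE SUMMARIZECOLUMNS(
         ''Date''[Calendar Year],
         "Total Sales", SUM(Sales[Amount])
     )');
```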
Speaker:

Session Type:
Regular Session (60 minutes)

Track:
Data warehouse and BI delivery

Level: Intermediate

Session Code:

Date: June 02

Time: 8:40 AM - 9:40 AM

Room: St-Laurent - 511

Every Power BI model has dates and needs calculations over dates to aggregate and compare data: year-to-date, same-period-last-year, moving averages, and so on. Quick measures and DAX functions can help, but how do you manage holidays, working days, week-based fiscal calendars, and other non-standard calculations?
This session provides best practices for correctly shaping a data model and implementing time intelligence calculations, using both built-in DAX functions and custom DAX calculations for more complex and non-standard requirements.
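As a hedged sketch of the built-in function side, assuming a hypothetical Sales table and a Date table marked as a date table (all names are illustrative):

```dax
-- Year-to-date aggregation over the date column.
Sales YTD :=
CALCULATE ( SUM ( Sales[Amount] ), DATESYTD ( 'Date'[Date] ) )

-- Same-period-last-year comparison for the same base expression.
Sales PY :=
CALCULATE ( SUM ( Sales[Amount] ), SAMEPERIODLASTYEAR ( 'Date'[Date] ) )
```

Holidays and week-based fiscal calendars are exactly where these built-ins stop helping, which is where the custom calculations in the session come in.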
Speaker:

Session Type:
Regular Session (60 minutes)

Track:
Data warehouse and BI delivery

Level: Intermediate

Session Code:

Date: June 02

Time: 9:50 AM - 10:50 AM

Room: Montreal B - 517
