Build Schedule

Sessions Found: 29
I don't need machine learning to predict the outcome of you learning Python for machine learning: it will improve your career. In this session we will cover the basics of machine learning, the tools to start working with ML in Python, and touch on how to deploy a model with Docker and Flask.
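To give a flavour of the "basics of machine learning in Python" the abstract promises, here is a minimal sketch (not taken from the session's materials) of a one-feature least-squares fit in pure Python; real projects would reach for a library such as scikit-learn:

```python
# Minimal sketch: ordinary least squares for a single feature,
# in pure Python (illustrative only; real ML work would use a library).

def fit_line(xs, ys):
    """Return (slope, intercept) minimizing squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

def predict(model, x):
    slope, intercept = model
    return slope * x + intercept

model = fit_line([1, 2, 3, 4], [3, 5, 7, 9])  # underlying rule: y = 2x + 1
print(predict(model, 10))  # → 21.0
```

Deploying such a model, as the abstract suggests, would then mean wrapping `predict` in a web endpoint and containerising it.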
Speaker:

Accompanying Materials:

No material found.

Session Type:
Regular Session (60 minutes)

Track:
Business Intelligence & Data Science

Level: Beginner

Session Code:

Date: July 14

Time: 11:00 AM - 12:00 PM

Room: Theatre4

In this session we will take a technical overview of the foundations and design goals of Azure Cosmos DB. Its elastic scaling, high availability and predictable performance suit web, mobile and globally distributed applications, and its NoSQL capabilities can elevate ease of development in ways every developer, architect and DBA should know about.
Speaker:

Session Type:
Regular Session (60 minutes)

Track:
Azure & Cloud Data Services

Level: Intermediate

Session Code:

Date: July 14

Time: 8:30 AM - 9:30 AM

Room: Theatre2

Azure Data Factory version 2 (ADFv2) arrived in September 2017 with a bunch of new concepts and features to support our Azure data integration pipelines. In this session, we'll update your ADFv1 knowledge and start to understand the true nature of scale-out control flows and data flows. What's the integration runtime? Can we easily lift and shift our beloved SSIS packages into the cloud? How do we embed expressions to achieve dynamic activity executions? Do we still need SSIS with the ADF platform as a service? The answers to all these questions and more in this demo-packed session. An awareness of Azure Data Factory v1 is recommended before attending this session.
Speaker:

Accompanying Materials:

No material found.

Session Type:
Regular Session (60 minutes)

Track:
Business Intelligence & Data Science

Level: Intermediate

Session Code:

Date: July 14

Time: 1:30 PM - 2:30 PM

Room: Theatre3

SQL Azure, Microsoft's Platform as a Service solution, is compelling for many who don't want to manage their own highly available SQL implementation. SQL Azure, however, does not replicate all of the services of on-premises SQL Server, and one of those missing is the SQL Agent. This session looks at what alternatives exist for running and managing SQL jobs in Azure without SQL Agent. In particular we will focus on Azure Automation and Azure Functions. The presentation will include a brief overview of the two services and how they are applicable to SQL workloads, followed by a demo of creating and running a SQL job.
Speaker:

Accompanying Materials:

Session Type:
Regular Session (60 minutes)

Track:
Azure & Cloud Data Services

Level: Intermediate

Session Code:

Date: July 14

Time: 11:00 AM - 12:00 PM

Room: Theatre2

This session will cover the basics of dynamic SQL: how, why and when you may wish to use it, with demos of use cases and scenarios where it can really save the day (trying to perform a search with a variable number of optional search terms, anyone?). We will also cover the performance and security impacts, touching on the effect on query plans, index usage and security (SQL injection!), along with some best practices.
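The optional-search-terms scenario the abstract teases can be sketched outside T-SQL too. The illustrative Python helper below (the table and column names are hypothetical) shows the core idea: build the query string dynamically but keep user values as parameters, the same discipline that `sp_executesql` enables in T-SQL:

```python
# Sketch: build a WHERE clause from optional search terms while keeping
# user values out of the SQL string. Column names come from a fixed
# whitelist, never from user input, which guards against SQL injection.

def build_search(filters):
    """filters: dict of column -> value; returns (sql, params)."""
    allowed = {"name", "city", "status"}  # hypothetical whitelist
    clauses, params = [], []
    for col, val in filters.items():
        if col not in allowed:
            raise ValueError(f"unknown column: {col}")
        clauses.append(f"{col} = ?")   # the value stays a placeholder
        params.append(val)
    where = " AND ".join(clauses) if clauses else "1 = 1"
    return f"SELECT * FROM customers WHERE {where}", params

sql, params = build_search({"city": "Leeds", "status": "active"})
print(sql)     # SELECT * FROM customers WHERE city = ? AND status = ?
print(params)  # ['Leeds', 'active']
```

Passing the parameter list separately to the database driver is what keeps the dynamically built query safe.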
Speaker:

Session Type:
Regular Session (60 minutes)

Track:
Database Administration & Development

Level: Beginner

Session Code:

Date: July 14

Time: 8:30 AM - 9:30 AM

Room: Theatre4

Blockchain is a transformational technology with the potential to extend digital transformation beyond an organization and into the processes it shares with suppliers, customers, and partners. 
What is blockchain? What can it do for your organisation? How can you manage a blockchain implementation? How does it work in Azure?
Join this session to learn about blockchain and see it in action. We will also discuss the use cases for blockchain, and whether it is here to stay.
Speaker:

Accompanying Materials:

No material found.

Session Type:
Regular Session (60 minutes)

Track:
Business Intelligence & Data Science

Level: Beginner

Session Code:

Date: July 14

Time: 1:30 PM - 2:30 PM

Room: Theatre2

The software development landscape is changing. More and more, there is an increased demand for AI and cloud solutions.  As a user buying cinema tickets online, I would like to simply ask "I want to buy two cinema tickets for the movie Dunkirk, tomorrow's viewing at 1pm" instead of manually following a pre-defined process. 

In this session, we will learn how to build, debug and deploy a chatbot using the Azure Bot Service. We will enrich it using the Microsoft Cognitive Services suite to achieve human-like interactions.

Will it pass the Turing test? No, but we can extend the bot service using Machine Learning (LUIS), APIs (Web Apps) and Workflows (Logic Apps).
Speaker:

Session Type:
Regular Session (60 minutes)

Track:
Business Intelligence & Data Science

Level: Beginner

Session Code:

Date: July 14

Time: 9:45 AM - 10:45 AM

Room: Theatre2

Learn how to build and deploy cubes to Azure Analysis Services, including partition management, scale-out for performance, security, setting up processing so that it doesn't impact query performance, monitoring, connecting to Power BI and much more.
Speaker:

Session Type:
Regular Session (60 minutes)

Track:
Business Intelligence & Data Science

Level: Intermediate

Session Code:

Date: July 14

Time: 9:45 AM - 10:45 AM

Room: Theatre3

In this session we look at the functionality built into SQL Server Management Studio (SSMS) and Azure SQL Database to classify your database.
I'll run through the need for data classification in general and why it's important for GDPR. We will then run through classifying your data and how to report on it.
Speaker:

Accompanying Materials:

No material found.

Session Type:
Regular Session (60 minutes)

Track:
Database Administration & Development

Level: Intermediate

Session Code:

Date: July 14

Time: 11:00 AM - 12:00 PM

Room: Theatre1

As a SQL DBA you want to know that your SQL Server estate is compliant with the rules that you have set up. Now there is a simple method to set this up using PowerShell, and you can get the results in Power BI or have a report emailed to you on a schedule. Details such as:

How long since your last backup?
How long since your last DBCC Check?
Are your Agent Operators Correct?
Is AutoClose, AutoShrink, Auto Update Stats set up correctly?
Is DAC Allowed?
Are your file growth settings correct, what about the number of VLFs?
Is your latency, TCP Port, PS remoting as you expect?
Is Page Verify, Data Purity, Compression correctly set up?

and many more checks (even your own) can be achieved using the dbachecks PowerShell module brought to you by the dbatools team.

Join one of the founders of the module, Rob Sewell MVP, and he will show you how easy it is to use this module and release time for more important things, whilst keeping the confidence that your estate is as you would expect it.
Speaker:

Session Type:
Regular Session (60 minutes)

Track:
Database Administration & Development

Level: Intermediate

Session Code:

Date: July 14

Time: 8:30 AM - 9:30 AM

Room: Theatre5

Until SQL Server 2016, the Query Optimizer and the Execution Engine were strictly separated. The Query Optimizer produces an execution plan that, based on statistics and estimates, should be fast. That execution plan is then faithfully executed by the Execution Engine, even if reality turns out to be different from expectations.

SQL Server 2017 changes this! Three new features now allow execution plans to adapt to reality. Memory Grant Feedback increases or decreases assigned memory based on past experience. The Adaptive Join operator allows the optimizer to create two alternative plans, the best of which will be decided at execution time. And with Interleaved Execution, parts of the plan are even completely recompiled mid-execution, with much better cardinality estimates.

If you are more interested in how all this ACTUALLY works than in shiny marketing slides, come to this session. We will spend the full 60 minutes knee-deep in execution plan internals!
Speaker:

Session Type:
Regular Session (60 minutes)

Track:
Database Administration & Development

Level: Advanced

Session Code:

Date: July 14

Time: 9:45 AM - 10:45 AM

Room: Theatre4

It’s an age-old problem: developers want prod data for dev and test purposes.

It helps them to write better code and to test it effectively. Self-service access to usable test-data aligns well with DevOps principles that encourage teams to adopt a shift-left mentality to testing.

Unfortunately, in the age of data breaches and the GDPR it’s simply illegal to give developers access to some types of sensitive production data.

So what do you do?

In this session I’ll talk about the GDPR, anonymisation, pseudonymisation and 5 techniques you can use to provide appropriate data that is as “production-like” as possible (within the legal and technical constraints). I’ll demo these techniques both in raw T-SQL and using some of the Microsoft and third party tools that are available to make the task easier.

After this session you’ll be equipped to discuss the problem with your colleagues in an informed manner and you’ll be able to suggest several solutions and their relative pros and cons.
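To make one of the techniques in this family concrete, here is an illustrative Python sketch (not taken from the session) of pseudonymisation by keyed hashing: the same input always maps to the same token, but the original cannot be recovered without the secret. The key shown is a placeholder; a real implementation would manage the secret carefully:

```python
import hashlib
import hmac

# Sketch: pseudonymise an identifier with a keyed hash (HMAC-SHA256).
# The key below is a hypothetical placeholder, not a recommendation --
# in practice it would live in a secrets store, not in code.

SECRET_KEY = b"store-me-in-a-vault-not-in-code"

def pseudonymise(value: str) -> str:
    """Deterministic, irreversible token for a sensitive value."""
    digest = hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]  # truncated for readability

token = pseudonymise("jane.doe@example.com")
assert token == pseudonymise("jane.doe@example.com")  # deterministic
assert token != pseudonymise("john.doe@example.com")  # inputs differ
```

Because the mapping is deterministic, joins between pseudonymised tables still work, which is one reason this technique tends to keep test data "production-like".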
Speaker:

Accompanying Materials:

No material found.

Session Type:
Regular Session (60 minutes)

Track:
Database Administration & Development

Level: Intermediate

Session Code:

Date: July 14

Time: 1:30 PM - 2:30 PM

Room: Theatre5

Using Extended Properties, you can provide data model documentation directly from your solutions for SSAS and Power BI files. We will cover how and where you add the properties, how you can access them using PowerShell, and how to extract and display them using good old SSRS.
Speaker:

Accompanying Materials:

No material found.

Session Type:
Regular Session (60 minutes)

Track:
Business Intelligence & Data Science

Level: Intermediate

Session Code:

Date: July 14

Time: 4:00 PM - 5:00 PM

Room: Theatre2

PowerApps is a relatively new technology released within the Azure stack. Its primary focus is to empower technical or non-technical developers to create quick and simple custom apps. These can be used as standalone apps or embedded within websites and mobile apps. Power BI, on the other hand, is now a mature product and needs little introduction.

What if we could use PowerApps and Power BI together?  Can we embed a PowerApp into a Dashboard and give the end user the ability to write back data to a visual?  Come along to my session to find out.

The 1-hour session includes other technologies such as Azure SQL Database, Common Data Service (CDS) and Flow.  In particular, I will demonstrate how you integrate these technologies with PowerApps and Power BI.  Whilst all levels of expertise are welcome, users require prior knowledge of Power BI Dashboards and PowerApps.
Speaker:

Session Type:
Regular Session (60 minutes)

Track:
Business Intelligence & Data Science

Level: Intermediate

Session Code:

Date: July 14

Time: 8:30 AM - 9:30 AM

Room: Theatre3

Using a database monitoring tool in conjunction with a DevOps process can help prevent shipping performance problems to production. In this session we will show how you can:

Use data captured from Database Monitoring tool to validate performance testing

Use a Database Monitoring tool as an input to release gates, to only release when defined performance metrics have been met.

Use VSTS extensions to visualise performance test failures

This allows us to automatically fail releases based on poor performance as captured by a baseline in downstream environments, and surface that information in the release tool so that a release manager can understand why a release in the pipeline was marked as failed. This will typically be things like, high CPU, logical IO, missing indexes, or deadlocks.
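The release-gate idea above can be sketched simply; the metric names and baseline thresholds below are hypothetical, not taken from any particular monitoring tool:

```python
# Sketch of a release gate: compare metrics captured by a monitoring tool
# against agreed baselines and fail the release if any are breached.
# Metric names and thresholds are illustrative placeholders.

BASELINES = {"cpu_percent": 80, "logical_reads": 1_000_000, "deadlocks": 0}

def evaluate_gate(metrics):
    """Return (passed, breaches) for a candidate release."""
    breaches = {k: v for k, v in metrics.items()
                if k in BASELINES and v > BASELINES[k]}
    return (not breaches, breaches)

ok, why = evaluate_gate({"cpu_percent": 92, "logical_reads": 5000, "deadlocks": 0})
print(ok, why)  # False {'cpu_percent': 92}
```

Returning the breaches alongside the pass/fail flag is what lets a release manager see *why* a release in the pipeline was marked as failed.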
Speaker:

Accompanying Materials:

No material found.

Session Type:
Regular Session (60 minutes)

Track:
Database Development & Continuous Integration

Level: Intermediate

Session Code:

Date: July 14

Time: 9:45 AM - 10:45 AM

Room: Theatre5

Implementing large-scale Data Analytics projects that utilize multiple technologies (Azure SQL, Data Warehouse, Data Lake, Data Factory, Power BI and more) is complex. Managing the teams that deliver these complex architecture designs can quickly become a challenge. Learn the key lessons from a real-world project that was delivered, and how to avoid the problem areas that can derail such projects.
Speaker:

Accompanying Materials:

No material found.

Session Type:
Regular Session (60 minutes)

Track:
Azure & Cloud Data Services

Level: Beginner

Session Code:

Date: July 14

Time: 11:00 AM - 12:00 PM

Room: Theatre3

The task seems easy: maintain a database project in the code repository, treat it as the master version, and deploy evenly and frequently. Simple? Seemingly. Things become more complex as the number of objects in the database grows, when instead of one database we have over a dozen, and when databases reference each other. And what about dictionary tables? Where do we keep them and how do we script them? Further issues arise when we want to control instance-level objects.
I will explain all these topics in a session focused on the practical aspects of working with Microsoft Visual Studio Data Tools.
Speaker:

Session Type:
Regular Session (60 minutes)

Track:
Database Development & Continuous Integration

Level: Intermediate

Session Code:

Date: July 14

Time: 4:00 PM - 5:00 PM

Room: Theatre5

You've just been given a server that is having problems and you need to diagnose it quickly. This session will take you through designing your own toolkit to help you quickly diagnose a wide array of problems. We will walk through scripts that will help you pinpoint various issues quickly and efficiently. This session will take you through:

What’s on fire? – These scripts will help you diagnose what’s happening right now
Specs – What hardware are you dealing with here (you’ll need to know this to make the appropriate decisions)?
Settings – are the most important settings correct for your workload?
Bottlenecks – We’ll see if there are any areas of the system that are throttling us.
By the end of this session you should know what you need in order to start building your own kit. This kit is designed to be your lifeline for fixing servers quickly and getting them working.

All the code we'll go through is either provided as part of this presentation or is open source/community code.
Speaker:

Session Type:
Regular Session (60 minutes)

Track:
Database Administration & Development

Level: Intermediate

Session Code:

Date: July 14

Time: 4:00 PM - 5:00 PM

Room: Theatre1

Technology changes quickly - patterns and approaches less so. As people move towards the cloud, there are clear benefits of adopting a polyglot cloud architecture employing a range of distributed components.

This session will take you through the pattern known as the Lambda architecture, a reference pattern for building data analytics systems that can handle any combination of data velocity, variety and volume. The session will outline the set of tools and integration points that can underpin the approach. Do you design real-time reporting systems? Or crunch petabytes of data? Perhaps you are adopting a cloud architecture and just want to handle anything the future throws at you? This session is for you.

We will follow the movement of data through batch and speed layers via Azure Data Lake Store & Analytics and Stream Analytics before considering the serving layer with Azure SQL Data Warehouse and downstream reporting tools.
Speaker:

Accompanying Materials:

No material found.

Session Type:
Regular Session (60 minutes)

Track:
Business Intelligence & Data Science

Level: Advanced

Session Code:

Date: July 14

Time: 4:00 PM - 5:00 PM

Room: Theatre3

The concurrency model of most relational database systems is defined by the ACID properties, but as they aim for ever-increasing transactional throughput, those rules are bent, ignored, or even broken.

In this session, we will investigate how SQL Server implements transactional durability in order to understand how Delayed Durability bends the rules to remove transactional bottlenecks and achieve improved throughput. We will take a look at how this can be used to complement In-Memory OLTP performance, and how it might impact or compromise other things.

Attend this session and you will be assimilated!
Speaker:

Accompanying Materials:

No material found.

Session Type:
Regular Session (60 minutes)

Track:
Database Administration & Development

Level: Intermediate

Session Code:

Date: July 14

Time: 1:30 PM - 2:30 PM

Room: Theatre1
