Build Schedule

Sessions Found: 35
We have all now had a play around with Docker and containers, or at least heard about them.

This demo-heavy session will walk through some of the challenges around managing container environments and how Kubernetes orchestration can help alleviate some of the pain points.

We will talk about what Kubernetes is and how it works, and through the use of demos we will:

- Highlight some of the issues with getting set up (specifically Minikube on Ubuntu)
- Deploy and update containers in Kubernetes (on-premises as well as AKS, using Azure DevOps); a minimal deployment sketch follows this list
- Persist data
- Show how to avoid making the same mistakes I have
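
To give a flavour of the deployment step, here is a minimal sketch (not the speaker's material) that creates a Deployment with the official Kubernetes Python client against a local cluster such as Minikube; the image, labels, replica count and namespace are placeholder values.

```python
# Minimal sketch: create a two-replica Deployment with the Kubernetes Python client.
# All names, labels and the image are placeholders.
from kubernetes import client, config

config.load_kube_config()  # uses your local kubeconfig, e.g. the one Minikube writes

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="demo-web"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "demo-web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "demo-web"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="web",
                        image="nginx:1.25",  # placeholder image
                        ports=[client.V1ContainerPort(container_port=80)],
                    )
                ]
            ),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```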
Speaker:

Session Type:
Regular Session (60 minutes)

Track:
Enterprise Database Administration & Development

Level: Intermediate

Session Code:

Date: February 01

Time: 10:15 AM - 11:15 AM

Room: Spey

Data compliance in the modern technology landscape feels like a constantly moving target as more and different laws, rules and regulations are passed locally, nationally and internationally. The days when only some organizations or certain countries had to worry about data compliance are gone. It’s everyone’s problem.

However, it is possible to define a core set of processes that will help you assist your business, or government agency, in meeting these compliance requirements. This session will walk you through the 10 steps you need to implement in order to move your organization towards full compliance with any, or all, of the regulations we all now face. From identifying where your data lives to monitoring for compliance, and all the steps in between, you can meet this challenge.
Speaker:

Accompanying Materials:

No material found.

Session Type:
Regular Session (60 minutes)

Track:
Enterprise Database Administration & Development

Level: Intermediate

Session Code:

Date: February 01

Time: 1:30 PM - 2:30 PM

Room: Oban

How do wait stats show you that you have a locking issue?
This session will show you how to detect and view blocking and lock waits, and understand their cause.
We will take an extensive walkthrough of the different isolation levels and their respective benefits and drawbacks.
Finally, a quick, real-world list of suggestions on what you can do to solve some of the common issues I come across in my daily work.
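
As a taste of the detection side, the sketch below shows one common way to surface live blocking: querying sys.dm_exec_requests for requests with a non-zero blocking_session_id. It is illustrative only; the connection string is a placeholder and the session itself may use different tooling.

```python
# Sketch: list requests that are currently blocked, with their wait type and time.
# The connection string is a placeholder.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
    "DATABASE=master;Trusted_Connection=yes;"
)

sql = """
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,                      -- milliseconds
       r.command
FROM   sys.dm_exec_requests AS r
WHERE  r.blocking_session_id <> 0;       -- only requests that are being blocked
"""

for row in conn.cursor().execute(sql):
    print(f"session {row.session_id} blocked by {row.blocking_session_id} "
          f"({row.wait_type}, {row.wait_time} ms)")
```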
Speaker:

Session Type:
Regular Session (60 minutes)

Track:
Enterprise Database Administration & Development

Level: Intermediate

Session Code:

Date: February 01

Time: 2:45 PM - 3:45 PM

Room: Lomond

SQL Saturday Edinburgh (SQLSat927) GOLD sponsor session.
Speaker:

Accompanying Materials:

No material found.

Session Type:
Sponsors session (30 minutes)

Track:
Sponsors Session

Level: Beginner

Session Code:

Date: February 01

Time: 12:45 PM - 1:15 PM

Room: Jura

With ever-increasing complexity in data platform and application solutions, it is becoming ever more important to take people out of the loop when it comes to system provisioning. Infrastructure as code is the way forward, whether with Azure Resource Manager Templates, Google Deployment Manager, or AWS CloudFormation.

This is where Terraform from HashiCorp can step in: one solution with a provider model that interacts with Azure, AWS, Google, and others, meaning that you only need to learn one syntax. Add that to the automation potential and we have something that can really help get us down the road to infrastructure as code.

This session will take an introductory look at how infrastructure can be defined as code and be shipped to standardise the deployment process and minimise the chance of mistakes creeping in when deployed by different members of Development or Operations.
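
As a hedged illustration of the automation angle only (Terraform templates themselves are written in HCL, which the session covers), the sketch below drives the standard init/plan/apply workflow from a script so that every deployment follows the same steps; the working directory is a placeholder.

```python
# Sketch: run the same Terraform workflow (init -> plan -> apply) every time,
# whether from a laptop or a pipeline. The config directory is a placeholder
# and the terraform binary must be on the PATH.
import subprocess

def deploy(config_dir: str) -> None:
    subprocess.run(["terraform", "init"], cwd=config_dir, check=True)
    subprocess.run(["terraform", "plan", "-out=tfplan"], cwd=config_dir, check=True)
    subprocess.run(["terraform", "apply", "tfplan"], cwd=config_dir, check=True)

deploy("./infrastructure")
```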
Speaker:

Session Type:
Regular Session (60 minutes)

Track:
Cloud

Level: Beginner

Session Code:

Date: February 01

Time: 11:30 AM - 12:30 PM

Room: Oban

Azure Artificial Intelligence and Machine Learning models are invoked as functions within a Power BI dataflow to create a powerful dataset for your Power BI reports.
Speaker:

Session Type:
Regular Session (60 minutes)

Track:
AI (Artificial Intelligence) & Machine Learning

Level: Intermediate

Session Code:

Date: February 01

Time: 1:30 PM - 2:30 PM

Room: Spey

Azure Cosmos DB is Microsoft’s premier NoSQL cloud-based globally distributed database offering, providing scalable performance and resiliency, customizable consistency guarantees, multiple data model APIs, and comprehensive service level agreements.

In this session, we will explain how to get started in Cosmos DB and demonstrate simple administrative and development operations so you can learn how to go from zero to hero in no time. We will cover many fundamental topics which include:
* Cosmos DB APIs
* Accounts, Databases, and Containers
* Geo-Replication
* Partitioning and indexing
* Consistency and throughput

Azure Cosmos DB is not just the future for Online-Transaction Processing, it is the present!
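
As a rough preview of the "accounts, databases, and containers" topics, here is a small sketch using the azure-cosmos Python SDK; the account endpoint, key, database, container and partition key names are all placeholders, not examples from the session.

```python
# Sketch: account -> database -> partitioned container, then an upsert and a query.
# Endpoint, key, names and the partition key path are placeholders.
from azure.cosmos import CosmosClient, PartitionKey

client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<key>")

db = client.create_database_if_not_exists(id="demo-db")
container = db.create_container_if_not_exists(
    id="orders",
    partition_key=PartitionKey(path="/customerId"),  # partition key choice drives scale-out
    offer_throughput=400,                            # provisioned RU/s
)

container.upsert_item({"id": "1", "customerId": "c-42", "total": 19.99})

for item in container.query_items(
    query="SELECT * FROM c WHERE c.customerId = 'c-42'",
    enable_cross_partition_query=True,
):
    print(item["id"], item["total"])
```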
Speaker:

Accompanying Materials:

No material found.

Session Type:
Regular Session (60 minutes)

Track:
Cloud

Level: Beginner

Session Code:

Date: February 01

Time: 9:00 AM - 10:00 AM

Room: Oban

Welcome to Azure DevOps Duet, a tale about how a development team and an operations team have to bond together and start using Azure DevOps for SQL Server related deployments.

This session will cover the process of building a CI/CD pipeline, starting with getting the team on board and ending with making an actual release.

We will discuss

- the challenges of implementing DevOps
- setting up a database solution project
- getting started with source control
- testing your database releases using tSQLt
- setting up your Azure DevOps pipelines

After this session you will have the tools and knowledge to get started with DevOps and take your development process to the next level.
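
For the tSQLt step, a build agent ultimately just needs to run the test suite and fail the build on any failure. The sketch below shows one possible shape of such a step in Python with pyodbc; the connection string and database name are placeholders, and a real pipeline might use a dedicated task or PowerShell instead.

```python
# Sketch: run the tSQLt suite against a development database and fail the build
# if any test did not succeed. The connection string is a placeholder.
import sys
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
    "DATABASE=MyDatabaseDev;Trusted_Connection=yes;",
    autocommit=True,
)
cur = conn.cursor()

try:
    cur.execute("EXEC tSQLt.RunAll;")       # raises an error if any test fails
except pyodbc.Error:
    pass                                    # failures are read from tSQLt.TestResult below

cur.execute("SELECT Class, TestCase, Result FROM tSQLt.TestResult;")
failures = 0
for test_class, test_case, result in cur.fetchall():
    print(f"{test_class}.{test_case}: {result}")
    if result != "Success":
        failures += 1

sys.exit(1 if failures else 0)
```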
Speaker:

Accompanying Materials:

No material found.

Session Type:
Extended Session (90 minutes)

Track:
Enterprise Database Administration & Development

Level: Intermediate

Session Code:

Date: February 01

Time: 11:30 AM - 1:00 PM

Room: Arran

Azure Synapse is a limitless analytics service that brings together enterprise data warehousing and Big Data analytics. It gives you the freedom to query data on your terms, using either serverless on-demand or provisioned resources—at scale.

In this session we will dive into this limitless analytics service and explore how it brings together enterprise data warehousing and Big Data analytics.

We will explore:
* Data ingestion
* SQL Pools (data warehouses)
* Spark Pools
* SQL-on-Demand (see the sketch after this list)
* Power BI
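
As a small illustration of the SQL-on-Demand item, the sketch below queries Parquet files in the lake through a serverless endpoint; the workspace name, credentials and storage path are placeholders, and the session's demos may differ.

```python
# Sketch: query Parquet files in the lake through the serverless (SQL-on-Demand)
# endpoint. Workspace, credentials and the storage path are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=<workspace>-ondemand.sql.azuresynapse.net;"
    "DATABASE=demo;UID=<user>;PWD=<password>;"
)

sql = """
SELECT TOP 10 *
FROM OPENROWSET(
        BULK 'https://<storageaccount>.dfs.core.windows.net/lake/sales/*.parquet',
        FORMAT = 'PARQUET'
     ) AS sales;
"""

for row in conn.cursor().execute(sql):
    print(row)
```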
Speaker:

Accompanying Materials:

No material found.

Session Type:
Regular Session (60 minutes)

Track:
BI & Analytics/Visualization

Level: Beginner

Session Code:

Date: February 01

Time: 4:00 PM - 5:00 PM

Room: Arran

Databricks is a Unified Analytics Platform, making it easier than ever to do big data analytics in the cloud. However, there are a lot of things you need to know and take into account before diving head-first into a Data Lake. This session is intended for architects and developers who are looking to build a massive-scale data storage and processing solution. I will go through the best practices for the purpose. In addition, I will demonstrate how to unify real-time and batch processing using Azure Databricks. As a result, you should feel comfortable building your own Data Lake for your big data processing needs.
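
As one hedged illustration of unifying batch and streaming (not the speaker's code), the sketch below reads the same Delta location once as a batch DataFrame and once as a stream; the /mnt/... paths are placeholders for storage mounted in a Databricks workspace.

```python
# Sketch: read one Delta location as a batch DataFrame and as a stream.
# The /mnt/... paths are placeholders for storage mounted in the workspace.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Batch view of the raw (bronze) zone
batch_df = spark.read.format("delta").load("/mnt/lake/bronze/events")
batch_df.groupBy("eventType").count().show()

# Streaming view of the same data, continuously appended to a curated (silver) zone
stream_df = spark.readStream.format("delta").load("/mnt/lake/bronze/events")
(stream_df.writeStream
    .format("delta")
    .option("checkpointLocation", "/mnt/lake/_checkpoints/events")
    .outputMode("append")
    .start("/mnt/lake/silver/events"))
```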
Speaker:

Session Type:
Regular Session (60 minutes)

Track:
Cloud

Level: Intermediate

Session Code:

Date: February 01

Time: 4:00 PM - 5:00 PM

Room: Lomond

Remember those "choose your own adventure" books from when we were younger? Yep, that's what we're doing in this session.

We're going to start with a poorly performing query and choose which route we're going to take to see if we can make it better.

You don't need any previous experience in performance tuning for this session, we'll briefly cover each topic as it's chosen.

Potential topics include:
- Indexes
- Query Design
- Settings that might affect performance
- Scalar Functions
- SARGability
- Temp Tables

We'll be voting on where we go at each step to see where our journey takes us.

In the session we'll cover each topic a little, but I will also provide a more in-depth explanation of everything we go through after the session if you want to learn more.
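
To illustrate just one of the potential topics, SARGability, here is a small sketch contrasting a non-SARGable filter with an equivalent SARGable one; the table and column names are made up.

```python
# Sketch of one potential topic (SARGability): the same filter written two ways.
# Table and column names are invented for illustration.

non_sargable = """
SELECT OrderID
FROM   dbo.Orders
WHERE  YEAR(OrderDate) = 2019;        -- function wrapped around the column
                                      -- prevents an index seek on OrderDate
"""

sargable = """
SELECT OrderID
FROM   dbo.Orders
WHERE  OrderDate >= '20190101'
  AND  OrderDate <  '20200101';       -- open-ended range can seek an index on OrderDate
"""

print(non_sargable, sargable, sep="\n")
```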
Speaker:

Accompanying Materials:

No material found.

Session Type:
Regular Session (60 minutes)

Track:
Enterprise Database Administration & Development

Level: Beginner

Session Code:

Date: February 01

Time: 11:30 AM - 12:30 PM

Room: Spey

In this session we will look at a couple of approaches to create a data lake on a budget. The samples will use Python, Spark and some Databricks. It will all be done in Azure, but we will discuss how you could set this up on-prem as well.

You get to decide how far you want to go, from cost-effective to penny-pinching. Don't worry if you've never used any of these technologies; I will start at the beginning.
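
As a hedged sketch of the cost-effective end of that spectrum, the snippet below lands raw CSV files in cheap object storage and rewrites them as partitioned Parquet with plain PySpark; the abfss:// paths, container names and the partition column are placeholders.

```python
# Sketch: convert raw CSV landed in cheap object storage into partitioned Parquet.
# The abfss:// paths, container names and the partition column are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("budget-lake").getOrCreate()

raw = spark.read.csv(
    "abfss://raw@<storageaccount>.dfs.core.windows.net/sales/2020/*.csv",
    header=True,
    inferSchema=True,
)

(raw.write
    .mode("overwrite")
    .partitionBy("country")   # assumes a low-cardinality 'country' column exists
    .parquet("abfss://curated@<storageaccount>.dfs.core.windows.net/sales/2020"))
```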
Speaker:

Accompanying Materials:

No material found.

Session Type:
Regular Session (60 minutes)

Track:
Cloud

Level: Intermediate

Session Code:

Date: February 01

Time: 1:30 PM - 2:30 PM

Room: Jura

One major challenge in the age of "Big Data" is keeping up with the volume and velocity of data with respect to moving it to the Data Warehouse. In this session, attendees will learn about Microsoft's answer to this problem: Azure Data Factory (ADF), and specifically Mapping Data Flows. ADF Mapping Data Flows enables data engineers to construct, execute, and monitor data pipelines, based on the highly scalable Azure Databricks Spark engine, with little to no code.

This session will be very demo-heavy, with demonstrations of effective patterns and practices that have been deployed by some of Microsoft's largest customers throughout the world.
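
Pipelines containing Mapping Data Flows are built visually, but they are typically triggered from automation. As a hedged sketch, the snippet below starts a pipeline run with the azure-mgmt-datafactory SDK; the resource group, factory, pipeline name and parameter are placeholders, not the session's demo objects.

```python
# Sketch: start an ADF pipeline run from automation. All resource names and the
# parameter are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

adf = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")

run = adf.pipelines.create_run(
    resource_group_name="rg-data",
    factory_name="adf-demo",
    pipeline_name="pl_load_warehouse",
    parameters={"loadDate": "2020-02-01"},
)
print("Started pipeline run:", run.run_id)
```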
Speaker:

Session Type:
Regular Session (60 minutes)

Track:
BI & Analytics/Visualization

Level: Intermediate

Session Code:

Date: February 01

Time: 9:00 AM - 10:00 AM

Room: Jura

That's it, data scientists have left the house! Behind them: some scripts written in Python or R, thousands of CSV files, three sheets and two whiteboards of mathematical equations, many PowerPoint presentations, and a clear instruction from the CEO: go to production ASAP! Unfortunately, there is no trace of a deployment procedure. Hopefully, this session will explain how to industrialize data scientists' scripts: how to import and refactor code written in Jupyter Notebooks within VS Code, how to put in place DevOps best practices and apply them to Machine Learning with Azure Pipelines, and some other tips and tricks for a successful go-live.
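
As a hedged example of that refactoring step, the sketch below pulls notebook-style logic into an importable function with a unit test, the shape that lets Azure Pipelines run it on every commit; the function, columns and test are illustrative only.

```python
# Sketch: notebook logic refactored into an importable, testable function.
# The feature, column names and assertions are illustrative.
import numpy as np
import pandas as pd

def add_features(df: pd.DataFrame) -> pd.DataFrame:
    """Feature engineering that previously lived in a notebook cell."""
    out = df.copy()
    out["amount_log"] = np.log1p(out["amount"].clip(lower=0))
    return out

def test_add_features():
    df = pd.DataFrame({"amount": [0.0, 9.0]})
    result = add_features(df)
    assert result["amount_log"].iloc[0] == 0.0
    assert abs(result["amount_log"].iloc[1] - np.log(10)) < 1e-9
```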
Speaker:

Session Type:
Regular Session (60 minutes)

Track:
AI (Artificial Intelligence) & Machine Learning

Level: Intermediate

Session Code:

Date: February 01

Time: 9:00 AM - 10:00 AM

Room: Spey

Aimed mostly at administration and based on real-life scenarios, this popular, audience-interactive session will go through some scenarios DBAs might encounter whilst dealing with SQL Server databases, and you will be provided with some options about what to do.

Members of the audience can then select from the options provided and we will follow that path and see what the outcome is from there. Similar to a role-playing game.

Each selection will have a different outcome, and along the way you will probably learn some new things.

As some of you may have seen on the blog www.KevinRChant.com, I've already had dealings with SQL Server 2019. Therefore, this session has been updated to include SQL Server 2019 content.
Speaker:

Session Type:
Regular Session (60 minutes)

Track:
Enterprise Database Administration & Development

Level: Intermediate

Session Code:

Date: February 01

Time: 4:00 PM - 5:00 PM

Room: Oban

DevOps has fundamentally changed the way we manage IT. Those practicing DevOps well aren't just outpacing their competitors; they are annihilating them. DevOps brings a whole host of new practices, tools and buzzwords. It has made some roles and tasks redundant, while new opportunities have been created.

We all need to learn to survive in this new reality.

In this session we'll cut through the hype and look at the key concepts and findings from the Puppet State of DevOps Reports, The DevOps Handbook and Microsoft's work in this space.

We will finish with an overview of an Azure DevOps Services project that automatically tests and deploys the code from a SQL Server Data Tools (SSDT) project in a public GitHub repo.
Speaker:

Accompanying Materials:

No material found.

Session Type:
Regular Session (60 minutes)

Track:
Enterprise Database Administration & Development

Level: Intermediate

Session Code:

Date: February 01

Time: 9:00 AM - 10:00 AM

Room: Lomond

Microsoft’s data platform and SQL Server come with a plethora of high availability features. Some of these features can work hand in glove with each other, allowing you to configure your SQL Servers to be both highly available and recoverable in the event of the worst happening.

If you are driven by the '9s', have strict SLAs, and uptime is key to you and your business, then combining SQL Server’s high availability features is something you should consider.

In this session we will look at how we can combine SQL Server Availability Groups and SQL Server Failover Cluster Instances to keep our servers highly available while maintaining a secondary site for disaster recovery.
Speaker:

Accompanying Materials:

No material found.

Session Type:
Regular Session (60 minutes)

Track:
Enterprise Database Administration & Development

Level: Beginner

Session Code:

Date: February 01

Time: 1:30 PM - 2:30 PM

Room: Arran

Artificial Intelligence is dominating the world, yet many of us have not had the chance to experience these features, for various complicated reasons. What if this could be made easy, so we could exploit these AI features and visualise the results in our day-to-day Power BI work? This AI session will cover all the AI capabilities within Power BI, and will also use Azure Cognitive Services to build a live bot and QnA, Vision (Face API), and Language (Text Analytics).
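
As a hedged sketch of the Language (Text Analytics) piece, the snippet below scores a couple of sentences with the azure-ai-textanalytics SDK, producing the kind of sentiment data that could then be visualised in Power BI; the endpoint, key and example text are placeholders.

```python
# Sketch: score sentiment with the Text Analytics (Language) service.
# Endpoint, key and the example sentences are placeholders.
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient

client = TextAnalyticsClient(
    endpoint="https://<resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<key>"),
)

docs = [
    "The venue was great and the sessions were excellent.",
    "The coffee queue was far too long.",
]

for doc, result in zip(docs, client.analyze_sentiment(docs)):
    print(result.sentiment, round(result.confidence_scores.positive, 2), "-", doc)
```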
Speaker:

Session Type:
Regular Session (60 minutes)

Track:
AI (Artificial Intelligence) & Machine Learning

Level: Intermediate

Session Code:

Date: February 01

Time: 10:15 AM - 11:15 AM

Room: Arran

Starting with version 2017, SQL Server is supported on Docker and Linux. With SQL Server 2019, you can run a container, or a whole Always On Availability Group, on Kubernetes.
This session will guide DBAs on their path to modernize their skills. Starting with a single container on a Docker host, the session will also cover Big Data Cluster creation and usage through T-SQL and basic Python scripts.
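
As a hedged sketch of the "single container on a Docker host" starting point, the snippet below launches the official SQL Server 2019 image from Python; the container name and SA password are placeholders, and Docker must already be installed and running.

```python
# Sketch: launch the official SQL Server 2019 container image.
# The container name and SA password are placeholders; Docker must be installed.
import subprocess

subprocess.run(
    [
        "docker", "run", "-d",
        "--name", "sql2019",
        "-e", "ACCEPT_EULA=Y",
        "-e", "MSSQL_SA_PASSWORD=<YourStrong!Passw0rd>",
        "-p", "1433:1433",
        "mcr.microsoft.com/mssql/server:2019-latest",
    ],
    check=True,
)
```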
Speaker:

Session Type:
Regular Session (60 minutes)

Track:
Enterprise Database Administration & Development

Level: Intermediate

Session Code:

Date: February 01

Time: 2:45 PM - 3:45 PM

Room: Spey

SQL Saturday Edinburgh (SQLSat927) GOLD sponsor session.
Speaker:

Accompanying Materials:

No material found.

Session Type:
Sponsors session (30 minutes)

Track:
Sponsors Session

Level: Beginner

Session Code:

Date: February 01

Time: 12:45 PM - 1:15 PM

Room: Arran
