Build Schedule

Sessions Found: 36
If you are reading Azure IoT documentation, you will stumble again and again on the phrases "SQL-like query language" or "SQL-like language".
In this presentation we will explore the places in Azure IoT where queries are used and see how to use them, and in the process find out more about these "SQL-like" languages and how being a DB developer can make you an IoT hero.
We will concentrate on the scenarios with the greatest impact, where a little SQL can save you a lot of hassle.
So, we will leave no SQL query unturned in Azure IoT Hub, Stream Analytics, and Power BI, to name a few.
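
For a flavor of the syntax, here is a minimal sketch of two common places such queries appear; the tag, property, and column names are purely illustrative:

    -- IoT Hub device twin query over the devices collection
    SELECT * FROM devices
    WHERE tags.location.plant = 'Plant43'
      AND properties.reported.connectivity = 'Wi-Fi'

    -- Stream Analytics job query: per-device 5-minute average temperature
    SELECT deviceId, AVG(temperature) AS avgTemperature
    INTO output
    FROM input TIMESTAMP BY eventTime
    GROUP BY deviceId, TumblingWindow(minute, 5)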
Speaker:

Session Type:
Regular Session (60 minutes)

Track:
Cloud

Level: Intermediate

Session Code:

Date: January 24

Time: 1:45 PM - 2:45 PM

Room: Room 4

With the multitude of isolation levels, concurrency models, and specialist technologies available in SQL Server, it is no surprise that transaction throughput and correctness can be directly correlated to the ability and knowledge of the person who wrote the code.

In this session, we will reveal how SQL Server concurrency and correctness often go wrong, how we can avoid this, and how we can use our knowledge to design and develop for optimal server throughput for our applications and processes, using tips and tricks gained from real-world scenarios.

We will cover SQL Server’s traditional locking model, In-Memory OLTP, Columnstore, Delayed Durability, and many other technologies and techniques you can use to make your transactions more robust.
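
As a small taste of the knobs involved, the sketch below (against a hypothetical SalesDemo database with an invented OrderLines table) shows two of them, row-versioning-based isolation and delayed durability:

    -- Allow snapshot isolation so readers see a consistent point-in-time view
    ALTER DATABASE SalesDemo SET ALLOW_SNAPSHOT_ISOLATION ON;

    SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
    BEGIN TRANSACTION;
        SELECT SUM(Quantity) FROM dbo.OrderLines WHERE OrderID = 42;
    COMMIT TRANSACTION;

    -- Delayed durability trades a small window of potential data loss
    -- for much higher log-write throughput
    ALTER DATABASE SalesDemo SET DELAYED_DURABILITY = ALLOWED;
    BEGIN TRANSACTION;
        UPDATE dbo.OrderLines SET Quantity += 1 WHERE OrderID = 42;
    COMMIT TRANSACTION WITH (DELAYED_DURABILITY = ON);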
Speaker:

Accompanying Materials:

No material found.

Session Type:
Regular Session (60 minutes)

Track:
Development

Level: Intermediate

Session Code:

Date: January 24

Time: 10:30 AM - 11:30 AM

Room: Room 3

Query Store, a new feature released with SQL Server 2016, allows you to achieve wonders in the SQL Server query tuning universe: from ensuring that an upgrade will work, to knowing exactly when a change to the database created a tuning problem. In this session you will discover how to perform incredible tuning magic with Query Store.
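
As a minimal illustration (database name and IDs are hypothetical), enabling Query Store and finding the slowest captured queries looks like this:

    -- Enable Query Store (SQL Server 2016+)
    ALTER DATABASE SalesDemo SET QUERY_STORE = ON;

    -- The ten queries with the highest average duration captured so far
    SELECT TOP (10) qt.query_sql_text, rs.avg_duration
    FROM sys.query_store_query_text AS qt
    JOIN sys.query_store_query AS q
        ON q.query_text_id = qt.query_text_id
    JOIN sys.query_store_plan AS p
        ON p.query_id = q.query_id
    JOIN sys.query_store_runtime_stats AS rs
        ON rs.plan_id = p.plan_id
    ORDER BY rs.avg_duration DESC;

    -- After a plan regression, pin a known-good plan
    EXEC sp_query_store_force_plan @query_id = 42, @plan_id = 7;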
Speaker:

Session Type:
Regular Session (60 minutes)

Track:
Administration

Level: Beginner

Session Code:

Date: January 24

Time: 11:45 AM - 12:45 PM

Room: Room 3

Azure Databricks is an Apache Spark–based analytics service for big data and data analytics.
In this session we will build Databricks solutions for useful business scenarios.

Data engineers and business analysts (data scientists) can now work on RDD-structured files in collaborative notebooks, using ANSI SQL, R, Python, or Scala, easily covering both analytical and machine learning solutions while also providing the capability to use the service as a data warehouse.
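
For illustration, a Databricks notebook cell can run plain ANSI SQL over files in the lake; the path, table, and column names below are made up:

    -- Expose Parquet files in the data lake as a table
    CREATE TABLE IF NOT EXISTS sales_raw
    USING PARQUET
    LOCATION '/mnt/datalake/sales/';

    -- The same SQL a warehouse developer would write runs on the Spark cluster
    SELECT region, SUM(amount) AS total_sales
    FROM sales_raw
    WHERE order_date >= '2019-01-01'
    GROUP BY region
    ORDER BY total_sales DESC;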
Speaker:

Accompanying Materials:

No material found.

Session Type:
Regular Session (60 minutes)

Track:
Analytics

Level: Intermediate

Session Code:

Date: January 24

Time: 1:45 PM - 2:45 PM

Room: Room 5

What can we automate here, do we still need a data scientist, and what is our role (I am one myself, after all) in this scenario? We will take a short look at the various libraries (H2O, auto_ml, Azure AutoML).

Why and how does this work (from a high-altitude view)? What is the difference between the implementations? What exactly does it do? We will also look at tools like Power BI, Azure ML Workspace, …

Conclusion: these methods are no longer in their infancy; Power BI and many other tools use them when they speak of AI. But is that AI? We will see!
Speaker:

Session Type:
Regular Session (60 minutes)

Track:
Analytics

Level: Intermediate

Session Code:

Date: January 24

Time: 10:30 AM - 11:30 AM

Room: Room 2

DBAs and sysadmins never have time for the fun stuff. We are always restoring a DB for a dev or setting up a new instance for that new BI project. What if I told you that you can make all that time-consuming busy work disappear?

In this session we will learn to embrace the power of automation to allow us to sit back and relax... or rather, focus on the real work of designing better, faster systems instead of fighting for the short time slots in which we can do actual work.

Along the way we will see that we can benefit from the wide world of automation expertise already available to us and avoid re-inventing the wheel, again!
Speaker:

Session Type:
Regular Session (60 minutes)

Track:
Administration

Level: Beginner

Session Code:

Date: January 24

Time: 3:00 PM - 4:00 PM

Room: Room 1

Has your manager come to you and said, "I expect the SQL Server machines to have zero downtime"? Have you been told to make your environment "Always On" without any guidance (or budget) as to how to do that or what that means? Are you facing pressure to have data in Azure as well? Help is here! This session will walk you through the high availability options in on-premises SQL Server and in Azure SQL Database and Managed Instances, and how some or all of them can be combined to achieve the ambitious goals of your management. Beyond the academic knowledge, we'll discuss frequently seen scenarios from the field covering exactly how your on-premises environments and Azure services can work together to keep your phone quiet at night.
Speaker:

Session Type:
Regular Session (60 minutes)

Track:
Administration

Level: Beginner

Session Code:

Date: January 24

Time: 11:45 AM - 12:45 PM

Room: Room 5

With the upcoming release of SQL Server 2019, Microsoft is bringing the super-fast Batch Execution Mode to the processing of large amounts of data, even for traditional Rowstore indexes, on SQL Server 2019 and Azure SQL DB.

Learn with me how and when it will function, which challenges we will meet on the path to making our workloads run blazingly fast, and in which cases one should be very careful about its application and usage.
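
As a rough sketch of trying it out (database and table names are illustrative): batch mode on rowstore is tied to database compatibility level 150, and the actual execution plan shows whether each operator ran in batch or row mode:

    -- Opt the database into SQL Server 2019 behavior
    ALTER DATABASE SalesDemo SET COMPATIBILITY_LEVEL = 150;

    -- A large aggregation over a plain rowstore table is a typical candidate;
    -- in the actual plan, check each operator's "Actual Execution Mode"
    SELECT CustomerID, COUNT(*) AS Orders, SUM(TotalDue) AS Revenue
    FROM dbo.SalesOrderHeader
    GROUP BY CustomerID;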
Speaker:

Session Type:
Regular Session (60 minutes)

Track:
Administration

Level: Intermediate

Session Code:

Date: January 24

Time: 4:15 PM - 5:15 PM

Room: Room 2

In this talk we will delve into the particularities of time series data. We will introduce what time series data is and which specific systems and services exist to support the management and analysis of time series data. Specifically, we will take a look at Azure Time Series Insights and its functionality. We will compare it (mainly) with the open-source system InfluxDB and the TICK Stack, utilizing a practical example that covers the setup and implementation of an analysis task and visualizes the near real-time results accordingly.
Speaker:

Accompanying Materials:

No material found.

Session Type:
Regular Session (60 minutes)

Track:
Analytics

Level: Intermediate

Session Code:

Date: January 24

Time: 11:45 AM - 12:45 PM

Room: Room 4

Azure Cognitive Services allow developers to build powerful AI-based solutions, enabling different capabilities in our software: vision, speech, search, text analytics, language understanding, and much more. Basically, the model is already built by Microsoft; you just need to make an API call to the Azure cloud and the service returns a result. For instance, you send a message and the Text Analytics API returns its sentiment score.

However, there might be cases in which our customers need a local, non-cloud AI solution (either because of limited Internet access or data compliance). This is now possible thanks to the latest update of Azure Cognitive Services, which offers containerization support. Using containers, we can still deliver ML-driven solutions while keeping the data in-house.

In this talk, we'll explore what it takes to configure and use containers in Azure Cognitive Services. Demos will be showcased as well for local Face and Text Cognitive Services.
Speaker:

Session Type:
Regular Session (60 minutes)

Track:
Development

Level: Intermediate

Session Code:

Date: January 24

Time: 9:15 AM - 10:15 AM

Room: Room 2

The data warehouse and BI market is evolving rapidly with the appearance of new cloud-born technologies. We might assume that moving an existing Microsoft-based DWH to the cloud is an easy step, but when we dig a little deeper, we see there are many new technological choices and aspects to how to modernize an existing DWH/BI system in the cloud. Not to mention if we start everything from scratch in a new project designed specifically for the cloud, to utilize cloud flexibility and innovation as much as possible.
Which ETL tool should I use? Data Factory v2 with SSIS and BIML, or Azure Databricks–powered Dataflows? Or Power BI Dataflow? Which is the right choice for running OLAP workloads? Azure AS? Or simply Power BI? When do I need Azure SQL DWH?
In the last couple of years I have helped many customers modernize their DWH landscape partially or fully in the cloud, and during my presentation I will share my findings and recipes for the most common situations I have met. You will have fun. :)
Speaker:

Accompanying Materials:

Session Type:
Regular Session (60 minutes)

Track:
Cloud

Level: Intermediate

Session Code:

Date: January 24

Time: 9:15 AM - 10:15 AM

Room: Room 5

The Common Data Model, as the foundation of Power BI Dataflows and as part of the Open Data Initiative with SAP and Adobe, seems to be a pretty good move from Microsoft, and we want to take a closer look at this approach. In this session we show how the Common Data Model allows you to combine Self-Service ETL and Corporate Data Engineering. We will show you how Power BI and more specialised tools like ADF, DataBricks, etc. can work together on the Azure Data Lake with one common model. We then extend this and show what opportunities this standard brings you when we unleash the possibilities for managing Data Quality and Governance. We will also have a look at how you can integrate CDM into a DataOps methodology.
Speaker:

Session Type:
Regular Session (60 minutes)

Track:
Analytics

Level: Intermediate

Session Code:

Date: January 24

Time: 1:45 PM - 2:45 PM

Room: Room 1

Continuous Intelligence combines the terms Continuous Integration and Business Intelligence and aims at defining and implementing processes that keep the implementation and deployment of your BI applications flexible and as seamless as possible.
Even in the recent past, support for CI processes in BI projects was almost nonexistent. But the last few years have brought some changes to the perception of this topic and shifted the mindset.
Let's look at advantages and challenges for CI in BI and at possibilities to implement such a process for Analysis Services.
Speaker:

Session Type:
Regular Session (60 minutes)

Track:
Development

Level: Intermediate

Session Code:

Date: January 24

Time: 10:30 AM - 11:30 AM

Room: Room 5

In this session we will look at a couple of approaches to creating a data lake on a budget. The samples will use Python, Spark, and some Databricks. It will all be done in Azure, but we will discuss how you could set this up on-premises as well.

You get to decide how far you want to go, from cost-effective to penny pinching. Don't worry if you've never used any of these technologies, I will start at the beginning.
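
To give a flavor of the penny-pinching end (the path and columns are hypothetical): Spark SQL can query raw files sitting in cheap blob storage directly, with no warehouse or ingestion step at all:

    -- Query Parquet files in inexpensive storage in place
    SELECT event_type, COUNT(*) AS events
    FROM parquet.`/mnt/coldstorage/events/`
    GROUP BY event_type;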
Speaker:

Accompanying Materials:

No material found.

Session Type:
Regular Session (60 minutes)

Track:
Cloud

Level: Intermediate

Session Code:

Date: January 24

Time: 4:15 PM - 5:15 PM

Room: Room 6

Two of the most popular modern topics are data science and Power BI. The nice thing is that you can easily combine the two by including data science analyses in Power BI, and you can do this in numerous ways. Many Power BI visualizations already include the Analytics tab, with a plethora of intermediate-level analytical functions available; for example, you can add a trend line and many other lines to the Scatter chart. You can use R and/or Python script sources. You can do the whole analysis in R or Python and then visualize the results in Power BI. You can also use the good old SSAS Multidimensional Data Mining as the source. You can include Azure ML predictions in a Power BI model. With R and Python visuals, you can add the impressive visualizations from these two languages to a Power BI report and dashboard. You can also use R and Python in Power Query for advanced data manipulation. There are also many custom visuals that include data science analyses. This session introduces all of these options.
Speaker:

Session Type:
Regular Session (60 minutes)

Track:
Analytics

Level: Advanced

Session Code:

Date: January 24

Time: 4:15 PM - 5:15 PM

Room: Room 4

Big Data and SQL do not have a lot in common. However, over the last couple of years this has changed, and more and more people want to integrate the data from their Big Data systems into their SQL data warehouses. The most important technologies in the Big Data space are Spark as a technology itself and Databricks as a PaaS solution hosting it. These new tools may be frightening in the beginning, but once you get to know them you will realize that they are quite similar to your regular SQL tools. And this is what this session is about: giving a regular SQL developer insight into Big Data and showing how SQL can still be used to do Big Data processing with Spark and Databricks.
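
As a small example of that similarity (table and column names invented), familiar constructs such as CTEs and window functions run unchanged as Spark SQL on Databricks:

    -- Latest order per customer, exactly as you would write it in T-SQL
    WITH ranked AS (
        SELECT customer_id, order_ts,
               ROW_NUMBER() OVER (PARTITION BY customer_id
                                  ORDER BY order_ts DESC) AS rn
        FROM web_orders
    )
    SELECT customer_id, order_ts
    FROM ranked
    WHERE rn = 1;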
Speaker:

Session Type:
Regular Session (60 minutes)

Track:
Cloud

Level: Beginner

Session Code:

Date: January 24

Time: 11:45 AM - 12:45 PM

Room: Room 2

As with all other items in your toolbox, the datacenter (local or in the cloud) needs to be used correctly.

This session will show the various types and sizes of workloads, show you how to categorize them, look at the requirements of your SLA (Service Level Agreement), and find the right location (cloud, datacenter, hybrid) for the data. To wrap things up, we look at ways to validate that the SLA can be fulfilled and how to estimate and compare the costs.
Speaker:

Accompanying Materials:

No material found.

Session Type:
Regular Session (60 minutes)

Track:
Cloud

Level: Beginner

Session Code:

Date: January 24

Time: 4:15 PM - 5:15 PM

Room: Room 1

Selected questions from the Power BI community (https://community.powerbi.com) will be discussed. All these questions touch foundational concepts, ranging from table iterator functions like SUMX to the scope of variables. They provide some additional and unusual perspectives on some common and not-so-common problems.
Each question comes with its own slides documenting the underlying concepts and a separate PBIX file with additional explanatory measures.
Speaker:

Accompanying Materials:

No material found.

Session Type:
Regular Session (60 minutes)

Track:
Analytics

Level: Intermediate

Session Code:

Date: January 24

Time: 11:45 AM - 12:45 PM

Room: Room 1

You developed a Power BI or Analysis Services Tabular model and you run it on a server with plenty of cores and memory, but your queries do not scale, or your users are not happy with the performance. So what can you do? You can fine-tune the settings of your AS Tabular instance (which usually does not bring a large benefit), you can scale up (which is costly), or you can apply the techniques I am going to show you in this session: techniques that range from optimizing the storage of your model to effectively implementing DAX patterns for maximum performance. And all of that, complemented by digging into engine execution plans, DMVs, tracing activity, and Tabular engine internals.
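
As one concrete example of the DMV digging (run against the Tabular instance, for example from an MDX query window in SSMS), the storage DMVs reveal how much space each column segment takes, a first stop when optimizing model storage:

    -- Per-column segment sizes in the in-memory (VertiPaq) store
    SELECT DATABASE_NAME, TABLE_ID, COLUMN_ID, USED_SIZE
    FROM $SYSTEM.DISCOVER_STORAGE_TABLE_COLUMN_SEGMENTS;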
Speaker:

Session Type:
Regular Session (60 minutes)

Track:
Analytics

Level: Intermediate

Session Code:

Date: January 24

Time: 9:15 AM - 10:15 AM

Room: Room 4

In application development, agile development methods such as DevOps, Continuous Integration, Continuous Delivery, and Continuous Deployment have by now become widely established. As a consequence, corresponding mechanisms and tools are also needed for the database. In many companies, the database has become a bottleneck in an otherwise agile development process. Database specialists find themselves under ever-growing pressure to shorten development cycles. In a database environment that is constantly changing, and in which even short outages can incur very high costs, there is little room for error. It is therefore all the more important to introduce agile development methods, in order to deliver faster results on the one hand and to minimize risk on the other. This talk deals with the particularities of a database environment and the resulting challenges for introducing agile methods in practice.
Speaker:

Accompanying Materials:

No material found.

Session Type:
Regular Session (60 minutes)

Track:
Development

Level: Beginner

Session Code:

Date: January 24

Time: 1:45 PM - 2:45 PM

Room: Room 2
