Build Schedule

Sessions Found: 30
Moving databases and workloads to the cloud has never been easier. For SQL Server, there are a number of products that offer almost perfect feature parity. One of the last technical challenges is getting the security configuration right, because the security model in the public cloud is different and requires a different approach, skill set and knowledge. This session covers governance, risk management and compliance in the public cloud, and specifically focuses on Azure SQL PaaS resources. It provides practical examples of network topologies with their strengths and weaknesses, including recommendations and best practices for hybrid and cloud-only solutions. It explains the orchestration and instrumentation available in Azure, such as Security Center, Vulnerability Assessment, Threat Detection, Log Analytics/OMS, Data Classification, Key Vault and more. Finally, it shows techniques to acquire knowledge and gain an advantage over attackers, such as deception and chaos engineering.
Speaker:

Session Type:
Regular Session (60 minutes)

Track:
Cloud Application Development & Deployment

Level: Beginner

Session Code:

Date: February 23

Time: 9:45 AM - 10:45 AM

Room: S8

The new .NET Core 3.0 supports Windows Forms and WPF, which have been released as open source: we can therefore expect a new impulse in the development of these platforms. In this session we will show which tools we have at our disposal for building client/server architectures based on data access with .NET Core 3.0.
Speaker:

Session Type:
Regular Session (60 minutes)

Track:
Application & Database Development

Level: Beginner

Session Code:

Date: February 23

Time: 3:35 PM - 4:35 PM

Room: S3

Azure Databricks is an Apache Spark–based analytics service for big data and data analytics.
In this session we will build Databricks solutions for practical business scenarios.

Data engineers and business analysts (data scientists) can now work on RDD-structured files using notebooks for collaborative projects, using ANSI SQL, R, Python or Scala, easily covering both analytical and machine learning solutions on the one hand, and also providing the capability to use the service as a data warehouse.
Speaker:

Accompanying Materials:

No material found.

Session Type:
Regular Session (60 minutes)

Track:
Advanced Analysis Techniques

Level: Intermediate

Session Code:

Date: February 23

Time: 2:30 PM - 3:30 PM

Room: S3

Microsoft's services in Azure help us leverage big data more easily and make it accessible even to non-technical users. With the UI in ADF version 2, Microsoft added a new feature: Data Flow, which resembles SSIS components. It is a very user-friendly, no-code toolset.
But is it only a UI addition? Why, and how, does Databricks work under the hood?
Do you want to get to know this new (still in private preview) feature of ADF and unlock the power of modern big data processing without knowing languages like Python or Scala?
We will review this new feature of ADFv2, take a deep dive to understand the techniques involved, compare them to SSIS and/or T-SQL, and learn how a modelled data flow runs Scala behind the scenes.
Speaker:

Session Type:
Regular Session (60 minutes)

Track:
BI Platform Architecture, Development & Administration

Level: Intermediate

Session Code:

Date: February 23

Time: 4:40 PM - 5:40 PM

Room: S8

Azure Data Integration: Choosing between SSIS, Azure Data Factory, and Azure Databricks
Speaker:

Accompanying Materials:

Session Type:
Regular Session (60 minutes)

Track:
Cloud Data Platform

Level: Beginner

Session Code:

Date: February 23

Time: 11:15 AM - 12:15 PM

Room: S8

Lifting and shifting your application to the cloud is extremely easy, on paper. The hard truth is that the only way to know for sure how it is going to perform is to test it. Benchmarking on premises is hard enough, but benchmarking in the cloud can get really hairy because of the restrictions in PaaS environments and the lack of tooling.
Join me in this session and learn how to capture a production workload, replay it to your cloud database and compare the performance. I will introduce you to the methodology and the tools to bring your database to the cloud without breaking a sweat.
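The comparison step the abstract describes can be sketched in plain Python. This is a minimal illustration with invented timings, not the speaker's tooling: it contrasts per-query latencies from a captured baseline run against a cloud replay using medians and an approximate 95th percentile.

```python
# Minimal sketch (invented sample timings): comparing a captured baseline
# workload against a cloud replay by summary latency statistics.
from statistics import median, quantiles

def compare_latencies(baseline_ms, replay_ms):
    """Return median and ~95th-percentile latency for both runs plus the median ratio."""
    def p95(xs):
        # With n=20 cut points, the last one approximates the 95th percentile.
        return quantiles(xs, n=20)[-1]
    report = {
        "baseline_median": median(baseline_ms),
        "replay_median": median(replay_ms),
        "baseline_p95": p95(baseline_ms),
        "replay_p95": p95(replay_ms),
    }
    report["median_ratio"] = report["replay_median"] / report["baseline_median"]
    return report

# Hypothetical per-query durations (ms) from the two environments.
baseline = [12, 14, 13, 15, 40, 12, 13, 14, 16, 13, 12, 15, 14, 13, 90, 14, 13, 12, 15, 14]
replay   = [18, 21, 20, 22, 55, 19, 20, 21, 24, 20, 18, 22, 21, 20, 120, 21, 20, 19, 23, 21]
print(compare_latencies(baseline, replay))
```

A real benchmark would of course replay the actual statements against the cloud database; this only shows how the resulting timings can be reduced to a comparable report.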
Speaker:

Session Type:
Regular Session (60 minutes)

Track:
Enterprise Database Administration & Deployment

Level: Intermediate

Session Code:

Date: February 23

Time: 2:30 PM - 3:30 PM

Room: S8

“Data is the new oil”: we have heard it many times. As IT professionals, it is very important to manage this fuel across our organizations, in order to properly feed the “engines” in the hands of data analysts and data scientists.
Without being able to share and understand the same data easily, each application or data integration project requires a custom implementation, which can be expensive and potentially risky from a business user's point of view.
Here is where the Common Data Model (CDM) comes in: we will look at the underlying CDM concepts and try some hands-on integration with Microsoft Power Platform and Azure data services.
Speaker:

Session Type:
Regular Session (60 minutes)

Track:

Level: Intermediate

Session Code:

Date: February 23

Time: 3:35 PM - 4:35 PM

Room: S1

Are you ready to distribute your Big Data and NoSQL solutions globally? Do you need transparent scaling and replication of data wherever your users are? Azure Cosmos DB is the solution for you, and you will see how to make the most of it with practical examples.
Speaker:

Session Type:
Regular Session (60 minutes)

Track:
Other

Level: Beginner

Session Code:

Date: February 23

Time: 4:40 PM - 5:40 PM

Room: S3

Let’s go beyond the standard visuals available in Power BI for making maps. In this session we won’t talk about Bing or ArcGIS services. We want to explore all the available features for creating custom maps without having to rely on existing ones.
Do you know what a shapefile is? Do you know how to create your own choropleth and import it into Power BI?
What else? R support in Power BI opened the door to the huge number of packages for spatial data analysis and statistical calculations included in the environment.
Do you want to draw multi-layered interactive maps, or perform spatial analytics? With R in Power BI, now you can.
Discover some custom visuals that go beyond simple cartography: MapBox, IconMap, FlowMap.
Last but not least, learn how to use Synoptic Panel, an awesome component that connects areas in a custom image with attributes in the data model and draws the data on a map. There are endless possibilities; the only limit is your imagination!
Speaker:

Session Type:
Regular Session (60 minutes)

Track:
Visualization

Level: Intermediate

Session Code:

Date: February 23

Time: 9:45 AM - 10:45 AM

Room: S1

IoT devices and sensor equipment are frequently disconnected from the network, whether by architectural design or due to unpredictable events. See how to architect and design a system that can manage these events, and learn about the main services and solutions we can choose from.
Speaker:

Accompanying Materials:

No material found.

Session Type:
Regular Session (60 minutes)

Track:
Application & Database Development

Level: Intermediate

Session Code:

Date: February 23

Time: 2:30 PM - 3:30 PM

Room: S6

In a real data mining or machine learning project, you spend more than half of the time on data preparation and data understanding. The R language is extremely powerful in this area, and the Python language is a match for it. Of course, you can also work with data by using T-SQL. In this session you will learn how to gain data understanding with quickly prepared basic graphs and descriptive statistics. You can do advanced data preparation with the many data manipulation methods available out of the box and in additional packages for R and Python. After this session, you will understand what tasks data preparation involves, and what tools you have in the SQL Server suite for these tasks.
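The descriptive-statistics step mentioned above can be sketched as follows. This is a minimal illustration with invented sample data, using only the Python standard library; in the session itself the same task would be done with R, Python (e.g. pandas) or T-SQL inside SQL Server.

```python
# Minimal sketch (invented sample data): quick data preparation plus
# descriptive statistics, the first step of data understanding.
from statistics import mean, stdev

# Hypothetical column with missing values, as pulled from a source table.
ages = [23, 35, 31, None, 42, 29, None, 38, 27, 33]

# Data preparation: drop missing values before analysis.
clean = [a for a in ages if a is not None]

summary = {
    "n": len(clean),
    "missing": len(ages) - len(clean),
    "mean": round(mean(clean), 2),
    "stdev": round(stdev(clean), 2),  # sample standard deviation
    "min": min(clean),
    "max": max(clean),
}
print(summary)
```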
Speaker:

Session Type:
Regular Session (60 minutes)

Track:
Advanced Analysis Techniques

Level: Advanced

Session Code:

Date: February 23

Time: 11:15 AM - 12:15 PM

Room: S3

Structured Streaming is the stream processing module built on the Spark SQL engine. In short, it guarantees exactly-once processing of a message, and it is scalable and fault-tolerant. You can define streaming analyses the same way you would define a batch computation on static data, using the Dataset/DataFrame APIs in Scala, Java, Python or R on Spark's SQL engine.
During the session we will see an overview of the features and an example of how to ingest data with Event Hubs (Kafka-enabled), run an analysis with Spark, and save the results to Cosmos DB.
Speaker:

Accompanying Materials:

Session Type:
Regular Session (60 minutes)

Track:
Other

Level: Intermediate

Session Code:

Date: February 23

Time: 12:20 PM - 1:20 PM

Room: S3

Working in the manufacturing industry means that you must deal with product failures. As a BI developer and/or data scientist, your task is not only to monitor and report a product's health state during its lifecycle, but also to predict the likelihood of a failure in the production phase or after the product has been delivered to the customer.
Machine Learning techniques can help us accomplish this task. Starting from past failure data, we can build a predictive model to forecast the likelihood of a product failing, or give an estimate of its lifetime. And it is now possible to develop an end-to-end solution in SQL Server, thanks to the introduction of advanced analytics tools such as R since the 2016 release.
In this session, we start from the real case of a manufacturing company to create some predictive models: a) a regression model; b) binary and multi-class classification models.
Some reports are also created to deliver the outcome to the stakeholders.
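To make the binary-classification idea concrete, here is a toy scoring function, not the session's actual model: a logistic model that turns hypothetical sensor readings into a failure probability, the kind of output the session builds with R inside SQL Server.

```python
# Minimal sketch (toy, hand-picked coefficients): scoring failure
# probability with a logistic (binary classification) model.
import math

def failure_probability(temperature, vibration, weights=(-8.0, 0.06, 1.5)):
    """Logistic model: p = 1 / (1 + exp(-(b0 + b1*temperature + b2*vibration)))."""
    b0, b1, b2 = weights
    score = b0 + b1 * temperature + b2 * vibration
    return 1.0 / (1.0 + math.exp(-score))

# A hot, strongly vibrating unit should score far higher than a nominal one.
print(round(failure_probability(60, 0.2), 3))   # nominal unit
print(round(failure_probability(110, 2.5), 3))  # stressed unit
```

In practice the coefficients would be fitted from the past failure data the abstract mentions, not chosen by hand.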
Speaker:

Session Type:
Regular Session (60 minutes)

Track:
Analytics and Visualization

Level: Advanced

Session Code:

Date: February 23

Time: 3:35 PM - 4:35 PM

Room: S6

Azure SQL Database Managed Instances are the ideal path for migrating your SQL Server workloads to the cloud, when you need to combine the features of a "traditional" SQL Server instance with the convenience offered by Azure SQL Database.
Let's see what they are, and how we can nimbly move our data onto them in a "DevOps-oriented" way.
Speaker:

Session Type:
Regular Session (60 minutes)

Track:
Enterprise Database Administration & Deployment

Level: Intermediate

Session Code:

Date: February 23

Time: 12:20 PM - 1:20 PM

Room: S8

The use of Proof-of-Concepts in BI projects is by now a very widespread practice. Getting hands on a navigable prototype within a short time lets both the customer and the supplier refine the requirements as work proceeds, limiting the time spent in the initial analysis phase. The cloud is an ideal environment for developing PoCs because, by drastically reducing resource provisioning time, it makes it easy to try alternative approaches and evaluate their cost/benefit ratio. In this session (which is almost a bet) we will try, armed only with an Azure subscription and a web browser, to turn raw data supplied by a hypothetical customer into a navigable model.
Speaker:

Session Type:
Regular Session (60 minutes)

Track:
BI Platform Architecture, Development & Administration

Level: Advanced

Session Code:

Date: February 23

Time: 9:45 AM - 10:45 AM

Room: S6

With SQL Server and Cosmos DB we now have graph databases broadly available, after being studied for decades in database theory, or being a niche approach in open source with Neo4j.
And then there are services like Microsoft Graph and Azure Digital Twins that give us vertical implementations of graphs.
So let's take a walkthrough of graphs in the Microsoft ecosystem.
Speaker:

Session Type:
Regular Session (60 minutes)

Track:
DevOps/ Developer

Level: Intermediate

Session Code:

Date: February 23

Time: 11:15 AM - 12:15 PM

Room: S6

Beyond the data managed in various ways by the different applications that make up a company's information system, SQL Server contains a considerable amount of metadata (data that describes data), which illustrates how our databases were created and how they are managed, as well as showing the properties that distinguish every object hosted by the SQL Server instance. This session illustrates how this collection of information is invaluable for achieving highly efficient management of our data platform.
Speaker:

Session Type:
Regular Session (60 minutes)

Track:
Enterprise Database Administration & Deployment

Level: Advanced

Session Code:

Date: February 23

Time: 12:20 PM - 1:20 PM

Room: S7

Ten small rules, best practices, tips and tools that will make your Power BI projects more effective and efficient. Ten suggestions, from Power Query to modeling to the criteria for choosing the visuals best suited to your needs, drawn from field experience and from discussions with some of the most authoritative voices in the Power BI landscape.
Speaker:

Session Type:
Regular Session (60 minutes)

Track:
Analytics and Visualization

Level: Intermediate

Session Code:

Date: February 23

Time: 2:30 PM - 3:30 PM

Room: S1

The excellent performance of the Power BI engine (VertiPaq), together with its high data compression ratio, often makes optimizing the data models built with Power BI Desktop seem superfluous.
In reality, this is a good practice that should always be part of the development cycle of our models, regardless of their size and complexity.
In this session, after a brief theoretical introduction to how VertiPaq and its compression algorithms work, we will focus on some best practices to follow to optimize our models, and on the tools at our disposal to verify their actual level of optimization. We will also see how it is possible to collect, through DMVs, all the useful information about the data structures of our models using Power BI Desktop, up to building a "Power BI" version of the famous VertiPaq Analyzer tool.
Speaker:

Session Type:
Extended Session (90 minutes)

Track:
BI Platform Architecture, Development & Administration

Level: Intermediate

Session Code:

Date: February 23

Time: 12:20 PM - 1:50 PM

Room: S1

Market Basket Analysis is a methodology that identifies the relationships among a vast number of products purchased by different consumers. It originated as a Data Mining technique to support cross-selling and the shelf placement of products, but it is also used for medical diagnoses, in bioinformatics, in analyses of society based on demographic data, and so on.
In this session we will see how the new Machine Learning Services let us derive the insights of this analysis directly in SQL Server 2017, using the R programming language.
Speaker:

Session Type:
Extended Session (90 minutes)

Track:
Advanced Analysis Techniques

Level: Advanced

Session Code:

Date: February 23

Time: 9:45 AM - 11:15 AM

Room: S3
