The Microsoft MVP Award Program Blog

Thanks for a great MVP Global Summit!


Thank you to all the MVPs who converged on Microsoft World Headquarters last week to make this the biggest and one of the best MVP Global Summits ever!
 
Nearly half of the 4,000 worldwide MVPs took time away from their very busy work and home lives to sit down and join in deep, technical conversations with hundreds of Microsoft engineers and executives from November 2nd through 5th. And on the Saturday before the Summit even started, more than 20 members of the community gathered in a wet field northeast of Redmond to help collect produce for nearby foodbanks. Organized by Visual Studio and Development Technologies MVP Paul Litwin, the post-harvest gleaning for Hopelink surpassed the organization’s goals for the day.

The 2015 MVP Global Summit officially kicked off with a warm welcome to the community by Steve Guggenheimer (Guggs), corporate vice president of Microsoft’s Developer eXperience & Evangelism (DX) group. Then the community launched into more than 600 highly technical, NDA sessions all over the Microsoft campus. They were joined by engineering and marketing teams from across Microsoft, including executives such as corporate vice president of the Developer Division, Brian Harry, corporate vice president of the Microsoft Azure team, Jason Zander, and corporate vice president of the Enterprise Client & Mobility team, Brad Anderson.

Throughout the event, members of the community shared their tips, tricks and technology perspectives in interviews with Channel 9—both in studio and at a pop-up set at Microsoft’s Executive Briefing Center.

Feedback on Twitter for the 2015 MVP Global Summit has been overwhelmingly positive:

 Evaluations for the event are underway. If you were at the Summit and haven’t filled yours in yet, there is still time! We’ll be taking feedback until this Friday the 13th—go to this link.

Thank you again to all the MVPs who made this such a great Summit. If you weren’t able to come this time—we missed you! And we hope to see you next year.


And now, your Friday Fives!


Cloud and Datacenter Management MVP Deb Shinder @debshinder: Herding Cats in the Cloud

Cloud and Datacenter Management MVP Richard Hicks @richardhicks: Enterprise Nirvana with Surface Pro 4, Windows 10, and DirectAccess

Business Solutions MVP Yaniv Arditi @YanivRDT: Why Should You Always Debug with the Plugin Registration Tool

Windows and Devices for IT MVP Kevin Kaminski @kkaminsk: Microsoft Most Valuable Professionals

Business Solutions MVP Paul Mather @pwmather: #ProjectOnline Resource Managements Feature Part 2
 

 

MVPs from around the world come together again for MMS 2015 in Minnesota!


Once again MVPs were front and center at the second annual Midwest Management Summit (MMS) 2015, a premier systems management community event which was held last week on November 9, 10 and 11 in Minneapolis, Minnesota.

MVPs from across the United States and around the world were involved in making the event a huge success! There were a total of 138 sessions, which also included early-morning birds-of-a-feather sessions. MVPs made up well over half of the 75-speaker roster, which also included Microsoft engineers. The community-driven event attracted more than 500 attendees.

Here’s what some of the attendees had to say:

 “Friendly, knowledgeable and approachable MVPs giving back to the community”

  “The pool of speakers was great. I like that we have a diverse set of people that work with customers every day and share some of their experiences. It’s also great that there is a strong Microsoft presence at MMS. It makes for a very well rounded conference.”

  “Great content, great networking, truly a fun/engaging/educational event”

  “An amazing opportunity to hear from the community’s biggest contributors.”

  “Great content, great speakers, great way to meet people who have experience in these products we are all so passionate about.”

  “The in-depth technical knowledge available. Most presenters were excellent. The collection of MVPs and MS engineers, combined with the detail level of the presentations, isn’t something you get at every conference.”

  And here’s feedback from some MMS 2015 speakers:

  “Small event, everybody can talk to each other and the Q&A makes the event special.”

  “The attendees are a really interactive and happy group. Makes presenting and attending sessions a lot more engaging and fun.”

  “You’re doing a fantastic job at reviving what was great about the Microsoft MMS conference that no longer exists. It’s small enough to bring high value to the attendees and the speakers. And the audience was ENGAGED and at an appropriate skill level for the content I was presenting! I have delivered content to crowds that didn’t volunteer responses to the questions asked, but at this event, the Q&A and discussions between speakers and attendees was just wonderful!”

Midwest Management Summit organizer Enterprise Mobility MVP Brian Mason also just announced the MMS conference will return to the Mall of America next year on May 17, 18 and 19. “This helps get us away from the cold of November, gives us the opportunity to teach folks how to stay current with Win10, and we avoid conflicting with other conferences held during the year,” Brian explained.

Check out #mmsoa on Twitter for more information and photos from the event.

#FridayFives!


Cloud and Datacenter Management MVP Doug Finke @dfinke: More Data Wrangling with PowerShell

Cloud and Datacenter Management MVP Trevor Sullivan @pcgeek86: Authoring Custom PowerShell Intellisense with Trevor Sullivan

Office Servers and Services MVP Haarón González @haarongonzalez: Installing SharePoint 2016 Preview in Single Server Farm Mode

Cloud and Datacenter Management MVP Cameron Fuller @CFullerMVP: Targeting OMS solutions to specific systems in Operations Manager

Visual Studio and Development Technologies MVP Richard Hundhausen @rhundhausen: Microsoft, please give us a Scrum process template

Japanese MVPs Help the Enterprise Get the Most from Office 2016


MVPs helped deliver a great Office 2016 learning experience to Microsoft partners and customers recently during an Enterprise Ready Workshop in Japan.

More than 100 participants from a range of industries—including representatives from Konica Minolta Inc., Yamaha Motor Co. and ITOCHU Corp.—gained insights from 15 Japanese Office MVPs. The ten highly interactive sessions focused on the features and capabilities of Office 2016.

The workshop was extremely well received and many participants asked for more sessions like it in the future. 

Automating Deployment on Azure Web Apps


Editor’s note: The following post was written by Windows Development MVP Matias Quaranta as part of our Technical Tuesday series.

Today’s agile development teams require more than old deployment methods like FTP or WebDeploy, where one member owned the “Deploy” task in the delivery pipeline. I’ll go as far as to say that an automated integration and deployment strategy is a necessity for any team that wants to focus on what really matters: writing code and building products.

Opening concepts

Before we talk about Azure’s implementation and mechanics, let’s clearly define the concepts of Continuous Integration, Continuous Delivery, and Continuous Deployment. In a common scenario, the development team interacts with a source control repository by committing changes to the repository files. Continuous Integration refers to automating the processes involved in integrating the changes that each developer makes into the repository and verifying that these changes don’t break the project. Once the project is validated, Continuous Delivery involves automating the process by which the project’s code is rapidly and safely deployed to a staging environment, where you are confident that, if you choose, that code can be deployed to a production environment. Continuous Deployment refers to automating this last step and ensuring that the code is delivered to production without incurring downtime (or reducing downtime to a minimum without breaking a possible SLA).

Integrating repositories

From the beginning, the Azure Web Apps service has allowed automatic integration with source control repositories (GitHub, BitBucket and Visual Studio Online among the most used). This integration allows Azure to detect changes that development teams commit by registering WebHooks on those repositories. These WebHooks notify Azure once the selected branch has received new commits/pushes; Azure then pulls the latest source code version and proceeds to compile and deploy it.
To create this integration, we just need to go into the Settings panel of our Azure Web App and select “Continuous Deployment”.


 
On the next screen we can select which provider suits us better, validate our credentials and pick the repository branch on which we want the deploy process to be based. 


 
Once configured, Azure will make a first pull of the source code, detect what kind of Web App it is (.NET, Node.js, Python, etc.) and build and validate it. Note that this validation process does not (at this time at least) run any kind of unit testing; it basically tries to “build” the project, and if the build fails, that version of the source code won’t be deployed. 
 

Once the first deploy is done, we have successfully configured an automated integration from our repository to our Web App; future commits/pushes on the repository will trigger a new pull/build/deployment process. One interesting feature is that Azure allows us to roll back to previous successful deployments if we need to. 


 
The component responsible for this process is called Kudu, and it’s the environment in which Azure Web Apps “live”. Kudu is an open source project whose main objective is the automation of Git/Mercurial deployments on Azure, but it also provides extensions that add features like logging and analytics or integration with third-party services.

Introducing Deployment Slots

We might be tempted to think this is the correct way of creating a Continuous Deployment pipeline for a production-level application, but we would be making a potential mistake: if we go back to our concepts, we’ll see that to achieve Continuous Deployment we first need to achieve Continuous Delivery.
The scenario we just configured will deploy new code versions directly to our production environment, but as we probably know from experience, a fresh application requires some sort of initialization sequence on its first requests. This is known as a cold start and can cause downtime when it occurs.
We need to deliver our code to a staging environment first; that’s where Deployment Slots come in. With Deployment Slots you can create environments that run on the same instances as your Azure Web App (at no additional cost) but have different URLs, storage, application and integration settings. You can effectively create different versions of your Web App, each integrated with a different repository branch if needed.
In our scenario, we’ll first create a Deployment Slot called staging: 


 
Then, we’ll set up the integration with our repository like we did before. You might wonder: if we leave both production AND staging integrated, won’t changes be deployed to both? You’d be right. That’s why we must Disconnect the repository from our production environment: 


 
Once done, all code changes will be detected by our staging environment and deployed there, and we can test and view them by accessing the slot’s own URL (commonly the URL pattern is http://mysite-{slot_name}.azurewebsites.net). If we are happy with the changes, we can Swap the slots, effectively sending the staging version to production and achieving Continuous Delivery. 


 

AN IMPORTANT NOTE ABOUT SWAPPING

Up until this point, we managed to integrate our repositories and configure slots to achieve Continuous Delivery. The next step is to automate the Swapping process. On the Settings blade, you’ll find the Auto Swap feature. 


 
The Auto Swap process will kick in once the pull from the repository is built and ready. It first makes an HTTP request to the Web App in our staging slot and waits for an HTTP OK (200); if it receives it, it automatically swaps the slot contents with the selected destination (most likely production). This HTTP request provides a warm-up of the Web App and avoids cold starts once the swap is made.
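
If you want to reproduce that warm-up check yourself before a manual swap, a minimal C# sketch along these lines works (the slot URL below is a hypothetical placeholder, not part of the original walkthrough):

using System;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

class WarmUpCheck
{
    static async Task Main()
    {
        // Hypothetical staging slot URL - replace with your own slot address.
        var stagingUrl = "http://mysite-staging.azurewebsites.net";

        using (var client = new HttpClient())
        {
            // A single GET forces the app to initialize (the "cold start").
            var response = await client.GetAsync(stagingUrl);

            // Only consider the slot ready to swap when it answers 200 OK.
            Console.WriteLine(response.StatusCode == HttpStatusCode.OK
                ? "Staging slot is warm - safe to swap."
                : $"Staging slot returned {(int)response.StatusCode} - hold the swap.");
        }
    }
}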

Conclusion
By following these steps you will effectively achieve Continuous Deployment and incur no extra charges on your Web App, creating a failsafe deployment pipeline and letting your development team focus on your core activity.

About the author
Matias is a Microsoft MVP, Azure & Web Engineer, open source contributor and firm believer in the freedom of knowledge.

 

What’s new in App-V version 5.1


Editor’s note: The following post was written by Windows and Devices for IT MVP Ralph Jansen as part of our Technical Tuesday series with support from his technical editor, Windows and Devices for IT MVP Tim Mangan.

App-V 5.1 is the latest and greatest version of App-V and is released with Microsoft Desktop Optimization Pack 2015 R1.

Microsoft Application Virtualization (App-V) eliminates application installations and abstracts applications from the operating system to prevent application conflicts. Virtual applications appear to the user just like any other application.

It transforms applications into centrally-managed virtual services that do not conflict with other applications. This accelerates application deployment, upgrades, patching and retirement by eliminating time-consuming processes and simplifying the application management lifecycle.

Version 5.1 is built on many of the features introduced in version 5 and offers several excellent new features. The following list summarizes the new functionality in 5.1, when compared to 5.0 SP3.

Support for Windows 10

Windows 10 is the latest Windows version released in mid-2015. The App-V client, RDS client and Sequencer are supported from version 10.0.10041.

The Management Console User Interface no longer requires Silverlight but is built on HTML5 and JavaScript

Prior to App-V 5.1, the Management GUI was built on Silverlight. In 2013 Microsoft announced that Silverlight would be discontinued and replaced by HTML5. In version 5.1 the Management GUI has been rewritten in HTML5 and JavaScript and has a modernized theme. The advantage is that it’s fully compatible with Internet Explorer, Spartan (Microsoft Edge), Chrome and Firefox without having to install any additional plugins. 
 

Better navigation in the Management Server console
The Management Server console adds several improvements to simplify navigation. Every console page now has its own URL, making it easier to bookmark pages for quick access in the future.
Admin: http://ManagementServer/Admin
Servers: http://ManagementServer/Servers
Connection Groups: http://ManagementServer/ConnectionGroups
Packages: http://ManagementServer/Packages

Every package and Connection Group now has a number so it can easily be bookmarked. For example: http://localhost:8080/Console/Packages/Package/3006
http://localhost:8080/Console/ConnectionGroups/ConnectionGroup/1005

The Connection Groups page is now part of the main menu of the Management Server console, at the same level as the Packages page.

In the previous version, the Connection Groups icon was displayed when clicking on the packages part of the menu. This has been replaced by a separate icon in the main menu.

Optimization of the Management Server console

The new Management GUI has been optimized. Buttons have moved and notifications are displayed next to a flag icon. It now has a more logical interface for virtual application delivery and for managing Connection Groups, and it changes the way it displays alerts; it also gives you more control over those alerts and messages, letting you choose the messages you want to see.

Icons in the Notification area may occasionally display messages in a flag icon balloon. Many messages just provide some feedback and don’t require any response from you, as in the example in the next figure. That kind of message generally fades away on its own after a few seconds. But you can also close the message by clicking the Close (X) button in its upper-right corner or the Dismiss All button. 
 

Enhanced export and import functionality within the sequencer

One of the new features of the App-V sequencer is the ability to import and export files or a directory to the virtual file system when editing a package in the sequencer.

On the Virtual Registry tab it is possible to import a .REG file with registry keys. The new Replace feature makes it easy to replace a value in the virtual registry; previously, the “Replace” dialog box did not exist, and you had to make changes manually. 

From the Advanced tab it is now possible to import and export the AppxManifest.xml file. This can be used to make changes like removing shell extensions or editing file types.

Enable or disable Browser Helper Objects
A Browser Helper Object (BHO) is a plugin for Microsoft’s Internet Explorer to provide added functionality. When a BHO is recognized by the sequencer it can be disabled from the Advanced tab.

App-V 5.1 supports the use of multiple scripts on a single event trigger for App-V packages
App-V 5.1 supports the use of multiple scripts on a single event trigger for App-V packages, including packages that you are converting from App-V 4.6 to App-V 5.0 or later. This setting enables greater flexibility through the use of multiple scripts.

The single event trigger uses a script launcher application, named ScriptRunner.exe, which is installed as part of the App-V client installation to run multiple scripts. 
 

When using ScriptRunner.exe, make sure that the folder containing ScriptRunner.exe is in the system’s PATH variable.

You can now ensure that only administrators can publish App-V packages
App-V 5.1 has a new option, RequirePublishAsAdmin, which ensures that only users with administrative rights can publish applications. This offers granular control over who can publish packages on a system.

Package conversion improvements
The package converter, which converts packages from the 4.x version to the 5.x version, has been enhanced so it can convert scripts and registry information from source .OSD files.

You can now add information from a subset of the .OSD files in the source directory to the package using the –OSDsToIncludeInPackage parameter.

These are just some of the new and enhanced features in App-V 5.1. With the new features, it is undoubtedly a marked improvement over previous versions.

About the Author
 
Ralph is a Technical Consultant from the Netherlands. He has been working with Application Virtualization since 2007 and has expanded his knowledge over the years. Since 2011 Ralph has shared his knowledge through his own blog ( http://ittechlog.wordpress.com ), where he posts technical articles. In 2012 he was the winner of Microsoft’s App-V “Package Accelerators Contest”. Ralph spends as much time as possible with the App-V community through Twitter (RalJans) and the TechNet Forums, where he helps and keeps in touch with fellow Application Virtualization engineers. His contributions to the community led to an MVP award in April 2014. In 2015 his first book came out, Microsoft Application Virtualization 5.1: Learn to design, implement, upgrade and configure App-V 5.1.

 

 

Cloud and Data Friday Fives!


Cloud and Datacenter Management  MVP Matt Graeber @mattifestation: Investigating Subversive PowerShell Profiles

Cloud and Datacenter Management  MVP Symon Perriman @SymonPerriman: How to Build your Business using the Microsoft Partner Network

Cloud and Datacenter Management  MVP John Joyner @john_joyner: Manage Linux Computers from Microsoft Azure with OMS Agent for Linux

Cloud and Datacenter Management  MVP Jeremy Moskowitz: Wubba heck is WUB (Windows Update for Business)

Cloud and Datacenter Management MVP Deb Shinder @debshinder: Managing Azure VMs with System Center Virtual Machine Manager 2012 R2 (Part 3)


MS FEST 2015 rocked the Czech Republic!


Organized by Microsoft Most Valuable Professionals (MVPs) and Microsoft Student Partners (MSPs) for the ninth time this year, the biggest technical conference in the Czech Republic, MS FEST, took place on October 10-11 in Brno and November 28-29 in Prague.

Held at two popular universities in the country’s largest cities, the event attracted 850 participants and delivered 97 deep technical sessions as well as three hands-on workshops. Fifty-nine MVPs, MSPs, and members of Microsoft delivered content across 19 tracks. The event, which featured Microsoft devices, products and technologies, as well as online MVA content, was well received by the students.

In the spring of 2016, this conference will be held in the same format in Bratislava, Slovakia as a joint event of Czech and Slovak MVPs and MSPs.

All sessions were recorded and most of the videos are already available on Channel9.

 

 

 

 

Here’s your #FridayFives!


Interop on the Road in Brazil


MVPs from Brazil recently climbed aboard the Interop on the Road bus and set off on an unprecedented and innovative technology roadshow. Supported by partners such as Dell, Microsoft, Daruma and Tecmundo, Interop on the Road is the brainchild of Windows and Devices for IT MVPs Andre Ruschel and Jorge Vera, Windows Development MVP Claudenir Andrade, Enterprise Mobility MVP Rafael Silva and the expert Roberto Fonseca. Altogether, the roadshow spanned four states of Brazil and reached 500 participants.

Besides the organizers, the event also featured a strong team of speakers, including MVP Lead Fernanda Saraiva, Visual Studio and Development Technologies MVP Andre Carlucci, Technical Evangelists Osvaldo Daibert and Marlon Light, Open Source Lead Alessandro Jannuzzi and Audience Evangelism Manager Rodrigo Dias.

The journey began on November 16 in Porto Alegre-RS at the University PUC where participants were divided into two rooms, one focused on content for developers and another one for IT infrastructure.

The second stage was hosted at UNISUL in Palhoça-SC on November 18, and the following day the event took place at the university in Curitiba-PR.

On the final leg of the journey, the team was welcomed by the director of the University PUC in São Paulo. There they planted a tree to offset some of the carbon dioxide that the bus engine emitted.

The MVP organizers are already planning Interop on the Road 2016, so stay tuned!

 

 


 

Getting Started with .NET and Docker


Editor’s note: The following post was written by Microsoft Azure MVP Elton Stoneman as part of our Technical Tuesday series with support from his technical editor, Microsoft Azure MVP Richard Seroter.

 

Docker lets you build and run apps more quickly, with less hardware. That’s why application container technology (and Docker in particular) is so popular. You can run hundreds of containers on a server that could only run a handful of VMs, and containers are fast to deploy and can be versioned like the software they run.

 

If you work in the Microsoft space, you might think containers are The Other Guy’s technology (think LAMP, Node or Java), but actually you can run .NET apps in Docker containers today. In this post, I’ll show you how.

 

Application Containers

 

Application containers are a fast, lightweight unit of compute that let you run very dense loads on a single physical (or virtual) server. A containerized app is deployed from a disk image which contains everything it needs, from a minimal OS to the compiled app. Those images are small – often just a couple of hundred megabytes – and they start up in seconds.

 

Docker has led the way in making containers easy to build, share and use. Containers have been the hot topic for the last couple of years, and with good reason. They’re so light on resources that you can run hundreds or thousands of containers on a single server. They’re so fast to ship and run that they’re becoming a core part of dev, test and build pipelines. And the ease of linking containers together encourages stateless architectures which scale easily.

 

If the current pace of adoption continues, it’s likely we’ll see application containers as the default deployment model in the next few years. Container technology is coming to Windows soon, but with .NET Core you can build containers that run .NET apps right now, so it’s a great time to start looking into an exciting technology.

 

Not Just for the Other Guy

 

Containers make use of core parts of the Linux kernel, which lets apps inside containers make OS calls as quickly as if they were running on the native OS. For that you need to be running Linux inside the container, and using Linux to host the container.

 

But the Linux host can actually be a virtual machine running on Windows or OS/X – the Docker Toolbox wraps all that up in a neat package and makes it easy to get started with containers on Windows. It’s a single download that installs in a few minutes; it uses VirtualBox under the hood and takes care of the Linux VM for you.

 

Microsoft is working hard to bring containers to Windows without needing a Linux VM in between. With Windows Server 2016 we will be able to run Docker containers natively on Windows, and with Windows Nano Server we’ll have a lightweight OS to run inside containers, so we can run .NET apps on their native platform.

 

.NET Core

 

Right now, we can make use of the cross-platform .NET Core project to package apps and run them in Linux containers. Docker has a public registry of images, and I’ve pushed some sample .NET Core apps there. Once you’ve installed Docker, you can try out a basic .NET Core app with a single command:

 

docker run sixeyed/coreclr-hello-world

 

When you first run that command, the container image gets downloaded from the Hub and it will take a few minutes. But next time you run it, you’ll already have the image saved locally and it will run in seconds. The output tells you the current date and time:


 

Which might not seem very impressive – but it’s using Console.WriteLine() in a .NET app, and the container which runs the app is running Ubuntu Server. So we have a .NET app which runs on Linux, packaged as a Docker container, that you can run just about anywhere.
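
For reference, the program inside an image like that needs nothing more exotic than ordinary console code. A minimal sketch (my approximation, not the actual source of the sixeyed image) would be:

using System;

class Program
{
    static void Main()
    {
        // Plain .NET console output - the same code runs unchanged whether the
        // host is Windows, OS/X or an Ubuntu-based Docker container.
        Console.WriteLine("Hello from .NET Core! The current time is {0}", DateTime.Now);
    }
}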

 

.NET Core is the open-source version of .NET which is available now. It has a different focus from the full .NET and it’s a modular framework, so you only include the parts you need – the framework itself is composed from NuGet packages. The ASP.NET 5 docs give a good overview of the different frameworks in Choosing the Right .NET for you on the Server.

 

Before you can run .NET Core apps on a Linux (or OS/X, or Windows) machine, you need to install the DNX runtime. This isn’t the full .NET runtime that we have on Windows; it’s a slimmed-down .NET Execution Environment (DNX). You can read the Book of the Runtime to find out how it all fits together, but you don’t need a deep understanding to start packaging .NET Core apps for Docker.

 

When you define a Docker image, you start from an existing base image, and the sixeyed/coreclr-base image which is publicly available on the Hub already has the DNX installed and configured. To containerize your own .NET Core app, just use that base image and add in your own code. In the next part of the post, we’ll walk through doing that.

 

The Uptimer App

 

On GitHub I have a simple .NET Core app which pings a URL and records some key details about the response in an Azure blob – like the response code and the time it took to get the response. The code is available from my coreclr-uptimer repository.

 

This is a .NET Core app, so it has a different structure to a classic Visual Studio solution. There’s no solution file, the Sixeyed.Uptimer folder is the solution, and there’s a project.json file which defines how the app runs and its dependencies. Here’s a snippet from that JSON:

 

"frameworks": {
  "dnxcore50": {
    "dependencies": {
      "Microsoft.CSharp": "4.0.1-beta-23516",
      "WindowsAzure.Storage": "6.1.1-preview",
      "System.Net.Http": "4.0.1-beta-23516"
    }
  }
}

 

Here we’re saying that the app runs under dnxcore50, the latest DNX framework version, and those dependencies are all NuGet packages. .NET Core apps use NuGet just like normal apps, but you can only reference packages which are built for .NET Core – like these preview versions of the WindowsAzure.Storage and System.Net.Http packages.

 

The code which makes use of those packages is standard C#, and that includes all the usual good stuff like timers, disposables and AAT (Async, Await and Tasks), so I ping the URL like this:

 

var request = new HttpRequestMessage(HttpMethod.Get, url);
using (var client = new HttpClient())
{
    return await client.SendAsync(request, HttpCompletionOption.ResponseHeadersRead);
}

 

And then write the output to Azure, appending it to a blob like this:

 

var blockId = Convert.ToBase64String(Guid.NewGuid().ToByteArray());
using (var stream = new MemoryStream(Encoding.UTF8.GetBytes(content)))
{
    stream.Position = 0;
    await blockBlob.PutBlockAsync(blockId, stream, md5, access, options, context);
}

 

The project.json file means we can build and run this app from source code on any machine with a DNX – we don’t need to build it into a platform-specific executable. So this code will run on Windows, OS/X or Linux, and that means we can package it up as a Docker container.

 

Containers are defined with a Dockerfile which encapsulates all the steps to build and run the image. Typically, you create a folder for the container definition, which has the Dockerfile and all the files you need to build the image – like the Sixeyed.Uptimer folder which has the source code for the .NET Core app we want the container to run:


 

When we build the container, we’ll produce a single image which contains the compiled app, ready to run. So in the container definition I need to tell Docker to use the .NET Core base image and copy in the source files for my own app. The syntax for the Dockerfile is pretty self-explanatory – the whole thing only takes 7 lines:

 

FROM sixeyed/coreclr-base:1.0.0-rc1-final
MAINTAINER Elton Stoneman <elton@sixeyed.com>

# deploy the app code
COPY /Sixeyed.Uptimer /opt/sixeyed-uptimer

 

When Docker builds the image, at this point we’ll just have an Ubuntu server image, with .NET Core installed (which is what the base image gives us), and the app files copied but not built. To build a .NET Core app, you first need to run dnu restore, which fetches all the dependencies from NuGet – which we can do with a RUN statement in the Dockerfile:

 

WORKDIR opt/sixeyed-uptimer
RUN dnu restore

 

(The WORKDIR directive sets the current working directory for the image, so dnu will restore all the packages listed in the project.json file in the app folder).

 

At this point in the build, the image has everything it needs to run the app, so the final part of the Dockerfile is to tell it what to run when the container gets started. I add the location of DNX to the path for the image with the ENV directive. Then the ENTRYPOINT directive tells Docker to execute dnx run in the working directory when the container is run:

 

ENV PATH /root/.dnx/runtimes/dnx-coreclr-linux-x64.1.0.0-rc1-final/bin:$PATH
ENTRYPOINT ["dnx", "run"]

 

That’s it for defining the container. You can build that locally, or you can download the version which I’ve published on the Docker Hub – sixeyed/coreclr-uptimer – using the docker pull command:

 

docker pull sixeyed/coreclr-uptimer

 

The .NET app takes two arguments: the URL to ping and the frequency to ping it. It also needs the connection string for the Azure Storage account where the results are stored, and it looks for that in an environment variable called STORAGE_CONNECTION_STRING.

 

The connection string is obviously sensitive, so I keep that out of the image I build. Using an environment variable means we can pass the information to Docker directly when we run a container, or put it in a separate, secure file.
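
As a rough illustration of how such an app can pick up its configuration (a sketch based on the description above, not the exact code in the coreclr-uptimer repository), the entry point only needs the two arguments and the environment variable:

using System;

class Program
{
    static void Main(string[] args)
    {
        // args[0] is the URL to ping, args[1] the ping frequency as hh:mm:ss.
        var url = args[0];
        var frequency = TimeSpan.Parse(args[1]);

        // The connection string stays out of the image and is supplied at
        // run time through the container's environment.
        var connectionString = Environment.GetEnvironmentVariable("STORAGE_CONNECTION_STRING");
        if (string.IsNullOrEmpty(connectionString))
            throw new InvalidOperationException("STORAGE_CONNECTION_STRING is not set.");

        Console.WriteLine("Pinging {0} every {1}", url, frequency);
        // ... the ping loop and blob writes would follow here ...
    }
}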

 

I can run an instance of the container to ping my blog every 10 seconds and store the output in my storage container with this command:

 

docker run -e STORAGE_CONNECTION_STRING='connection-string' sixeyed/coreclr-uptimer https://blog.sixeyed.com 00:00:10

 

And I can run that from any Docker host – whether it’s a development laptop, a VM in the local network, or a managed container service in the cloud. Whatever the host, it will run exactly the same code.

Architecting Solutions with Docker in Mind

 

That app does one small piece of work, so what value does it really have? Recording a response time for one website hit every few seconds isn’t much use. But this project evolved from a real problem where I wasn’t happy with the original solution, and Docker provided a much better approach.

 

I delivered a set of public REST APIs for a client, and a large number were business critical so we wanted them pinged every few seconds, to get instant feedback if there were any problems. The commercial ping providers don’t offer that frequency of service so we wrote our own:


 

The main code running in the tasks is the same as the .NET Core app from this post – but when you write it as a monolithic app for handling multiple URLs, you add a whole lot of complexity that isn’t really about solving the problem.

 

You need storage to record the schedule for pinging each URL; you need a scheduler; you need to manage multiple concurrent tasks; you need a mechanism to know if the app has failed. Suddenly the monitoring component is big and complex, and it needs to be reliable which makes it bigger and more complex.

 

Enter the Docker option, which is a much cleaner alternative. I extracted the core code to a simple .NET app which just pings one URL and records the responses – which is all it needs to do. Now we have a minimal amount of custom code (about 100 lines), and we can use the right technologies to fill in the other requirements.

 

To ping multiple URLs, we can fire up as many concurrent instances as we need and Docker will manage resource allocation. And if we use a cloud-based host for the Docker machine, we can scale up and scale down depending on the number of containers we need to run. Technologies like Mesos and Kubernetes provide a management layer which integrates well with Docker.

 

These lines from a sample shell script fire up container instances which run in the background, each responsible for monitoring a single domain. This will ping a.com every 10 seconds, b.com every 20 seconds and c.com every 30 seconds:

 

docker run -i -d --env-file /etc/azure-env.list sixeyed/coreclr-uptimer http://a.com 00:00:10
docker run -i -d --env-file /etc/azure-env.list sixeyed/coreclr-uptimer http://b.com 00:00:20
docker run -i -d --env-file /etc/azure-env.list sixeyed/coreclr-uptimer http://c.com 00:00:30

 

The --env-file flag tells Docker where to find the file with the storage connection string, so we can keep that safe. The -i and -d flags tell Docker to run the container in the background, but keep the input channel open, which keeps the .NET app running.

 

The work to check one domain is very simple, as it should be. To run more checks, we just need to add more lines to the shell script, and each check runs in its own container. The overhead of running multiple containers is very small, but with the first RC release for .NET Core, there is a startup delay when you run an app.

 

To see how far it scales, I fired up a VM in Azure, using the Docker on Ubuntu Server image from the gallery, which already has the latest version of Docker installed and configured. I used a D1-spec machine which has a single core and 3.5GB of memory allocated.

 

Then I pulled the coreclr-uptimer image and ran a script to start monitoring 50 of the most popular domains on the Internet. The containers all started in a few seconds, but then it took a few minutes for the dnx run commands in each container to start up.

 

When the containers had all settled, I was monitoring 50 domains, pinging each one at 10-30 second intervals, with the machine averaging 10-20% CPU:


 

This is a perfect use-case for containers, where we have a lot of isolated work to run, each of which is idle for much of the time. We could potentially run hundreds of these containers on a single host and get a lot of work done for very little compute cost.

 

Developing .NET Core Apps

 

You can build and run .NET apps on Mac and Linux machines, but you can also develop on those platforms too. Visual Studio Code is a cut-down alternative to the full Visual Studio which is able to work with the .NET Core project structure, and gives you nice formatting and syntax highlighting for .NET and other languages, like Node and Dockerfiles:


 

You can even use a standard text editor for developing .NET Core. The OmniSharp project adds formatting and IntelliSense for .NET projects using popular cross-platform editors like Sublime Text:


 

But both of those projects are in their early days, and I’ve had mixed results with IntelliSense. Fortunately, when you install ASP.NET 5 on a Windows machine with Visual Studio, you can use the full VS to build, debug and run .NET Core code.

 

Visual Studio 2015 can work with project.json files and load the source files into a solution. You get all the usual IntelliSense, debugging and class navigation for references, but at the moment the NuGet Package Manager doesn’t filter available packages by the runtime. So it will let you try to add a standard .NET package to a .NET Core project, and then give you an error like this:

 

NU1002: The dependency CommonServiceLocator 1.3.0 in project Sixeyed.Uptimer does not support framework DNXCore,Version=v5.0.

 

The full Visual Studio is still my preferred IDE for .NET Core apps though. The alternatives are attractively lightweight, but they need a few more releases to be stable enough for real development work.

 

Visual Studio 2015 does a good job of making .NET Core development feel like ordinary .NET development. It’s all familiar, down to the properties page for the project, where you can configure how to run the project in Debug mode – with useful cross-platform features, like specifying the DNX runtime version and values for environment variables:


 

 

Docker and .NET in the Future

 

Windows Server 2016 is coming soon and it will support running Docker containers without an intermediate VM layer, so the sixeyed/coreclr-uptimer image will run natively on Windows.

 

The Windows equivalent of the Linux base image to run inside containers is Windows Nano Server. That technology is also modular, aimed at creating images with very small footprints (currently under 200MB – much smaller than the 1.5GB for Windows Server Core, but still much bigger than the Ubuntu base image, at 44MB). For that approach, .NET Core would still be preferred to the full framework, even if the container could run it.

 

And we’ll have to wait and see if you’ll be able to define a Docker image that starts:

FROM windowsnanoserver.

 

The Dockerfile approach doesn’t fit with how Microsoft currently favors defining images, which is using PowerShell and Desired State Configuration. Also, for Windows-based images to be publicly available on the Docker Hub (or something similar), Windows Nano Server would effectively need to be free.

 

But if the Dockerfile format isn’t supported natively in Windows Server, there’s a good chance that a translator will emerge from the community that takes a Dockerfile and generates a PowerShell DSC script.

 

Application containers are changing the way software is designed, built and deployed and now is a great time to start taking advantage of them with .NET projects.

 


About the author

Elton is a Software Architect, Microsoft MVP and Pluralsight Author who has been delivering successful solutions since 2000. He works predominantly with Azure and the Microsoft stack, but realises that working cross-platform creates a lot of opportunities. His latest Pluralsight courses follow that trend, covering Big Data on Azure and the basics of Ubuntu. He can be found on Twitter @EltonStoneman, and blogging at blog.sixeyed.com.

 

Today’s #FridayFives!


Mexico City Hackathon Internet of Things – Windows 10


Last month, Mexican MVPs along with members of the Microsoft DX Mexico team presented the Hackathon Internet of Things – Windows 10 in the most well-known meetup venue for Mexico City’s IT community: Telmex Hub. For 36 hours, students, developers and IT pros shared code, participated in technical sessions, took on developer challenges and had lots of fun.


The technical sessions were delivered by Visual Studio and Development Technologies MVP José Saturnino Pimentel Juarez and Windows Development MVP Ricardo Pons and included topics like building Adaptable User Interfaces for devices running Windows 10, programming Live Tiles, and integrating Windows Azure services.

Attendees worked in teams of three to develop a Windows 10 app using the Universal Windows Platform and tackled three challenges:

  • App Challenge – Develop an innovative app (something never seen before)
  • Videogame Challenge – Create an addictive game
  • Internet of Things Challenge – Develop an app that shows how IoT is changing the world


You can find photos of this great event here.

 

Upgrading from Lync 2013 to Skype for Business Server 2015 – What You Need to Know


Editor’s note: The following post was written by Office Servers and Services MVP Curtis Johnstone.

Much has been written about the SfB upgrade process. This article provides an updated summary of how best to get started and how to avoid some common issues, with good tips from the field to get you upgraded from Lync Server 2013.

Getting Started

Generally the in-place upgrade requires no new hardware and is straightforward; however, you should familiarize yourself with the process and, most importantly, pay attention to the pre-requisites (bringing all your server components up to the appropriate patch levels).

Much has been written about upgrading to Skype for Business Server 2015 from Lync Server 2013. The best resource for upgrading is the Ignite 2015 session by Microsoft’s Scott Stubberfield, “Getting to Skype for Business: Everything You Need to Know to Upgrade from Lync” (https://myignite.microsoft.com/#/videos/66647450-2f91-e411-b87f-00155d5066d7).

In a nutshell there are 2 ways to do an in-place upgrade:  Offline (with some downtime), or Migration (with no downtime). Fellow Skype for Business (SfB) MVP Dino Caputo has a good article here describing these two modes: Moving to Skype for Business: Know Your In-Place Upgrade Options. Also see Microsoft TechNet article: Plan to upgrade to Skype for Business Server 2015.

If you choose to do the offline method, Scott Stubberfield talks about this option at the 0:20:31 min mark in the Ignite video I mention above.

Answers to Common Questions

  • Do I need to un-pair my pools before upgrading?  No, there is no need to un-pair pools before upgrading.
  • Can I use Pool Failover to upgrade?  No, do not use the Invoke-CsPoolFailover cmdlet to fail over the pool; there is no in-place upgrade support for this state.
  • Are AD changes required (schema changes)?  No. You do need to execute a Forest Prep one time, but there are no schema changes to the Forest or Domain if you are coming from Lync Server 2013. This Forest Prep step just adds the RTCComponentUniversalServices group to the RTCUniversalReadOnlyAdmins group. This is to avoid potential problems with permissions while failing over the CMS Management Store.
  • What order do I upgrade my servers in?  In general order:  Inside –>  Out.  That is, start with the user pools first, other internal server roles (directors, mediation servers), and then the Edge. Having said that, many have upgraded the edge first, and it has been fine.
  • What are the SQL pre-requisites?  The supported SQL versions are listed on TechNet here: Server requirements for Skype for Business Server 2015. To summarize:
    • Local SQL Instances used on the Front-End Server.
      • Lync Server 2013 used SQL Express 2012
      • Skype for Business Server requires SQL Express 2014. The in-place upgrade process will take care of this if the Lync Server 2013 local database instance is running SQL Express 2012 SP1 or greater.
    • Summary of Supported SQL Versions for Skype for Business Server 2015 on the back-end:
      • Clustering supported in all versions of SfB.
      • SQL Server 2008 R2 SP2 (SE/EE), SQL 2012 SP1 (SE/EE), SQL 2014 (SE/EE) are supported.
      • Always On is only supported in the Enterprise edition; it is not supported in Standard.
  • Can I upgrade my Lync 2013 servers in mixed mode with Lync 2010 Servers?   No.  Upgrading in mixed mode is not supported. This graphic illustrates it well:


Tips

  • You will need a separate server (not running Lync Server 2013) to initially run the Skype for Business Topology Builder where you will upgrade the Lync 2013 topology, the databases, and publish the new Topology.
  • No mixed version servers inside of a pool. All Servers within the same pool need to be running the same version. So if you are upgrading a pool you need to bring down the entire pool (e.g. Lync 2013 Server), upgrade all the servers, and then bring up the pool as Skype for Business Server 2015.
  • Stop all services on all servers in the pool being upgraded. Use Disable-CsComputer in scorch mode instead of the Stop-CsWindowsService cmdlet on each Front-End server. Disable-CsComputer prevents the services from accidentally starting up on a reboot before the upgrade is complete. The in-place upgrade process will re-enable those services, but won’t start them. You can use Start-CsPool to start all services in the pool.
  • Do obtain and install KB 2982006. KB 2982006 is a pre-requisite IIS patch for the Skype for Business Server upgrade (on Windows Server 2012 R2). Like many hotfixes, it is described as ‘…intended to correct only the problem that is described in this article’. Do request and install it: you just make a request from the KB article page, and they will send you an email with a link to download it.
  • Make sure the CMS is patched and up-to-date.  If replication of the CMS db has failed and is not up-to-date, the upgrade will fail. To check this, use the Get-CsManagementReplicationStatus cmdlet.  If it returns $true, it is up to date.
  • Pay attention to Prerequisites.  Here is a summary:
    • The upgrade installer checks for at least 32 GB free on the disk – make sure you have that
    • Install CU5+ latest hotfix to all Lync 2013 Topology Components
    • PowerShell RTM version (6.2.9200.0) or later
      • Use $PSVersionTable to check the build number of PowerShell
    • Have at least SQL Server 2012 SP1 installed
    • You will also need one of the following:
      • KB2533623 – Windows Server 2008 R2
      • KB2858668 – Windows Server 2012
      • KB2982006 – Windows Server 2012 R2
  • Use the Start-CsPool cmdlet to Start all Servers in the Pool at the same time when you bring the servers back up.
  • Skype for Business uses new Address Book Normalization cmdlets instead of the former Company Phone Number text file.
    • The Company_Phone_Number_Normalization_Rules.txt is no longer used.
    • You need to import the rules. You can do this using the Import-CsCompanyPhoneNormalizationRules cmdlet.
    • The new Skype for Business cmdlets are Set/Get-CsAddressBookNormalizationConfiguration and Set/Get-CsAddressBookNormalizationRule
  • Skype for Business File Share Usage might be higher after the Upgrade:
    • SfB uses the File Share for the fabric logs.  In Lync Server 2013, the Front-Ends were used for this.
    • If you have many Front-Ends, you should ensure that your File Share has enough room to accommodate the Fabric Logs on each front-end.
    • Poor File Share disk performance (e.g. high I/O), can impact the length of time it takes to join a conference. You can monitor the file I/O of the file share in perfmon if you suspect this is the case.

Resources

Microsoft TechNet – Plan to upgrade to Skype for Business Server 2015

Microsoft TechNet – Server requirements for Skype for Business Server 2015

Ignite in Chicago – Getting to Skype for Business: Everything You Need to Know to Upgrade from Lync

Matt Landis – Step by Step Skype for Business Server 2015 In Place Upgrade

7 Part Series on INSTALLING SKYPE FOR BUSINESS SERVER 2015 – http://itprocentral.com/series/installing-skype-for-business-2015/

Upgrading Lync to Skype for Business – http://pei.com/2015/04/upgrading-lync-to-skype-for-business/


About the author

Curtis has more than 15 years of enterprise application development experience and currently focuses on Unified Communication research with Microsoft Skype for Business, Lync, Office 365, and Microsoft Exchange. His previous Exchange experience involves the development of enterprise monitoring and diagnostic solutions for Microsoft Exchange, and participation in the design, deployment and administration of corporate Skype for Business and Exchange deployments. Curtis is a senior product architect in the Microsoft Windows Management division at Dell Software.


Your Happy Holiday #FridayFives!

European SharePoint Conference 2015



Europe’s largest and most impactful SharePoint & Office 365 conference, European SharePoint Conference 2015 (ESPC 15), took place from November 9th to November 12th in Stockholm, Sweden. ESPC provides the opportunity for 1500 SharePoint professionals from more than 50 countries to access in-depth technical and business content as well as to meet, network and socialize.

Over 100 sessions, covering seven technical tracks, and exceptional keynotes were delivered by members of Microsoft’s team and industry experts, including 44 MVPs from all over Europe, Canada, the USA, and Singapore. The event kicked off with a headline speech by Microsoft’s corporate vice president of OneDrive and SharePoint, Jeff Teper, on the future roadmaps of these important technologies. In addition, senior director of product management for Microsoft SharePoint and OneDrive Seth Patton, senior product marketing manager for SharePoint and Office 365 Bill Baer, IT Unity co-founder Dan Holme, Office Servers and Services MVP Geoff Evelyn and vice president of Workflow Technology at Nintex Mike Fitzmaurice were the event’s keynote speakers.

Sessions included 40+ groundbreaking sessions for developers, 30+ sessions for IT pros and 30+ sessions for decision makers/end users. The numerous networking sessions, announcements and sideline sessions included a Women in Technology Lunch, Microsoft Design Team focus groups, and an invitation-only CIO Summit. Just as in previous years, ESPC organized a Gala Awards night which has been described simply as “legendary.” This year’s event took place in the stunning Stockholm City Hall, home of the Nobel Prize banquet. It included the ESPC SharePoint Community Awards, the Pioneering Award for Startup Excellence and the announcement of Europe’s Top Office 365 influencers.

 

 

#FridayFives to Kick Off the New Year!


Join us in congratulating MVP awardees!



We’re kicking off 2016 with 1,074 new and renewing Microsoft Most Valuable Professionals–MVPs!

These are exceptional community leaders who have demonstrated their deep commitment to helping others make the most of their technology, voluntarily sharing their passion and real-world knowledge of Microsoft products with the community. If you’re involved with technical communities, the chances are good that one of them has helped you. Please join us in saying—congratulations!

This is the first quarter that incoming MVPs have been awarded in Microsoft’s new categories, designed to recognize the way today’s technologists often design, build and innovate across multiple platforms, devices and technologies. (You can find out more about the new MVP Award categories here.)

This is also the first quarter that MVPs have had the opportunity to be recognized for their contributions across multiple award categories, and 31 MVPs have achieved dual award recognitions! These range from awards in Visual Studio and Development Technologies as well as Windows Development to awards in Enterprise Mobility and Windows and Devices for IT.

Each year, Microsoft honors around 4,000 MVPs. They are nominated by Microsoft, other community individuals, or in some cases themselves. Candidates are rigorously evaluated for their technical expertise, community leadership, and voluntary community contributions for the previous year. They reflect Microsoft’s global community, today coming from more than 90 countries and speaking more than 40 different languages—reaching around a million Microsoft customers each day!

MVPs are recognized each quarter for this annual award, which continues to grow and evolve to reflect the development of Microsoft technologies. Congratulations to the new MVPs, and welcome back to renewed MVPs. We are very excited to recognize your amazing accomplishments!

And if you know (or are!) an awesome community leader, go here to make a nomination.

Freeing your Azure data with F# Type Providers


Editor’s note: The following post was written by Visual Studio and Development Technologies MVP Isaac Abraham as part of our Technical Tuesday series with support from his technical editor, Visual Studio and Development Technologies MVP Steffen Forkmann.

F# is a mature, open source, cross-platform, functional-first programming language. It empowers users and organizations to tackle complex computing problems with simple, maintainable and robust code. In this post, I want to discuss how we can use F# to reduce the friction and barrier to entry to dealing with cloud storage within .NET compared to conventional mechanisms that you might be used to.

F#, Type Providers and the Cloud

One of the features that I love showing people new to F# is Type Providers, not only because they are fantastically powerful, but also because they’re just plain awesome to demo! An F# type provider is a component that provides types, properties, and methods for use in your program without you needing to manually author and maintain these types. As we start to deal with more and more of these disparate – and distant – data sources, it’s crucial that we make accessing such systems as painless as possible. Azure Storage is one such system. It’s cheap, readily available and quickly scalable. Blobs and Tables are two elements of Azure Storage that we’ll review in this article.

Working with Blobs

When working with blobs in .NET, we normally use the .NET Azure SDK, which allows us to interrogate our Storage assets relatively easily. Here’s a C# snippet that shows how we might interrogate a container and a blob that has a well-known path: –

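The original screenshot isn’t reproduced here, so the following is a representative sketch of that style of SDK code (the container and blob names are made up for illustration):

using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

class BlobSample
{
    static void Main()
    {
        // Magic strings everywhere: connection string, container name, blob path.
        var account = CloudStorageAccount.Parse("UseDevelopmentStorage=true");
        var client = account.CreateCloudBlobClient();

        var container = client.GetContainerReference("tests");         // hypothetical container
        var blob = container.GetBlockBlobReference("data/sample.txt"); // hypothetical blob path

        // Nothing verifies these names until this call succeeds or fails at runtime.
        var contents = blob.DownloadText();
        System.Console.WriteLine(contents);
    }
}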

 

 

Of course, we’re having to use magic strings here. There’s no compile-time safety to ensure that the container or blob actually exists. Indeed, we can’t actually validate this until we run our application and reach this section of code, unless we resort to unit tests or perhaps copy and paste our code into LINQPad or similar.

The F# Azure Storage Type Provider solves all these problems in one go, by generating a strongly-typed object model at edit and compile time that matches the contents of your blob storage. Here’s how we would achieve the same code as above in F#: –


 

In two lines we can achieve the same thing in a completely strongly-typed manner. You won’t need to write a console test runner either – you can simply open an F# script file and start exploring your storage assets. We can’t mistype the name of a container because we get full IntelliSense as we “dot into” each level of blobs: –


 

 

 

And because this is statically typed, and checked at compile time, if the blob is removed from your container, your code will not even compile. Of course, if you do need to fall back to weak-typing for e.g. dynamically generated blobs etc., you can easily fall back to the standard SDK directly from within the Type Provider (as seen from the AsCloudBlobContainer() method above).

Working with large data sets

In the example above, we’re downloading the entire contents of the blob to our application. When working with large files in blob storage, this might be a problem, so the type provider allows us to treat text files as streams of lines: –


 

 

Here we’re lazily streaming a potentially large file, and reading just up until we find the first 10 lines that contain the word “alice” – we don’t have to download the entire file, and we are using standard sequence query functionality such as filter and take (you can think of Seq as equivalent to LINQ’s IEnumerable extension methods).
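
For comparison, the closest equivalent with the standard SDK and LINQ in C# looks something like the sketch below (names are illustrative); it streams the blob rather than downloading it wholesale, but you still lose the compile-time checking of the names:

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using Microsoft.WindowsAzure.Storage;

class StreamingSample
{
    // Lazily yield one line at a time from the blob's stream.
    static IEnumerable<string> ReadLines(StreamReader reader)
    {
        string line;
        while ((line = reader.ReadLine()) != null)
            yield return line;
    }

    static void Main()
    {
        var blob = CloudStorageAccount.Parse("UseDevelopmentStorage=true")
                                      .CreateCloudBlobClient()
                                      .GetContainerReference("tests")            // hypothetical names
                                      .GetBlockBlobReference("books/alice.txt");

        using (var reader = new StreamReader(blob.OpenRead()))
        {
            // Equivalent of Seq.filter + Seq.take: stop reading after 10 matches.
            foreach (var match in ReadLines(reader).Where(l => l.Contains("alice")).Take(10))
                Console.WriteLine(match);
        }
    }
}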

Working with Tables

Tables are another Storage component that is simple and relatively easy to reason about. Cheap and lightweight, they’re a good way to start storing and querying tabular data in Azure. The trade-off is that they offer relatively few computational features; for example, Tables do not allow relationships or aggregations. Here’s how we might query a table structure that looks like this: –


 

 

The need for stronger typing

If we wish to query this using the standard SDK, we’ll need to manually create a POCO that implements ITableEntity, or inherits from TableEntity, and have properties that match the table type (again, we’ll only know if this is correct at runtime). Then, we need to create a query. The Azure SDK is somewhat inconsistent here in that you can create queries in several ways, and none of them are particularly satisfactory.

Firstly, we can use the weakly-typed TableQuery builder class to manually build the Azure Table query string – this offers us little or no compile-time safety whatsoever. Alternatively, we can use the TableQuery<T> query builder. Unfortunately, this API is somewhat awkward to use in that you can create it in two different ways – and depending on how you construct it, certain methods on the class must not be called. Failing to adhere to this will lead to runtime exceptions: –

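As a reminder of what the hand-built filter route looks like, here is an illustrative C# sketch (the table name and key values are made up; this is not the post’s original screenshot):

using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

class TableQuerySample
{
    static void Main()
    {
        var table = CloudStorageAccount.Parse("UseDevelopmentStorage=true")
                                       .CreateCloudTableClient()
                                       .GetTableReference("employees");   // hypothetical table

        // A weakly-typed filter string built by hand - no compile-time checking at all.
        var filter = TableQuery.GenerateFilterCondition(
            "PartitionKey", QueryComparisons.Equal, "engineering");

        var query = new TableQuery<DynamicTableEntity>().Where(filter);

        foreach (var entity in table.ExecuteQuery(query))
            System.Console.WriteLine(entity.RowKey);
    }
}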

There’s also an IQueryable implementation for tables. This suffers from the fact that you can generate compile-time safe queries that will fail at runtime, as Azure Tables offer an extremely limited query feature set, so it’s extremely easy to write a query that compiles but at runtime results in an exception: –

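And a sketch of the IQueryable route showing the trap described above: the query compiles but uses an operation (string Contains) that the Azure Table service cannot execute, so it only fails when it runs (the entity type and table name are again made up):

using System.Linq;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

public class Employee : TableEntity
{
    public string Team { get; set; }
    public double Cost { get; set; }
}

class QueryableSample
{
    static void Main()
    {
        var table = CloudStorageAccount.Parse("UseDevelopmentStorage=true")
                                       .CreateCloudTableClient()
                                       .GetTableReference("employees");   // hypothetical table

        // Compiles fine, but Contains() is not supported by the Azure Table
        // query engine, so enumerating this query throws at runtime.
        var risky = table.CreateQuery<Employee>()
                         .Where(e => e.Team.Contains("dev"));

        foreach (var e in risky.ToList())
            System.Console.WriteLine(e.RowKey);
    }
}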

 

 

Smarter and quicker Tables with F#

Again, it’s the F# Type Provider to the rescue. Firstly, we don’t need to worry about the hassle of navigating to a specific table, nor about manually building a POCO to handle the incoming data – the TP will create all this for you based on the schema that is inferred from the EDM metadata on the table, so we can immediately get up and running with any table we already have: –


 

 

This will output to the F# REPL within Visual Studio the following: –


 

 

We also have access to a strongly typed Azure Table query DSL that is statically generated based on the schema of the table. This is guaranteed to only generate queries that are supported by the Azure Table runtime, yet it also gives us IQueryable-like flexibility: –


Notice that the query methods for each field are typed for that field – Cost takes in floats, Team takes in strings, and so on – so there’s no chance of supplying data of the incorrect type at runtime.

Conclusion

Using the Storage Type Provider allows us to point to any Azure Storage account we might already have, start working with our data in less than a minute, and change the way we interact with our Azure Storage assets.

Download the Azure Storage Type Provider via NuGet, create an F# script file, provide your connection string, and then just use Visual Studio (or Visual Studio Code) to immediately start navigating through your storage assets.

There’s no need to leave your IDE to an external tool – you can continue with your standard workflow, using an F# script to explore your data. When you’re happy with your code, you can easily move this into a full F# assembly which can be called from C# as part of your existing solution.

More than just using a REPL and a script though, the combination of F# and the Storage Type Provider gives us an unparalleled experience through a stronger type system that lets us be more productive and confident when working with Azure cloud assets.


 

About the author

Isaac is an F# MVP and a .NET developer since .NET 1.0 with an interest in cloud computing and distributed data problems. He nowadays lives in both the UK and Germany, and is the director of Compositional IT. He specializes in consultancy, training and development, helping customers adopt high-quality, functional-first solutions on the .NET platform. You can follow his blog here.

 
