
And now, the Friday Fives!


Automating the collection of CQM data


Editor’s note: The following post was written by Office Servers and Services MVP Lasse Nordvik Wedø as part of our Technical Tuesday series.


Installing and deploying Skype for Business (formerly known as Lync) is one thing, but knowing that everything is OK and in order is something else entirely. Many customers believe that not every bad or poor call is reported to the helpdesk, which leaves them with a nagging feeling that something is not right and does not work.


There is no need for this fear or feeling. If the deployment includes a monitor server (and every deployment should), all the Call Detail Records (CDRs) and Quality of Experience (QoE) data is right there, at your fingertips. All you have to do is take a look at it. Unfortunately, many find the Monitor reports slightly difficult and slow to navigate, and it can also be hard to know exactly what you are looking for.

That’s where the Call Quality Methodology (CQM) comes in. It is a very powerful way of getting exact reports on what’s good, what’s bad and how your deployment is trending. It’s a small script, querying the Monitor Database and dumping a lot of metrics into text files. Using the spreadsheet that comes along with it, you can then get graphs and tables describing the health of your deployment.


I have shown this tool to many customers, and they all fall in love with it. Their only problem is that they forget to run it on a regular basis, which is necessary if you want to keep monitoring your deployment and act on issues as they arise.
That is why I have created a script that automates parts of the process and sends the collected CQM data in a zipped file to a designated mailbox. Here is how you can use my script in conjunction with the CQM tool to create a monthly report on your deployment.


Prerequisites:
• Download and extract the “Microsoft Call Quality Methodology (CQM) Scorecard” package to the server from which the reporting script will run. The current release is available on TechNet: https://technet.microsoft.com/en-us/library/mt126216.aspx
• Download the Get-CqmAutomated.ps1 script from the TechNet Gallery to the same server: https://gallery.technet.microsoft.com/lync/Get-CqmAutomated-807c2095
• A valid user account that can run the script and has access to the Monitor Database (read the CQM documentation for database access details), plus local admin rights on the server where it runs.
• A valid SMTP server, sender and receiver email address
• A copy of the Scorecard on the computer where the analysis will take place
(figure 1: file downloaded to c:\cqm)
 

When run, the Get-CqmAutomated.ps1 script will identify the monitor database (assuming there is only one), detect your database’s date format and launch CQM.PS1 from the specified folder (passing all required parameters).
For automation purposes, I automatically set the current date as the CQM “EndDate” and let the script calculate the “StartDate” based on parameter input, with switches for 30, 60, 90 or 180 days. 30 days is the default if no input is given. Given the necessary email parameters, it will send a zipped file with the content to the designated address (be mindful of the size of log files, and possible mail server restrictions).
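Before scheduling anything, it is worth running the script interactively once to confirm it works in your environment. The following is a minimal sketch using the same parameters as the scheduled task example further down; the exact name of the switch that selects the 30/60/90/180-day window is specific to the script, so check its help before relying on it (30 days is the default):

# Run the report interactively once before scheduling it.
# These are the same parameters used in the scheduled task example below; the switch
# that selects the 30/60/90/180 day window is script-specific, so check
# Get-Help .\Get-CqmAutomated.ps1 for its exact name (the default window is 30 days).
cd C:\CQM
.\Get-CqmAutomated.ps1 -SourceFolder C:\CQM `
    -SendEmailOnComplete `
    -EmailFrom "someone@somewhere.com" `
    -EmailTo "someoneelse@somewhere.com" `
    -SMTPServer "10.0.0.42"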


Use Scheduled Tasks to launch these scripts on a regular basis. One way to configure this is through PowerShell. Here is how to set the task up to run every fourth Sunday.
Create the task (edit path and mail parameters):


$task = New-ScheduledTaskAction -Execute 'Powershell.exe' `
-Argument '-NoProfile -WindowStyle Hidden -command "& {c:\cqm\Get-CqmAutomated.ps1 `
-SourceFolder C:\CQM -SendEmailOnComplete -EmailFrom "someone@somewhere.com" `
-EmailTo "someoneelse@somewhere.com" -SMTPServer "10.0.0.42"}"'

Create the trigger:
$trigger =  New-ScheduledTaskTrigger -Weekly -WeeksInterval 4 -at 11pm -DaysOfWeek Sunday 

Register the scheduled task:
Register-ScheduledTask -Action $task -Trigger $trigger -TaskName "Get-CQMAutomation" `
 -Description "Monthly report from Monitor Database" -User "domain\user" -Password "enterpassword" -RunLevel Highest



Once these steps have been completed, we should be able to see the task in the regular “Scheduled Tasks” list. Please note the task is set to run with highest privileges, whether a user is logged on or not.
(figure 2, scheduled tasks)


 
If everything is set up right, you should be receiving an email when the script is run. The email contains a zipped file with all the collected data, and information about the date format.
The email should look something like this:
(figure 3 email) 


 
The next step is to extract the files into a folder on your local hard drive, follow instructions 1-11 and study the results. Please note that the default separator selected in step “1” is “,” on most systems. This may not be true for all deployments, and the files should be inspected before the actual import.
The value for “Date Format” in step 2 is printed in the email (discovered by the script). The rest is up to individual preferences.
(figure 4 scorecard start) 


 
There is a lot of good information on CQM out there, and one of the best places to look for guidance on using this great tool is Jens Trier Rasmussen’s blog on TechNet (http://blogs.technet.com/b/jenstr/); Jens is one of the contributors to the tool. Another great place to find further information is my fellow MVP Ståle Hansen’s blog, where you can find links to posters, current releases and other bloggers’ pages on CQM: http://msunified.net/tag/call-quality-methodology/
In addition, I will repost this article and any updates to the script on my own blog, under the following tag: http://uc.lawedo.net/search/label/CQM


I hope you all find this tool and post helpful, and I really encourage all Skype for Business administrators to use the CQM on a regular basis. It is a systematic approach to monitoring and maintaining service quality over time.

About the author
Lasse is chief architect of BusinessCloud Microsoft UC Solutions at Datametrix AS, as well as a conference speaker, blogger, usergroup co-founder and hobby photographer in Norway.

 

 

24-hour online conference Collab365 was a blast!


The first of many planned Collab365 events, the COLLAB365 Global Conference, a virtual conference developed and maintained by the community for the community, kicked off at 9am NZST and followed the sun for 24 hours.  The community expanded the vision for the event to include not only Office 365 but also SharePoint and Azure, and garnered the collaboration of more than 120 speakers across 6 continents, including 62 MVPs, 6 MCMs, and Microsoft product managers. Office Apps senior product marketing manager Jeremy Thake and Office 365 senior product manager Mark Kashman presented keynotes.


The event consisted of three language tracks, managed by:


• Jon Manderville, Fraser Beadle, Hugo Esperanca, Andy Talbot, Hugh Wood and Mark Jones for the English track.
• Office Servers and Services MVPs Gokan Ozcifci, Patrick Guimonet, and Sébastien Levert for the French track.
• Alberto Diaz Martin and Juan Carlos González Martín for the Spanish track.

Altogether, Collab365 drew around 5,000 attendees and an estimated online audience of 680,185. The on-demand video is available here.

 

Launch your weekend with the Friday Fives!


Ready, Set, Summit!


The 2015 MVP Global Summit is just one week away! Banners welcoming MVPs from around the world are flying all over Microsoft’s world headquarters in Redmond. Here are some of our favorite tips for getting the most out of what Microsoft Azure MVP Magnus Mårtensson called, “a huge honey pot of knowledge and experience—a continuous fire-hose-draught of learning.” (Thanks to all the MVPs who've contributed ideas over the years!)


Summit is a well-oiled machine with great transportation, receptions, meals and sessions. Just follow the flow and let the organizers and MVP old-timers lead the way. It is hard work to take in so much great content in such a short time. Do take time to get to know the teams at Microsoft better and become friends with the folks who build your products! For the rest of the year you will be interacting with these connections in your work. Also, network, network, network! Hang out and chat, strike up a conversation in the halls during the day and in the bars at night. First question is always “What’s your expertise?” All of us experts come together and we really thrive on learning and sharing.
--Magnus


If you are local or are flying in early, come join us for a special community volunteer event, which will be held the afternoon of Saturday, 31 October, followed by a dinner later that evening. We will be working with Hopelink for a harvest glean that will benefit local food banks. Sign up at Seattle GiveCamp at MVP Summit 2015.
--Visual Studio and Development Technologies MVP Paul Litwin


While Bellevue has nice shopping and restaurants near the hotels (including a Microsoft store), if you have a chance to go into downtown Seattle, take it! It is a beautiful, walkable city. Pike Place is a great place to spend a couple of hours or an afternoon. Also, I know everyone has heard this a million times, but here it is again: Don’t break the NDA. Microsoft trusts us with their “secrets” and breaking that trust hurts all of us. Twitter will survive if you don’t tweet everything that happens at the Summit. The blogging world won’t end if you don’t tell everything you know.
-- Outlook MVP Diane Poremsky


At the airport, look for people with MVP logos on their coats, bags, scarves, hats and canes. They have probably attended a Summit before. Say hi and make a friend. You may be able to share a cab to Bellevue. There are two Microsoft stores you can go to during the Summit. The first one you will come across is the one in Bellevue Square, and the second is on campus next to where the buses will drop you off. The campus store has employee prices, but a limited selection. Also, go down for breakfast early and take your time talking to the people at your table.
-- Visio MVP John Marshall

Get Comfortable – Find a seat, take off your jacket, grab a drink and relax a little.  It’s a busy week with lots of great content, speakers and feedback sessions.  If you are new to the group, there are a few ways to break the ice, and one of them is taking off your coat.  Nothing says “I’m about ready to leave” like someone wearing a jacket; of course, the swanky MVP jackets are not included in this ;)  If you are not new to the group, make sure you do your best to make others feel welcome.  A former colleague used the term “rock piles” to describe a group of people huddling, and it’s not overly welcoming.  Open up that circle and make new friends while welcoming your old friends.

Outside the Box – While we are all deep subject matter experts in an area and MVP Summit is all about gathering those groups together, find some time to attend a session that is outside of your area of expertise.  While most sessions are targeted, there are opportunities to sit in on some other sessions and learn a bit about something you may not work with daily but are interested in learning more about.


Go Offline – MVP Summit allows you to connect with other MVPs and the PGs for a short while, face to face, once a year.  OK twice this year but still.  Close the laptop lid, stop hiding behind a screen.  There isn’t a whole lot of anything you can tweet, blog, or post to Facebook anyway.  Emails can wait until the next break, or the evening.  If the email is incredibly urgent quietly take your device into the hallway to respond.  As a speaker, and an attendee, the last thing I want to hear is you pecking away at your keyboard.


NDA – Most importantly respect the NDA.  "If it happens at Summit it stays at Summit” is a clichéd approach but the best one.  It allows MS to continue hosting the MVPs and sharing information as well as ensures you won’t be booted from the program :)
--Cloud and Datacenter Management MVP Rodney Buike

Continuous Integration with Ionic Framework using Visual Studio Online – From Check In to Device


Editor’s note: The following post was written by Visual Studio and Development Technologies MVP Mike Douglas as part of our Technical Tuesday series with support from his technical editor, Visual Studio and Development Technologies MVP Jeff Bramwell.

Sample code used in this example can be found in the following GitHub repository: https://github.com/mikedouglasdev/ionic_taco_vsobuild_sample

Creating the Ionic Tabs Starter App
Building mobile apps?  How would you like to build a mobile app, check in code, and within minutes be able to verify its functionality on any of your devices?  This article will walk you through setting up an Ionic project within Visual Studio 2015, checking it into Visual Studio Online, kicking off a build using the new build system, and uploading the app to Ionic View.  This will allow you to view and test your application within minutes.  You will see how Visual Studio 2015 Tools for Apache Cordova provides a rich development environment for building hybrid mobile apps that take advantage of your existing web skills.  The Ionic framework is an AngularJS-based framework built on top of Cordova, and it provides a feature-rich ecosystem for building applications, including one of my favorite features, Ionic View.  Ionic View is a mobile app that can run your application on your devices without deploying to the store.

 
To get started, you will need Visual Studio 2015 installed with the Tools for Apache Cordova configured.  You can configure this locally or create an Azure VM running Visual Studio 2015 with the Cordova Tools.  Having the Cordova tools configured provides most of the tools you need, and they can be tricky to configure manually.  You will also need Chrome to run the app in the Ripple emulator.  Next, you need to install Ionic.  You will get used to command line interfaces (CLIs) when working with Cordova and Ionic.  To install Ionic, open the developer command prompt and run the following command.


npm install ionic -g 

The -g tells npm, or the Node Package Manager tool, to install it globally so it isn’t just within the scope of the project.


There are several ways to create an Ionic project, including the CLI and through Visual Studio.  Ionic has extensive documentation on using its CLI to create a project (a sketch is shown below).  For now, let’s create the project using Visual Studio.  Within Visual Studio 2015, open Tools > Extensions and Updates.  Select Online, search for Ionic and install the Ionic Tabs Template.
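If you would rather take the CLI route mentioned above, the same tabs starter can be scaffolded from the developer command prompt (or PowerShell). This is only a sketch; the project name is arbitrary and it assumes Node.js and the Ionic CLI installed earlier are on your path:

ionic start MobileApp tabs    # scaffold a new project from the Ionic "tabs" starter template
cd MobileApp
ionic serve                   # preview the app in a browser while developing

For the rest of this walkthrough, though, we will stick with the Visual Studio template.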


 
Next, create a new project using the template located under Other Languages > JavaScript > Apache Cordova Apps
 

At this point you can run the app locally in the Ripple emulator.  You may receive a popup notification asking to allow the traffic through your firewall; allow this if prompted.  However, there are a few additional files we will need to add to build our application.  Let’s review these.


Gulp
Gulp is a popular and powerful build system based on streams that we are going to use to perform the build, packaging, and eventually the upload to Ionic.  A full background on Gulp is outside the scope of this article; I’ll cover the important parts for our scenario.  Instead of starting from scratch, let’s start with a sample Gulp file that showcases the primary features we need.  Gulp itself is very simple, but there is extensive plugin support to handle virtually every need.  Microsoft has wrapped a lot of the Tools for Cordova complexities into a Gulp plugin.  The following sample script shows how it is used to build projects on OSX and Windows platforms.
https://github.com/Chuxel/taco-team-build/tree/master/samples/gulp  
 
The package.json is also important.  This file includes the plugins for gulp to use.  Add these two files to the root of your project as shown in this image. 
 

The last thing we need to do before checking the solution into the repository is to add the ionic.project file.  This file specifies the name of the app and the app id used during the Ionic upload process to make the app available through Ionic View.  To create this, you first need an Ionic account.  Go to http://ionic.io/ and create an account.  Then open a command prompt and navigate to your project’s folder.  Run the following commands: ionic login (using the account you just created) and ionic upload.  The second command will make your app available on any iOS or Android device through the Ionic View app.
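For reference, the sequence from the project folder looks like this (run from the developer command prompt or PowerShell, after creating the account at http://ionic.io/):

ionic login     # authenticate with the Ionic account you just created
ionic upload    # upload the app so it becomes available in Ionic View (this records the app id)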


Continuous Integration from check-in to device
Now, we want the latest version of our app available with each check-in.  Let’s configure our application for continuous integration so the ionic upload process happens with each check-in.  VSO includes the new cross-platform build system, where we can use the hosted build agent to build and upload our app.  Each VSO account includes 240 free build minutes each month, and going over this amount is cost effective.


First we want to commit and push it into source control.  Create a free Visual Studio Online account if you don’t already have one and create a Team Project using Git for source control.  The cross platform build agent at this time only works with Git.  Clone your repository and add your solution to the repository.


Once your solution is committed and pushed into the repository, navigate to your VSO account and go to the BUILD hub.  The build system in TFS 2015 and VSO allows you to create and manage your builds directly from the web.  To create a new build, click on the green plus in the top left and choose Empty, because we are not using Visual Studio to do our build; we are going to use gulp.


To build the application using gulp, add the following build steps.  The npm install step is required before gulp can be used.  You don’t need to pass any arguments, but do set the Working Directory on each of these steps to your project’s relative path as shown below.  The second npm install step is used to install ionic on the hosted build agent; pass ionic -g as arguments to this step.  The last step is the Gulp task; again provide the relative path to the gulp file, which will look something like MobileApp/MobileApp/gulpfile.js.
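If you want to sanity-check what the hosted build agent will do, the same sequence can be run locally from a PowerShell or developer command prompt in the project folder. This is a sketch; the paths mirror the sample layout used above:

cd .\MobileApp\MobileApp
npm install                      # restore the packages listed in package.json (gulp plugins, taco-team-build, etc.)
npm install -g gulp              # only needed if the gulp command is not already available
npm install -g ionic             # the build agent needs the Ionic CLI for the upload step
gulp --gulpfile .\gulpfile.js    # run the default gulp task, just like the Gulp build step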
 
At this point, you can save your build and click on Queue build to test your build process.  When it completes, it should show a passing build.  Once we enable continuous integration from the Triggers tab, we have our app building and creating the app packages on the server, but we are still missing two key things: I might want to take the build and install it manually on one of my devices, and we still need the build to upload our app to Ionic.


To make the packages available once a build completes, add a Publish Build Artifacts step to the build.  The gulp file organizes all of the packages in the bin folder.  Use the variable $(Build.SolutionDirectory) as the root and contents to include everything under the bin folder as shown. 
 

To enable our build to upload the app, we first want to store our user name and password as variables.  On the Variables tab you can define variables you want to use throughout the build.  Variables can also be stored securely so other users can’t see the password and it doesn’t have to be stored in source control.  Regular variables will be available to our gulp script as environment variables.  For the secret variable, we will have to pass it into our gulp file and read it in using a gulp plugin.
 
Update the gulp task settings to explicitly call the default task and pass in the password. 
 
Back in the project, we need to add some additional functionality to our gulp script.  We need to add two plugins, gulp-run and yargs.  Gulp-run will allow us to run the command line tool for ionic upload, and yargs will allow us to retrieve the password argument passed in.  Add these two to your package.json as shown.
 
Finally, update the gulp file to include the gulp-run and yargs plugins.  Create a new gulp task, name it default, rename the previous default task to something like copyRelease and specify it as a dependency for the new default task.  Within the new gulp task, assign the email value from the environment variable and the password from the argument.
 
Commit and push those changes to the repository.  If you selected CI as the build trigger, it should be queued automatically; otherwise queue it manually.  When it completes, you should see success!
 
In the build log, under the gulp default --password $(IONIC_PASSWORD) step, you can see that the application was successfully uploaded.
 
To download the packages, you can click on the Artifacts link to either Download or Explore online. 
 
To view the app from a device, download the Ionic View app from the App Store / Google Play Store.  Log in using the same account as above and you will see your app.  Click on it and download the latest version to run your app as if it were deployed to the store.
 
You now have a cross-platform hybrid mobile application, built on the Ionic framework using Visual Studio 2015 Tools for Cordova and the new build system in Visual Studio Online, that can be tested on iOS and Android devices simply by checking in code.


I hope this will help you get started with TACO and Ionic.  There are so many more things you can do.  With Ionic you can set up distribution lists to view your app and so much more.   When you create that million dollar idea, don’t forget about me  :)

Additional Resources


Comprehensive guide to Continuous Integration with Cordova apps https://github.com/Microsoft/cordova-docs/blob/master/articles/tutorial-team-build/General.md


Use the Visual Studio Tools for Apache Cordova with Visual Studio Online or Team Foundation Services 2015
http://taco.visualstudio.com/en-us/docs/tfs2015/

About the author

Mike is a Solution Consultant at Deliveron Consulting Services.  He specializes in working with development teams to implement Application Lifecycle Management (ALM) solutions to eliminate traditional silos between development, testing, and project management to establish cohesive processes with the Visual Studio ALM tools. Follow him @mikedouglasdev.

 

Here they are—your Friday Fives!


Welcome MVPs to the 2015 MVP Global Summit!


All the deep dive technical sessions (around 600 of them!), rich two-way conversations and the excitement of seeing friends and colleagues you haven't seen for a year start right now at the MVP Global Summit. This year it's bigger than ever. Nearly half of the 4,000 MVPs from around the world arrived in Bellevue, Washington this weekend for what one MVP described as "like a family reunion, except you like everyone!" Steve Guggenheimer (Guggs), corporate vice president of Microsoft's Developer eXperience & Evangelism (DX) group, welcomed them in his blog this morning.


Rethinking SharePoint Project Cost Estimation with Azure Machine Learning


Editor’s note: The following post was written by Office Servers and Services MVP John Timney as part of our Technical Tuesday series with support from his technical editor, Office Servers and Services MVP Mark Stokes

Costing SharePoint projects, or any projects for that matter, can be really difficult. But what if you could simplify that dramatically by looking at data science and Azure?


Bring on Azure Machine Learning (AML) and you suddenly have an opportunity to amalgamate your legacy cost information to better predict future outlay for pretty much anything.  What makes it even better is that it’s not really very hard to start using AML and then to move on to looking at alternative strategies for other types of predictive outcomes, and you don’t have to be an experienced data scientist to dip your toe in the water and get real fiscal potential from drag and drop solutions in Azure.


In this article, we will look at what Machine Learning (ML) is and focus on just enough to get using AML.  We’ll briefly show you how to take a dataset of legacy SharePoint project costs and then use AML to predict what future project costs would be without any calculations, and it is remarkably accurate.


What is Machine Learning?
Machine learning is a type of artificial intelligence (AI) that provides computers with the ability to learn without being explicitly programmed. It focuses on the development of computer programs that can teach themselves to grow and change when exposed to new data without having to recode algorithms, or in some cases to even have to build them at all. 

 
As an example: the Stanford Artificial Intelligence Lab built algorithms that learned to recognize a cat from sampling millions of (cute) cat images on the internet, without having any prior concept of a “cat.”  I’m not sure what a “cute” cat is specifically, as opposed to an ugly one, but I expect all cats have similar types of facial properties.  The challenge is learning how to craft code to tell us what a cat’s features might be, when faced with random images.


So ML is the ability to learn from data without writing statements that then force the outcome. As the data changes, so does the learning, and typically the more data the more accurate the learning as more comparisons and evaluations can be carried out by the learning engine.


The example above literally said “here are lots of cat images; learn what features imply a cat so we can compare whether a zebra is a cat.”  Now, think: how would you even begin to code this?  I’m not a coder, but determining cat from zebra likely implies lots of “if” statements or “case” statements, or equally complex code evaluations and pixel comparisons involving whiskers, tails and black and white stripes.


The question with ML, though, is really one of possibility!  What would you like to do that you can't currently do?  What do you do now that you would like to do better?  It allows you to reshape “what problem am I trying to solve?” into “what outcome am I seeking to predict?”  Can I predict the cost of a SharePoint engagement, because I know what SharePoint engagement characteristics look like?  Can I, in fact, recognise a cat without coding for a cat?

Why Learn about Machine learning?
The question then moves to: why learn about machine learning?  There are some general reasons to consider:
• Learn it when you can’t easily code it (e.g. for speech recognition) 
• Learn it when you can’t easily scale it (e.g. recommendations, Spam detection) 
• Learn it when you have to easily adapt/personalize data (e.g. predictive typing)
• Learn it when you can’t easily track it (e.g. robot control)


Never mind detecting pictures of cats: what about child protection and detecting pictures of vulnerable children, or better and more accurate speech recognition to help in language translation or the fight against terrorism?  SPAM is growing at an exponential pace, and we can’t just continue to develop more pattern matches and bigger SPAM engines that fail, so we “learn” what SPAM looks like and predict phishing patterns.


New technology is already embracing this: Office Delve (https://products.office.com/en-us/business/explore-office-delve) uses predictive learning via the Office Graph to determine things related to what you are seeking in a search: likes and personalisation, related items, and things that people you are connected to like that you would also like.  It learns not only what you are searching for but things about you, adding a personal dimension to the results.  It learns from your interactions, without needing to be recoded, and improves itself as data grows.


What about AI-based gaming?  We can’t code what you might do next, but we can predict, based on known patterns of how people have played, what they might do next, and the game can respond and get better as you play: more aggressive and realistic competition.  What if we coupled HoloLens (http://www.microsoft.com/microsoft-hololens/en-us) with Machine Learning and speech translation to visually interpret actual sign language into speech, so that users who don’t know sign language can interact better with deaf users across the internet, in a different language?


What about handwriting? How does the post office manage with so many handwritten letters?  The obvious challenge in text automation is enabling computers to interpret the endless variation in handwriting, given we are all unique.
Let’s take a simple letter F:


 
How could we analyse the recognition of F? I have no rule in my head that tells me what an F looks like; that’s a learned skill! But I recognise F because I was taught my alphabet as a child.
F simply looks like F!  In English anyway; in Arabic it’s an entirely confusing affair for me, as we can see from this snap.  I don’t recognise the Arabic F, and there are four possible basic shapes to make it worse, but it also has characteristics that tend to repeat: a loop, a curl, a dot!


 
Well, I suppose then, if it looks like an F, if it is shaped like an F, if it appears like an F in a word full of letters and the other predicted letters make up a recognised word, it’s a fair probability that it is an F; it’s likely more right than wrong.
But how many variations are there of F?  We would all write it differently, although, like the cat’s face, it’s full of characteristics.


If we can get a computer to make that same prediction using some form of process, that’s very useful, and in fact very valuable to the post office and other businesses dealing with flowing text. Industry has trained computers to accurately read human handwriting.  Today, with the help of machine learning, over 98% of all mail is successfully processed by machines.  They know and recognise F in its many forms, even in Arabic.


So there is clearly value in the modern IT professional understanding Machine Learning; it helps to solve problems we didn’t know we actually had until we thought about how we might want to do something better, or specifically with less development time and thus less effort and cost.


A lesson in History
Few of us have much experience in AI and Expert Systems, as we are not data scientists.  I spent quite some time working on Expert Systems creating financial forecasting predictions for the UK government in the 1990s, so I almost qualify. But while few IT pros are data scientists, a lot has happened since then that makes it a bit easier to use this tech: perhaps you don’t need to be a data scientist to do data science!


The ability to actually perform SPAM detection around 1997 was a bit of a milestone event – but it is reaching its limits now and we need a new approach.  Can AML better detect spam by learning about SPAM in a new way? 


 
Figure 1 http://download.microsoft.com/documents/hk/technet/techdays2015/Day2/DBI216


Look at how far we have really come already, though!  By 2008 we could make traffic predictions; by 2010 we could track unplanned user movement on the Xbox; by 2012 we could do speech-to-speech language conversion in real time, soon to appear in Skype.


In 2014 Microsoft launched Azure Machine Learning (AML) – and now I don’t need to invest millions to build similar types of predictive solutions to those so heavily invested in before this.  AML is in fact just a hosted cloud service, and a very, very powerful one at that!


Initially, machine learning and Expert Systems generally were very expensive to set up, which was a real barrier to entry.  Data has also historically been siloed in industry, and that is the very data we need to do the learning. Our view of data silos in organisations meant we needed industry itself to change the way we thought about data stores; data in the cloud is now acceptable, generally at least.


Tools to do ML were historically rather cumbersome and expensive, or we spent years of effort developing solutions that were constrained and bound to tight rules and complexity.  Data science was just that: the realm of scientists.


Even when we made the investment to build something to work like ML, it was really constrained by our data centre’s ability to host or scale, and now even those rules themselves have evolved and changed with Azure hosting and storage.


Thus, history has led us to a very interesting point: the Gartner Hype Cycle for 2015 lists Machine Learning as something that seriously needs attention from business, as it will mature in two to five years.


Figure 2 http://www.gartner.com/newsroom/id/3114217


Early adoption is going to be key for IT pros to get familiar with this new Azure cloud service and make their mark, and the possibilities are really quite endless.  Which brings us nicely to one possibility: can we predict the cost of SharePoint projects with AML, and more specifically, can we do it without writing code or cranking open Excel and working out a lot of IF/THEN formulae?


Building a SharePoint Project Cost Engine
Using AML is really quite easy for beginners, but when you open it all up, it looks a bit daunting.

 

The working environment consists of the AML portal in Azure (the bit that holds all the backend services), ML Studio (a drag-and-drop interface for building experiments like this one) and the API service, so we can interact with it remotely and securely.


That technology stack is supported by people: the Azure Ops team who maintain the backend services, data scientists (or at least temporary ones like you and me) who build the experiments, and developers, who like to get whizzy with AML through the API.


We will focus on Portal, ML Studio and on being a data scientist and leave the rest aside.  It’s not really necessary for our experiment.  If you are a developer or a solution architect then you might want to investigate the API side once you’ve got your head around this.

 
We start by logging into ML Studio, in the Azure portal.  You can use a trial account for this, so I’ll assume you have access and can work your way to the below URL and log in: 
https://studio.azureml.net/


ML Studio looks like that shown below, and these are experiments I have active currently in my ML studio tenancy: 
 
 
Figure 4 ML Studio

The DataSet
We also need data for AML to consume.  Cleverly, you can use the Reader module in AML to load data into an experiment from a web URL, Hadoop and Hive, and a number of Azure sources.  We’re just going to pre-load a CSV file into AML.


So, let’s take a sample set of fictitious SharePoint project costs as a CSV file, made up of an amalgamation of various priced objects and a final price for the implementation of around 1,000 projects.
 

Figure 5 Dataset of project costs

Note: the XLS file is also supplied; it shows how the sample costs were derived using formulae.

The CSV contains a typical set of information regarding a SharePoint build: is it on premises or cloud hosted; what were the data volumes; did we need Azure backup; how many physical or virtual servers were required; how many use cases required configuration; if it was hybrid, what was the complexity level; and so on.  Just like a real project cost analysis built on experience, the more repeated examples of data you have, the more likely an accurate prediction of a final implementation price.

We will upload this CSV extract into the datasets section of ML Studio by adding a new Dataset.


Once we have our dataset loaded, we need to build an experiment and consume the dataset.  


Interestingly, you can easily view the dataset with the “Open in Notebook” option at the bottom of the page, which opens the Jupyter Python editor (http://blogs.technet.com/b/machinelearning/archive/2015/07/24/introducing-jupyter-notebooks-in-azure-ml-studio.aspx) and allows you to code around and filter the data in ML Studio.  We do not need that, but it demonstrates the power of and investment in ML Studio, as this is a fairly recent addition to ML.


If you view the dataset in Excel or in Jupyter (or right-click and Visualize), you may also notice there are no calculations performed within the dataset and no formulae applied, as this is an extract of just data, decision points and prices.  How then can you actually make a cost prediction?  Well, AML provides lots of modules to choose from to help build experiments, but for this type of experiment we can use a simple Linear Regression module.


The AML manual (https://msdn.microsoft.com/en-us/library/azure/dn905978.aspx) suggests that we should use linear regression when you want a very simple model for a basic predictive task. Sounds about right as Linear Regression tends to work well on high-dimensional, sparse data sets lacking complexity – just like this one!  A good start perhaps?


The Experiment
Our experiment really couldn’t be much simpler, which demonstrates how easy it is to start experimenting with AML and looking at how we might solve problems differently in this new AI world. We will build our experiment using only drag and drop.
Using ML Studio, create a new Experiment and select a Blank experiment.  It will appear similar to that shown below in Figure 7:


 
Figure 7 Blank Experiment Screen


You will see, it is telling you to drag and drop and that’s really all we will do.  

Drag the Project Costs to the model, and then add the Clean Missing Data module.  If you can’t find it, just type project into the Search box and then drag it over.  To simplify using ML, Microsoft have created the Search Experiment Items box (top left) so all of the components can be found using search.


Why do we add a Clean Missing Data module?  Well, prediction is based on consistency, not guesswork, and missing data in any single record can unduly influence the output, so we remove any records that might corrupt our results.


Set the properties for the Clean Missing Data module, and in the Cleaning Mode property choose to remove the entire row.
There is no SAVE button; it auto-saves in the background for you.  There is also no UNDO button.


Replicate the entire experiment (as shown in figure 9) in ML Studio by dragging the component modules and replicating the connections beginning with the previously uploaded dataset.   
 

Figure 9 SharePoint Costing Experiment

Interestingly, AML also doesn’t seem to like labels for columns that have spaces or odd characters in them, but it won’t tell you that until you finally try to consume a service you’ve spent an age building!
At this point, you could also have added a Project Column Selector module
(https://msdn.microsoft.com/en-us/library/azure/dn905883.aspx) and limited the scope of inbound data to evaluate from the XLS. For simplicity, we will not do that, as we need all inbound data anyway, but it is worth expanding your experiment once you have the basics under your wing to see what differences you can make.


When you have the Train Model module added, you need to select a column from the dataset to focus on as the output for the Linear Regression training activity.  Select the ProjectCost label in the properties window via the Launch Column selector.


 
Figure 10 Launch Column selector

This is the single output we actually seek and will eventually be a return value for a web service that AML will automatically create for us to interact with the experiment remotely.


Training a model like this is supervised machine learning. You provide a dataset that contains historical data from which AML can then learn patterns. The experiment uses the data to extract statistical patterns and builds a model, which it then passes to the scoring module for consumption.  You don’t have to write any code to do this, although you can create R code (https://azure.microsoft.com/en-gb/documentation/articles/machine-learning-r-quickstart/) to make it do more if you need to.  You’ll also notice that we are only using one training model; you can use as many as you like, and different types of modules, to strengthen the outputs once you understand the basics.  For this example, one training model is enough.


Once you have it all connected up and none of the modules are showing warnings, you are ready to press the RUN icon at the bottom of the ML Studio screen.  When you RUN an experiment you should expect to see lots of green ticks, but that is it really. If you are sitting waiting to see something cool before you move on, you might be a bit disappointed.


 
Figure 11  Test Run completed


You want lots of small green ticks to appear in the modules, indicating that AML has successfully executed that module.  If you get a RED warning in any module, then go back and check the connections and inputs.


Running the Experiment
So, we have data, and we have an experiment that will consume that data and make predictions.  However, we have no easy way to test that experiment at this stage.  We need to tell ML to add Web Service endpoints for us.


At the bottom of the ML editor screen, there is an icon entitled setup web service, and you need to select Predictive Web Service. 


When you select this, ML starts modifying your experiment and adds an entirely new “trained” Predictive Experiment based on your Training Experiment.  The Azure "help" dialog may also point out interesting things on the page if this is the first time you have done this.


Studio will add the web services end points so that you can actually interact with it as a remote consumer.  Your Studio should then look similar to that in figure 13: 


 
 Figure 13  Web Service End-Points automatically added to Predictive Experiment


You have now created an AML Predictive experiment in Azure, and you could even publish that to the AML Gallery to allow other users to consume it.  Give it a test RUN before you move to the next step. As before, you want lots of green ticks!


Testing our Model with Inputs
There’s not much use in having a predictive engine, and a stack of legacy SharePoint project data to base predictions on, unless we can interact with it.  AML solves that problem for you too, as it doesn’t want data scientists fiddling with code remotely when they should be spending their time building experiments.  AML provides a test harness via Microsoft Excel, but to access it we need to deploy the web services we added to our experiment.


Select the Deploy Web Service icon and AML will create a secure interaction method, including a secure API key for all you coders, and provide Excel and web-based test interfaces.

So, you now have a predictive cost engine using 1,000 rows (albeit of fictional data) of SharePoint project cost outcomes to experiment with and predict future project cost outlay.


The web-based TEST interface is very basic, if functional. I’d suggest you download the Request/Response Excel file, accept the security caveats when you open Excel and provide some values, which it will pass to AML and update a price in real time in the Excel file.  Either method should suffice for testing.

 Figure 15 Testing out Predictions

Try using the input values in the initial CSV upload and see if you can get both predicted and accurate output.  In the example shown in figure 15, if we replicate the first line from the CSV as input using the TEST web service, it will predict an accurate value of 190275 for those project variations for the ProjectCost output.  Not bad considering we’ve not written a line of code!
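If you prefer scripting the test instead of using Excel, the same Request/Response service can be called from PowerShell. This is only a sketch; the endpoint URL, API key and column names below are placeholders, and the real values and exact request format should be copied from the Request/Response API help page that AML generates for your web service:

# Placeholder values - copy the real URL, API key and column names from your
# service's Request/Response API help page in ML Studio.
$apiKey = "<your API key>"
$uri    = "https://<region>.services.azureml.net/workspaces/<workspace-id>/services/<service-id>/execute?api-version=2.0&details=true"

$body = @{
    Inputs = @{
        input1 = @{
            ColumnNames = @("Hosting", "DataVolumeGB", "Servers", "UseCases")   # illustrative column names
            Values      = @(, @("OnPremises", "500", "4", "12"))                # one row of input values
        }
    }
    GlobalParameters = @{}
} | ConvertTo-Json -Depth 5

$result = Invoke-RestMethod -Uri $uri -Method Post -Body $body `
    -ContentType "application/json" -Headers @{ Authorization = "Bearer $apiKey" }
$result.Results    # the response includes the predicted ProjectCost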

 
Compare variations on your input to the values in the example spreadsheet containing the calculations of how the project costs were derived, and see if you can get AML to predict results that are not in the actual CSV values.


Of course, you should not use this to "actually" predict SharePoint project costs for customers. It is an example only and not representative of the actual complexity of calculating SharePoint project costs.  Hopefully it has inspired you to go and try AML to build yourself a real costing engine. 


Time to play!
https://azure.microsoft.com/en-gb/documentation/services/machine-learning/
Welcome to the new and very much improved world of Data Science.

About the author
John has been in the industry for over 20 years and is currently a Managing Enterprise Architect at Capgemini.  John specialises in Enterprise Assurance on large scale Microsoft programs of work, typically involving hundreds of thousands of users, with a strong focus on SharePoint and Office 365, PaaS, IaaS, Azure and Hybrid.  He is an international conference speaker and has in his past life co-authored a number of technology related books on fields as diverse as SharePoint, .Net and Java development.  John manages the North East branch of the SharePoint User Group in the UK. Follow him @jtimney.

 

MVP Global Summit Friday Fives!

Thanks for a great MVP Global Summit!


Thank you to all the MVPs who converged on Microsoft World Headquarters last week to make this the biggest and one of the best MVP Global Summits ever!
 
Nearly half of the 4,000 worldwide MVPs took time away from their very busy work and home lives to sit down and join in deep, technical conversations with hundreds of Microsoft engineers and executives from November 2nd through 5th. And on the Saturday before the Summit even started, more than 20 members of the community gathered in a wet field northeast of Redmond to help collect produce for nearby foodbanks. Organized by Visual Studio and Development Technologies MVP Paul Litwin, the post-harvest gleaning for Hopelink surpassed the organization’s goals for the day.



The 2015 MVP Global Summit officially kicked off with a warm welcome to the community by Steve Guggenheimer (Guggs), corporate vice president of Microsoft's Developer eXperience & Evangelism (DX) group. Then the community launched into more than 600 highly technical, NDA sessions all over the Microsoft campus. They were joined by engineering and marketing teams from across Microsoft, including executives such as corporate vice president of the Developer Division, Brian Harry, corporate vice president of the Microsoft Azure team, Jason Zander, and corporate vice president of the Enterprise Client & Mobility team, Brad Anderson.


Throughout the event, members of the community shared their tips, tricks and technology perspectives in interviews with Channel 9—both in studio and at a pop-up set at Microsoft’s Executive Briefing Center.

Feedback on Twitter for the 2015 MVP Global Summit has been overwhelmingly positive:

 Evaluations for the event are underway. If you were at the Summit and haven’t filled yours in yet, there is still time! We’ll be taking feedback until this Friday the 13th—go to this link.


Thank you again to all the MVPs who made this such a great Summit. If you weren’t able to come this time—we missed you! And we hope to see you next year.

And now, your Friday Fives!



Cloud and Datacenter Management MVP Deb Shinder @debshinder: Herding Cats in the Cloud

Cloud and Datacenter Management MVP Richard Hicks @richardhicks: Enterprise Nirvana with Surface Pro 4, Windows 10, and DirectAccess

Business Solutions MVP Yaniv Arditi @YanivRDT: Why Should You Always Debug with the Plugin Registration Tool

Windows and Devices for IT MVP Kevin Kaminski @kkaminsk: Microsoft Most Valuable Professionals

Business Solutions MVP Paul Mather @pwmather: #ProjectOnline Resource Managements Feature Part 2
 

 

MVPs from around the world come together again for MMS 2015 in Minnesota!


Once again MVPs were front and center at the second annual Midwest Management Summit (MMS) 2015, a premier systems management community event which was held last week on November 9, 10 and 11 in Minneapolis, Minnesota.

MVPs from across the United States and around the world were involved in making the event a huge success! There were a total of 138 sessions, which also included early-morning birds-of-a-feather sessions. MVPs made up well over half of the 75-speaker roster, which also included Microsoft engineers. The community-driven event attracted more than 500 attendees.

Here’s what some of the attendees had to say:

 “Friendly, knowledgeable and approachable MVPs giving back to the community.”

  "The pool of speakers were great. I like that we have a diverse set of people that work with customers everyday and share some of their experiences. It's also great that there is a strong Microsoft presence at MMS. It makes for a very well rounded conference.”

  “Great content, great networking, truly a fun/engaging/educational event”

  “An amazing opportunity to hear from the community’s biggest contributors.”

  “Great content, great speakers, great way to meet people who have experience in these products we all so passionate about.”

  “The in depth technical knowledge available. Most presenters were excellent. The collection of MVPs, MS engineers combined with the detail level of presentations isn't something you get at every conference.”

  And here’s feedback from some MMS 2015 speakers:

  “Small event, everybody can talk to each other and the Q&A makes the event special.”

  “The attendees are a really interactive and happy group. Makes presenting and attending sessions a lot more engaging and fun.”

  “You're doing a fantastic job at reviving what was great about the Microsoft MMS conference that no longer exists. It's small enough to bring high value to the attendees and the speakers. And the audience was ENGAGED and at an appropriate skill level for the content I was presenting! I have delivered content to crowds that didn't volunteer responses to the questions asked, but at this event, the Q&A and discussions between speakers and attendees was just wonderful!”

Midwest Management Summit organizer and Enterprise Mobility MVP Brian Mason also just announced that the MMS conference will return to the Mall of America next year on May 17, 18 and 19. “This helps get us away from the cold of November, gives us the opportunity to teach folks how to stay current with Win10, and we avoid conflicting with other conferences held during the year,” Brian explained.

Check out #mmsoa on Twitter for more information and photos from the event.

#FridayFives!


Cloud and Datacenter Management MVP Doug Finke @dfinke: More Data Wrangling with PowerShell


Cloud and Datacenter Management MVP Trevor Sullivan @pcgeek86: Authoring Custom PowerShell Intellisense with Trevor Sullivan


Office Servers and Services MVP Haarón González @haarongonzalez: Installing SharePoint 2016 Preview in Single Server Farm Mode


Cloud and Datacenter Management MVP Cameron Fuller @ CFullerMVP: Targeting OMS solutions to specific systems in Operations Manager


Visual Studio and Development Technologies MVP Richard Hundhausen @rhundhausen: Microsoft, please give us a Scrum process template

Japanese MVPs Help the Enterprise Get the Most from Office 2016


MVPs helped deliver a great Office 2016 learning experience to Microsoft partners and customers recently during an Enterprise Ready Workshop in Japan.

More than 100 participants from a range of industries—including representatives from Konica Minolta Inc., Yamaha Motor Co. and ITOCHU Corp.—gained insights from 15 Japanese Office MVPs. The ten highly interactive sessions focused on the features and capabilities of Office 2016.

The workshop was extremely well received and many participants asked for more sessions like it in the future. 


Automating Deployment on Azure Web Apps


Editor’s note: The following post was written by Windows Development MVP Matias Quaranta as part of our Technical Tuesday series.

In today’s agile development teams, we require more than the old deployment methods like FTP or WebDeploy, where one member owned the “Deploy” task in the delivery pipeline. I’ll go as far as to say that an automated integration and deployment strategy is a necessity for any team that wants to focus on what really matters: writing code and building products.

Opening concepts

Before we talk about Azure’s implementation and mechanics, let’s clearly define the concepts of Continuous Integration, Continuous Delivery, and Continuous Deployment. In a common scenario, the development team interacts with a source control repository by committing changes to the repository files. Continuous Integration refers to automating the processes involved in integrating the changes that each developer makes into the repository and verifying that these changes don’t break the project. Once the project is validated, Continuous Delivery involves automating the process by which the project’s code is rapidly and safely deployed to a staging environment where you are confident that, if you choose, that code can be deployed to a production environment. Continuous Deployment refers to automating this last step and ensuring that the code is delivered to production without incurring downtime (or reducing this downtime to a minimum without breaking a possible SLA).

Integrating repositories

From the beginning, the Azure Web Apps service has allowed automatic integration with source control repositories (GitHub, BitBucket and Visual Studio Online among the most used). This integration allows Azure to detect changes that development teams commit by registering WebHooks on those repositories. These WebHooks notify Azure once the selected branch has received new commits/pushes; Azure then pulls the latest source code version and proceeds to compile and deploy it.
To create this integration, we just need to go into the Settings panel of our Azure Web App and select “Continuous Deployment”.


 
On the next screen we can select which provider suits us better, validate our credentials and pick the branch of the repository on which we want the deploy process to be based.


 
Once configured, Azure will make a first pull of the source code, detect what kind of Web App it is (.NET, Node.js, Python, etc.) and build / validate it. Note that this validation process does not (at this time at least) run any kind of unit testing; it will basically try to “build” the project, and if the build fails, that version of the source code won’t be deployed.
 

Once the first deploy is done, we have successfully configured an automated integration from our repository to our Web App; future commits/pushes on the repository will trigger a new pull/build/deployment process. One interesting feature is that Azure allows us to go back to previous successful deployments if we need to.


 
The component responsible for this process is called Kudu, and it’s the environment in which Azure Web Apps “live”. Kudu is an open source project whose main objective is the automation of Git/Mercurial deployments on Azure, but it also provides extensions that add additional features like logging and analytics or integration with third-party services.

Introducing Deployment Slots

We might be tempted to think this is the correct way of creating a Continuous Deployment pipeline for a production-level application, but we would be making a potential mistake. If we go back to our concepts, we’ll see that to achieve Continuous Deployment we first need to achieve Continuous Delivery.
The scenario that we just configured will deploy new code versions directly to our production environment, but as we probably know from experience, a fresh application requires some sort of initialization sequence on its first requests; this is known as a cold start and can cause downtime when it occurs.
We need to deliver our code to a staging environment first, and that’s where Deployment Slots come in. With Deployment Slots you can create environments that run in the same instances as your Azure Web App (at no additional cost) but have different URLs, storage, application and integration settings. You can effectively create different versions of your Web App, each integrated with a different repository branch if needed.
In our scenario, we’ll first create a Deployment Slot called staging:


 
Then, we’ll set up the integration with our repository like we did before. You might wonder: if we leave both production AND staging integrated, won’t changes be deployed to both? You’d be right. That’s why we must disconnect the repository from our production environment:


 
Once done, all code changes will be detected by our staging environment and deployed there, and we can test and view them by accessing its own private URL (commonly following the pattern http://mysite-{slot_name}.azurewebsites.net). If we are happy with the changes, we can swap the slots, effectively sending the staging version to production and achieving Continuous Delivery. 
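The swap itself can also be scripted, which is handy if you want to gate it on your own smoke tests before promoting a build. A minimal sketch with the AzureRM module (names are hypothetical):

# Promote the staging slot to production (hypothetical names).
Switch-AzureRmWebAppSlot -ResourceGroupName "MyResourceGroup" `
    -Name "mysite" `
    -SourceSlotName "staging" `
    -DestinationSlotName "production"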


 

An important note about swapping

Up until this point, we managed to integrate our repositories and configure slots to achieve Continuous Delivery. The next step is to automate the Swapping process. On the Settings blade, you’ll find the Auto Swap feature. 


 
The Auto Swap process will kick in once the pull from the repository is built and ready. It will first make an HTTP request to the Web App in our staging slot and wait for an HTTP OK (200); if it receives it, it will automatically swap the slot’s contents with the selected destination (most likely production). This HTTP request is made to warm up the Web App and avoid cold starts once the swap takes place.
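Auto Swap corresponds to the autoSwapSlotName property in the staging slot’s site configuration, so it can also be enabled from a script. A hedged sketch using the generic ARM cmdlets of the AzureRM module (resource names are hypothetical):

# Configure the staging slot to auto-swap into production after each successful deployment.
$config = @{ autoSwapSlotName = "production" }

Set-AzureRmResource -ResourceGroupName "MyResourceGroup" `
    -ResourceType "Microsoft.Web/sites/slots/config" `
    -ResourceName "mysite/staging/web" `
    -Properties $config `
    -ApiVersion "2015-08-01" `
    -Force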

Conclusion
Following these steps, you will effectively achieve Continuous Deployment without incurring extra charges on your Web App, creating a failsafe deployment pipeline and letting your development team focus on your core activity.

About the author
Matias is a Microsoft MVP, Azure & Web Engineer, open source contributor and firm believer in the freedom of knowledge.

 

What’s new in App-V version 5.1


Editor’s note: The following post was written by Windows and Devices for IT MVP Ralph Jansen as part of our Technical Tuesday series with support from his technical editor, Windows and Devices for IT MVP Tim Mangan.

App-V 5.1 is the latest and greatest version of App-V and was released with the Microsoft Desktop Optimization Pack (MDOP) 2015 R1.

Microsoft Application Virtualization (App-V) eliminates application installations and abstracts applications from the operating system to prevent application conflicts. Virtual applications appear to the user just like any other application.

It transforms applications into centrally-managed virtual services that do not conflict with other applications. This accelerates application deployment, upgrades, patching and retirement by eliminating time-consuming processes and simplifying the application management lifecycle.

Version 5.1 builds on many of the features introduced in version 5.0 and offers several excellent new features. The following list summarizes the new functionality in 5.1 compared to 5.0 SP3.

Support for Windows 10

Windows 10 is the latest Windows version, released in mid-2015. The App-V client, RDS client and Sequencer are supported on Windows 10 from build 10.0.10041 onwards.

The Management Console User Interface no longer requires Silverlight but is built on HTML5 and JavaScript

Prior to App-V 5.1, the Management GUI was built on Silverlight. In 2013 Microsoft announced that Silverlight would be discontinued and replaced by HTML5. In version 5.1 the Management GUI has been rewritten in HTML5 and JavaScript and has a modernized theme. The advantage is that it’s fully compatible with Internet Explorer, Spartan (now Microsoft Edge), Chrome and Firefox without having to install any additional plug-ins. 
 

Better navigation in the Management Server console
The Management Server console adds several improvements to simplify navigation. Every console page now has its own URL, making it easier to bookmark pages for quick access in the future.
Admin: http://ManagementServer/Admin
Servers: http://ManagementServer/Servers
Connection Groups: http://ManagementServer/ConnectionGroups
Packages: http://ManagementServer/Packages

Every package and Connection Group now has a number so it can easily be bookmarked. For example: http://localhost:8080/Console/Packages/Package/3006
http://localhost:8080/Console/ConnectionGroups/ConnectionGroup/1005

The Connection Groups page is now part of the main menu of the Management Server console, at the same level as the Packages page.

In the previous version, the Connection Groups icon was displayed when clicking on the packages part of the menu. This has been replaced by a separate icon in the main menu.

Optimization of the Management Server console

The new Management GUI has been optimized. Buttons have moved and notifications are displayed next to a flag icon. The console now has a more logical interface for delivering virtual applications and managing Connection Groups, and it changes the way alerts are displayed; it also gives you more control over those alerts and messages, letting you choose which messages you want to see.

Icons in the Notification area may occasionally display messages in a flag icon balloon. Many messages just provide some feedback and don’t require any response from you, as in the example in the next figure. That kind of message generally fades away on its own after a few seconds. But you can also close the message by clicking the Close (X) button in its upper-right corner or the Dismiss All button. 
 

Enhanced export and import functionality within the sequencer

One of the new features of the App-V sequencer is the ability to import and export files or directories to and from the virtual file system when editing a package.

On the Virtual Registry tab it is possible to import a .REG file with registry keys. The new Replace feature makes it easy to replace a value in the virtual registry; previously, the “Replace” dialog box did not exist and you had to make changes manually. 

From the Advanced tab it is now possible to import and export the AppxManifest.xml file. This can be used to make changes such as removing shell extensions or editing file types.

Enable or disable Browser Helper Objects
A Browser Helper Object (BHO) is a plug-in for Microsoft's Internet Explorer that provides added functionality. When a BHO is recognized by the sequencer, it can be disabled from the Advanced tab.

App-V 5.1 supports the use of multiple scripts on a single event trigger for App-V packages
App-V 5.1 supports the use of multiple scripts on a single event trigger for App-V packages, including packages that you are converting from App-V 4.6 to App-V 5.0 or later. This gives you far more flexibility when a package needs more than one action on the same trigger.

To run multiple scripts on a single event trigger, App-V uses a script launcher application named ScriptRunner.exe, which is installed as part of the App-V client. 
 

When using ScriptRunner.exe, make sure that the folder containing ScriptRunner.exe is included in the system’s PATH variable.
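A quick way to check this on a client is to let PowerShell try to resolve the executable through the PATH; this small sketch assumes the default App-V client installation folder:

# Check whether ScriptRunner.exe can be resolved via the PATH variable.
if (Get-Command ScriptRunner.exe -ErrorAction SilentlyContinue) {
    Write-Output "ScriptRunner.exe was found in the PATH."
}
else {
    # The folder below is the default App-V 5.x client location; adjust it if you installed elsewhere.
    Write-Warning "ScriptRunner.exe is not in the PATH; add 'C:\Program Files\Microsoft Application Virtualization\Client' (or your install folder) to the system PATH."
}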

You can now ensure that only administrators can publish App-V packages
App-V 5.1 has a new option, RequirePublishAsAdmin, which ensures that only users with administrative rights can publish applications. This offers granular control over who can publish packages on a system.
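On the client this setting can be turned on with the App-V PowerShell module (run from an elevated session); a minimal sketch:

# The AppvClient module ships with the App-V client.
Import-Module AppvClient

# Allow only administrators to publish App-V packages on this machine.
Set-AppvClientConfiguration -RequirePublishAsAdmin 1

# Verify the current value.
Get-AppvClientConfiguration -Name RequirePublishAsAdmin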

Package conversion improvements
The package converter, which converts packages from the 4.x format to the 5.x format, has been enhanced so it can convert scripts and registry information from the source .OSD files.

You can now add information from a subset of the .OSD files in the source directory to the package using the -OSDsToIncludeInPackage parameter.
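A hedged example of such a conversion, assuming the AppvPkgConverter module that ships with the sequencer; the paths and .OSD file names below are hypothetical, and the exact value format expected by the parameter should be checked against the cmdlet help:

# The package converter module ships with the App-V 5.x sequencer.
Import-Module AppvPkgConverter

# Convert a 4.6 package to the 5.x format and merge only the listed .OSD files
# from the source directory into the new package (hypothetical names).
ConvertFrom-AppvLegacyPackage -SourcePath "\\server\content\MyApp" `
    -DestinationPath "C:\ConvertedPackages\MyApp" `
    -OSDsToIncludeInPackage "MyApp.osd", "MyApp_Admin.osd"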

These are just some of the new and enhanced features in App-V 5.1. Taken together, they make it a marked improvement over previous versions.

About the Author
 
Ralph is a Technical Consultant from the Netherlands. He has been working with Application Virtualization since 2007 and expanded his knowledge in the past years. Since 2011 Ralph shares his knowledge through his own blog ( http://ittechlog.wordpress.com ) where he posts technical articles. In 2012 he was the winner of Microsoft's App-V "Package Accelerators Contest". Ralph spends as much time as possible with the App-V community through Twitter (RalJans) and Technet Forums, where he helps and keeps in touch with fellow Application Virtualization engineers. His contributions to the community led to an MVP award in April 2014. In 2015 his first book came out, Microsoft Application Virtualization 5.1: Learn to design, implement, upgrade and configure App-V 5.1

 

 

Cloud and Data Friday Fives!


Cloud and Datacenter Management MVP Matt Graeber @mattifestation: Investigating Subversive PowerShell Profiles

Cloud and Datacenter Management MVP Symon Perriman @SymonPerriman: How to Build your Business using the Microsoft Partner Network

Cloud and Datacenter Management MVP John Joyner @john_joyner: Manage Linux Computers from Microsoft Azure with OMS Agent for Linux

Cloud and Datacenter Management MVP Jeremy Moskowitz: Wubba heck is WUB (Windows Update for Business)

Cloud and Datacenter Management MVP Deb Shinder @debshinder: Managing Azure VMs with System Center Virtual Machine Manager 2012 R2 (Part 3)

MS FEST 2015 rocked the Czech Republic!


Organized by Microsoft Most Valuable Professionals (MVPs) and Microsoft Student Partners (MSPs) for the ninth time this year, the biggest technical conference in the Czech Republic, MS FEST, took place on October 10-11 in Prague and November 28-29 in Brno. 

Held at two popular universities in the country’s largest cities, the event attracted 850 participants and delivered 97 deep technical sessions as well as three hands-on workshops. Fifty-nine MVPs, MSPs, and members of Microsoft delivered content across 19 tracks. The event, which featured Microsoft devices, products and technologies, as well as online MVA content, was well received by the students.

In the spring of 2016, this conference will be held in the same format in Bratislava, Slovakia as a joint event of Czech and Slovak MVPs and MSPs.

All sessions were recorded and most of the videos are already available on Channel9.

 

 

 

 

MVP Global Summit Friday Fives!
