

    Editor’s note: The following post was written by Windows Platform Development MVP Sam Basu.

    If you have been following the Microsoft developer space of late, you may have thought to yourself, “It’s a new Microsoft.” Nowhere was that more evident than at the recent //BUILD developer conference. Change is in the air for Microsoft developers, and it seems all for the better.

    Sure, you love the richness of Visual Studio as an IDE and the comforts of managed .NET code. But just to get started, you have to download and install Visual Studio (several gigabytes) and set up your developer environment, all before writing even a “Hello World” application. Compare that to the experience of a college kid who may start programming with just a lightweight text editor on a Mac. It’s time to lower the barrier to entry and invite everybody else into our beautiful garden of .NET.

    Enter Visual Studio Code – a free, lean, cross-platform version of your beloved Visual Studio. With a lightweight installation, you can enjoy most of the full-featured benefits of Visual Studio on a Windows, Mac or Linux machine. This article highlights 10 compelling features of VS Code, which aims to be your one-stop rich code editor on any developer platform.

    1. @Code Handle
    One of the most awesome things about VS Code is not actually a feature of the editor itself. VS Code was launched during //BUILD 2015 with an accompanying @code Twitter handle – how cool is that? Let us leave it to your imagination on how Microsoft pulled off securing that awesome social brand.

    2. Huge Language Support
    Not only is VS Code available cross-platform, it aims to be your one-stop code editor with support for 30+ programming languages out of the box. Which ones, you ask? You get to write C#, VB, JavaScript, HTML, CSS, TypeScript, Ruby, Objective-C, PHP, JSON, Less, Sass and Markdown, just to name a few. You get built-in syntax highlighting and bracket matching in your code, as well as easy code navigation. VS Code aims to be a first-class IDE of choice for Node.js, ASP.NET and TypeScript development, aimed at modern web/cloud applications.

    As for ASP.NET 5 development, VS Code is already a rich editor with most everyday features you expect built in. Simply use the available Yeoman generators to scaffold your ASP.NET 5 web application, and point VS Code to the root folder. You’ll find a feature-rich IDE (sample project screenshot below) with support for OmniSharp-powered code IntelliSense, errors, commands, deployment and integrated version control. All of this is designed to keep you in VS Code all day long for your modern, lean, cross-platform ASP.NET 5 web development needs.

    3. Side-by-Side Editing
    VS Code supports one of the most sought-after developer requests – editing code side by side. Simply Command (⌘) / Control click on a file from your project explorer and, voila, multiple files open up side by side, pre-docked and ready for edits. VS Code can support up to three simultaneous file edits, each of which can be launched from the command prompt as well. The screenshot below shows two files from an ASP.NET project being open for edits side by side – life is good.

    4. Always-On IntelliSense
    You wouldn’t want to be writing .NET code without the comforts of Visual Studio IntelliSense, right? VS Code knows this and tries to please with out-of-the-box IntelliSense. Sure you get friendly prompts on language features, but also smart IntelliSense that is local context aware in your custom code, as demonstrated below:

    5. Command Palette
    One of the powerful utilities of VS Code that you may want to keep handy is the Command Palette – simply hit Command/Control+Shift+P (⇧⌘P) to bring it up. Think of it as PowerShell inside your IDE, ready to serve up commonly used tasks with a shortcut, as the below screenshot demonstrates.

    Furthermore, if your development project supports special command-line tooling, the VS Code Command Palette is smart enough to understand it and offer help. For example, in my ASP.NET 5 web application, the project.json file lists out some specific DNX (.NET Execution Environment) commands, as shown below. When you start typing any of these recognizable commands, the Command Palette provides IntelliSense help and even fires up the Command Prompt to execute tasks, as demonstrated.
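    For reference, here is a rough sketch of what the commands section of an ASP.NET 5 project.json might look like; the command and package names are illustrative and depend on the template you scaffold, so treat this as an assumption rather than a prescription.

    {
        "commands": {
            "web": "Microsoft.AspNet.Hosting --server Microsoft.AspNet.Server.WebListener --server.urls http://localhost:5000",
            "kestrel": "Microsoft.AspNet.Hosting --server Kestrel"
        }
    }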

    6. Function Parameters
    One of the nifty IntelliSense features of VS Code is helping out with typed function parameters during corresponding invocation, thus allowing easy code completion. This obviously works with programming language/system functions, but is also smart enough to offer help on custom functions that you have in your code. As the screenshot below demonstrates, function invocation offers up typed parameters, if VS Code sees the function definition elsewhere in the project. Pretty slick.

    7. Peek Information
    As developers, you may often want to peek at a different file or check a function definition. But a full-on context switch can be expensive, since it takes you away from what you were about to type. Enter Peek help from VS Code. Simply Right Click or (⌥F12) – and Peek Definition is here to help, as shown below. The results of the peek are embedded inline and the inline peek information goes away on hitting Escape, saving you the big context switch.

    8. Markdown Preview
    Most developers have their favorite Markdown editor – quickly type up Markdown and see the HTML preview. Well, VS Code has the intention to take over your Markdown edits, and has the goods to back the claim. You get solid Markdown edit support and when ready, simply fire up ⇧⌘V to see the preview, as shown below! There are also buttons to help you toggle between edit/preview, as well as putting the preview window side by side to see your changes in action.

    9. Debugging
    Developers are in an edit/compile/debug loop as they are writing code all day, and any modern IDE is incomplete without solid debugging support. VS Code already has solid support for Node.js development and aims to have a first-class debugging experience for ASP.NET 5 development, which is coming soon. To debug your applications in VS Code, you would first have to set up your launch configuration file – this documentation shows you how.
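    As a rough sketch only (assuming a Node.js app and the early VS Code launch.json schema, whose property names may differ in later releases), the launch configuration might look like this:

    {
        "version": "0.1.0",
        "configurations": [
            {
                "name": "Launch server.js",
                "type": "node",
                "program": "server.js",
                "stopOnEntry": false
            },
            {
                "name": "Attach to process",
                "type": "node",
                "address": "localhost",
                "port": 5858
            }
        ]
    }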

    Once set, you can switch to the debug mode on the Viewbar (as shown below) and you are able to either launch your app from VS Code or attach to a running app. You can set breakpoints, look into call stack or variables at run time, and pause or step through code execution – in all, a complete debugging experience to keep you in VS Code all day.

    10. Integrated Version Control
    It would be painful if developers had to step outside their code editor to accomplish version control tasks – so VS Code has Git integration built in. VS Code works with any Git repo – local or remote – and offers visual cues to resolve conflicts before code commits. Simply start up the version control from the Viewbar on the left, point to your GitHub repo and off you go. VS Code will start tracking files for changes and offer stage/unstage/commit actions for your code, as shown below. You get a complete version control experience without ever having to leave VS Code.

    Conclusion
    Let’s not be resistant to change – you’re not losing your rich Visual Studio development environment – but what VS Code offers is a breath of fresh air. It is a lightweight yet full-featured code editor that is available on any platform – just how cool is it to have Visual Studio on OS X and Linux! VS Code is just getting started – expect the ecosystem to grow by leaps and bounds, aiming to be the only code editor you need for most types of development.

    About the author
    Sam Basu is a technologist, Apress/Pluralsight author, speaker, believer in software craftsmanship, gadget-lover and Developer Advocate for Telerik. With a long developer background, he now spends much of his time advocating modern web/mobile/cloud development platforms on Microsoft/Telerik stacks. He passionately helps run The Windows Developer User Group, labors in M3 Conf, serves as INETA Secretary, and can be found with at least a couple of hobbyist projects at any time. His spare time calls for travel and culinary adventures with his wife. Find out more at http://samidipbasu.com. Follow him on Twitter @samidip.



    Editor’s note: The following post was written by Windows Platform Development MVP Guillaume Leborgne

    What is an App Service ?
    Windows 10 introduced a bunch of new ways for applications to communicate with each other. One way is to implement "App Services," a request/response model in which one app can call a service located within another app. App Services enable communication between apps, but also with the system: if you want to implement interactive scenarios with Cortana, you will have to implement an App Service to provide data to it.

    If you're the kind of person who prefers code to blah blah, you may head directly to the GitHub repository with sample applications.

    App Services use the same infrastructure as BackgroundTasks, and most of the logic and implementation details still apply. It means that when your service is called, you don't have the whole application running, but only your service. It also means that your application doesn’t communicate directly with your App Service. For example, your application does not get notified when your service is called or terminated.

    In all the resources I could find (Build sessions, Channel 9 videos, samples, etc.), App Services are implemented in C#. Those resources are really helpful (especially this one on Channel 9), but if (like me) you are writing apps in HTML and JavaScript, it is likely that you would prefer writing those services in JavaScript and sharing business code with the rest of your application. Porting the C# resources to JavaScript is actually very easy. In this post, we will dive into implementing an App Service in JavaScript, based on a C# sample from Microsoft Virtual Academy.

    Show me some code!
    In a Windows Web Application (a Windows application written in HTML and JavaScript), a background task, and therefore an App Service, should be thought of as a special Web Worker (no postMessage with it, unfortunately). It's a standalone JavaScript file that gets loaded and run independently by the system.

    The first step in implementing your App Service is to create this file. As with web workers, you can use "importScripts" to reference any code you want to share between your app and the service. Be aware that, as with Web Workers, there are no "window" or "window.document" objects inside your background task or app service. The global context points to a completely different beast, and there is no DOM.
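    As a small sketch of what that sharing looks like (the file path and the calculateSum function below are hypothetical stand-ins for your own shared code), the top of the service file could be:

    // appservice.js
    // importScripts works here just as it does in a Web Worker
    importScripts("/js/shared/calculator.js");

    // anything the imported script defines is now available in the service's global context
    var sum = calculateSum(8, 42);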

    Inside your task or service, you will access the current instance of the background task object using WinRT APIs, and get a deferral on it to control the lifecycle of your service. As with background tasks, your service can also be canceled by the system if it needs to recover memory, or if the battery is running low on the device. Your task instance provides a "canceled" event that will fire in such cases.

    A minimalistic background task/app service would look like this:
    var backgroundTaskInstance = Windows.UI.WebUI.WebUIBackgroundTaskInstance.current;
    var triggerDetails = backgroundTaskInstance.triggerDetails;
    var bgtaskDeferral = backgroundTaskInstance.getDeferral();

    function endBgTask() {
        backgroundTaskInstance.succeeded = true;
        bgtaskDeferral.complete();
        close();
    }

    backgroundTaskInstance.addEventListener("canceled", function onCanceled(cancelEventArg) {
        return endBgTask();
    });

    Now we must declare this file as an App Service. For that, we must add an extension to our application in its manifest, pointing to our JavaScript.

    <Applications>
        <Application Id="App" StartPage="default.html">
            ...
            <Extensions>
                <uap:Extension Category="windows.appService" StartPage="js/appservice.js">
                    <uap:AppService Name="MyJavascriptAppService"/>
                </uap:Extension>
            </Extensions>
        </Application>
    </Applications>

    As you can see, we provide the path to our JavaScript file, and we are giving a name (MyJavascriptAppService) to the App Service.

    Now we must implement the service logic. To do that, we will check the trigger details on our background task, and register for a request event. When the event gets activated, we receive an App Service request. This request will contain a message (with request arguments), and a sendResponseAsync method to reply to the caller. On both sides, the values in the request and in the response are provided with a ValueSet object.

    //check that the app service called is the one defined in the manifest. You can host
    //multiple AppServices with the same JavaScript files by adding multiple entries in the application manifest
    if (triggerDetails && triggerDetails.name == 'MyJavascriptAppService') {
        triggerDetails.appServiceConnection.onrequestreceived = function (args) {
            if (args.detail && args.detail.length) {
                var appservicecall = args.detail[0];
                //request arguments are available in appservicecall.request.message
                var returnMessage = new Windows.Foundation.Collections.ValueSet();
                returnMessage.insert("Result", 42);
                appservicecall.request.sendResponseAsync(returnMessage);
            }
        }       
    }

    Calling your App Service
    The app calling your service can be any app; if you want to restrict access, you will have to implement your own security mechanism. As you may have gathered, the caller and the callee don't have to be written in the same language: you can call a service written in C++ from a JavaScript app. All data passes through Windows APIs.

    Calling the app service requires some arguments: the caller should provide the package family name (PFN) of the target application, and the name of the App Service (as declared in the target app's manifest). If you don't know your PFN, you can get it through WinRT APIs by calling "Windows.ApplicationModel.Package.current.id.familyName" in your service application.

    Using the PFN and service name, you first open a connection to your App Service, and register for the "serviceclosed" event to be notified if your service terminates.

    function getServiceConnection(){
        var connection = new Windows.ApplicationModel.AppService.AppServiceConnection();
        connection.appServiceName = "MyJavascriptAppService";
        connection.packageFamilyName = "...your PFN here...";

        return connection.openAsync().then(function(connectionStatus){
            if (connectionStatus == Windows.ApplicationModel.AppService.AppServiceConnectionStatus.success) {
                connection.onserviceclosed = serviceClosed;
                return connection;
            }
            else {
                return WinJS.Promise.wrapError({ message: 'service not available' });
            }
        });
    }

    Once you get a valid connection, you will be able to send requests to the service:
    function callService(){
        return getServiceConnection().then(function (connection) {
            var message = new Windows.Foundation.Collections.ValueSet();
            message.insert("Command", "CalcSum");
            message.insert("Value1", 8);
            message.insert("Value2", 42);

            return connection.sendMessageAsync(message).then(function (response) {
                if (response.status === Windows.ApplicationModel.AppService.AppServiceResponseStatus.success) {
                    document.getElementById('result').innerHTML = 'calculated ' + response.message.Result;
                    
                    return response.message;
                }
            });
        });
    }

    And voilà! You're ready to go. A picture is worth a thousand words, so I put a sample with service and caller apps on GitHub for you.

    Debugging your service
    If you grab the sample, you can see how easy it is to debug your service. If you configure the solution to run both caller and callee on debug, you can set breakpoints in your app service. If you don't want to run the full service app, you can also edit the properties of the project hosting the service: in the debugging section, set "Launch Application" to false. In that case, when you run debug for both apps, you will only see the caller application starting, but your breakpoints in the app service will get hit appropriately.

     

    About the author

    Guillaume Leborgne focuses on Windows client development, HTML5 and ASP.Net, and is an active speaker, webcaster, blogger and mentor.



    Created by 18 MVPs and other community members, C.E.U.S by Iberian SharePoint Conference staged its largest technical event last week in Madrid, Spain. Designed to bring together leading experts in technology and productivity platforms (including SharePoint, Skype for Business, Exchange, Office 365 and Microsoft Azure) from the Iberian Peninsula and Latin America, the conference offered Microsoft’s latest vision of Microsoft productivity tools, as well as the cloud and devices. Participants gained the opportunity to connect with real-world experts in technologies such as Office 365, Microsoft Azure and SharePoint. The event organizers managed everything from preparation logistics to the delivery of business and technical content.

    The first day of the event showcased two business-oriented keynotes and six workshops on cloud applications, lifecycle management, cloud IT, Power BI, and business productivity. On the second day, MVPs from Spain, Belgium, France, Italy, the Netherlands, Portugal, Colombia, Uruguay and Costa Rica, as well as other community members, took the stage to deliver 48 technical sessions.


    This year, for the first time, the conference offered an ongoing Expert's Roundtable where MVPs were available to the local audience for Q&A sessions throughout the second day, which attracted many attendees. The conference also became a showroom for local partners: 35 local sponsors were present at the conference and organized the goody quiz that raised a lot of interest among the attendees.



    Editor’s note: The following post was written by Windows Platform Development MVP Oren Novotny
    [Disclaimer: Many of the things I talk about here may not work in the RC of Visual Studio 2015. The information is taken from Microsoft’s public repos on GitHub and from conversations with members of the .NET team. The information herein is accurate at the time of writing but as with everything pre-release, things may change!]


    Intro
    A few days ago, I posted an article trying to explain my current understanding of how the new .NET Core libraries fit into the existing ecosystem. Since then, I’ve had more conversations with a few people on the .NET Team (many thanks to David Kean and Eric St. John!) that clarify the meaning of the dotnet target framework and how the pieces all fit together. This blog will attempt to explain further.


    TL;DR
    dotnet is not a specific target framework—it means “I’m compatible with any target framework that my dependencies are compatible with.” Read on for more.


    Let’s start at the very beginning (a very good place to start!)
    To help explain where things are going, it helps to have some background for context. Before we had any such thing as Portable Class Libraries (PCLs), if we wanted to use a library on multiple frameworks, we had to compile it multiple times. The figure below illustrates the state of the world circa 2010.
     
    Figure 1: Before PCLs

    The only real strategy for code sharing was to use linked files and many #ifdefs, as there were wide differences in capabilities between the frameworks. A solution would contain multiple projects, one per target framework. Each project would contain platform-specific references and would generate a binary compatible only with its target platform. This situation was not scalable as future frameworks and platforms would only lead to even more file linking.


    The birth of PCLs
    In early 2011, Microsoft released the first version of Portable Class Libraries as a toolset for Visual Studio 2010. These tools allowed creation of a single binary targeting the .NET Framework, Silverlight, Windows Phone 7 and Xbox 360. They accomplished this by finding the lowest common denominator of functionality shared among the target frameworks. The available functionality changed to match your selection:
     
    Figure 2: The original PCL target frameworks

    From this early start, the tools grew over time. Visual Studio 2012 included support for PCLs without the need for an add-in. The list of target frameworks and versions increased; now you could choose .NET Framework 4 or 4.5. You could choose Silverlight 4 or Silverlight 5. Windows Phone gained options for 7.5, 8 and 8.1. We saw support added for additional platforms like Windows 8 and 8.1 Store applications. In 2013, Windows Phone App 8.1 made its first appearance. In early 2014 Xamarin added support for Portable Class Libraries, providing additional target frameworks for their iOS and Android platforms.


    Making the sausage
    They say that if you enjoy eating sausage, you should never see how it’s made. I personally don’t find ignorance to be bliss and strive to understand how things are made. The same could be said for PCLs—don’t look under the covers unless you’re prepared for what you may see! As one might imagine, there’s quite a bit going on to enable PCLs. In the current system, there are really two main components: contract assemblies and profiles.

    Contract Assemblies
    Contract assemblies are a special kind of assembly that contains types/metadata but no actual implementation. Think of this as a compile-time reference. A library can reference one or more contract assemblies and the compiler will use the type information in the file. At runtime, when a type is requested from the contract assembly, the loader sees either a TypeForwarder pointing to a concrete implementation or assembly metadata indicating redirection is allowed for the library. The indirection enables types to live in different assemblies in the implementation (think Silverlight vs .NET) but be referenced from a single dll. It also enables the runtime to substitute one type for another even if the assembly versions don’t match.
    The best way to think of a contract assembly is like a promise that a specified surface area is present. Your library can reference that assembly and then it’ll run on any target framework that implements that contract. Not all target frameworks support all versions of a particular contract. When working with a least-common-denominator based system, like PCLs, you’ll see fewer types available when you check more/older target frameworks. What Microsoft has done is pre-generate all of the permutations of those checkboxes so that you have a contract assembly for each possible option.


    Profiles
    That leads us squarely into PCL profiles. These are the things like Profile259 or Profile78 that people most associate with PCLs. In order to support every permutation of target frameworks that you, as a library author, might choose, Microsoft pre-computed over fifty profiles. The profiles are collections of contract assemblies that represent the intersections of the public surface area from the targets. What people really mean by saying Profile259 is that they’re targeting .NET 4.5, Windows 8, Windows Phone 8 Silverlight and Windows Phone 8.1. The number is just a shorthand for spelling out each target framework. It was never really the intent for the profiles to be what people talked about; it was always supposed to be about the target platforms.
    What each profile represents, then, is a set of contract assemblies supported by a set of target frameworks. The profiles, in sum, represent every combination of possible contract assemblies. Taken one step further, what ultimately matters to a library isn’t the target framework; rather, what matters to a library are the contracts available to it through the selected set of target frameworks. The profile itself is just a transitive way to get that set of contracts.


    Enter the NuGet
    It’s not possible to have a complete discussion about PCLs without mentioning NuGet. In parallel to the rise of PCLs, community support was growing around using NuGet (and its package format by extension) as the de facto way of distributing library components. One of NuGet’s key features is the ability to support multiple target platform versions within a single package. NuGet accomplishes this by using Target Framework Monikers (TFMs) that represent each platform. For example, net means .NET Framework, wp is Windows Phone and netcore is Windows Store. NuGet adds a version number to the TFM so that we get the common usage: net45, wp8, netcore451, which translate to .NET 4.5, Windows Phone 8 and .NET Core 4.5.1 (Windows 8.1) respectively. PCLs are supported in NuGet by using the portable TFM combined with the set of supported TFMs that the library targets. Using our earlier example of PCL Profile259, that would be portable-net45+netcore45+wpa81+wp8 inside a NuGet package.
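    To make that concrete, NuGet encodes these TFMs as folder names under lib in the package. A hypothetical MyLibrary package shipping binaries for those targets (illustrative layout only) might contain:

    lib/net45/MyLibrary.dll
    lib/wp8/MyLibrary.dll
    lib/portable-net45+netcore45+wpa81+wp8/MyLibrary.dll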


    The breaking point
    There are two breaking points in this system: 1) library authors need to update their NuGet packages to specify compatible targets, and 2) using pre-computed contracts for PCLs is not scalable. This summer, two new runtimes, CoreCLR and .NET Native, are being introduced; the desktop .NET Framework has a new 4.6 version coming out too. At the same time, a new application platform, the .NET Execution Environment (DNX), on which ASP.NET 5 is based, and a new version of the Windows “modern” platform, the Universal Windows Platform (UWP), are set to appear. It was time for a change. Adding support for UWP and DNX in combination with CoreCLR, desktop .NET and .NET Native would be untenable with pre-computed contracts. Further, with .NET Core becoming open source and moving to GitHub, .NET 4.6, CoreCLR and .NET Native would support an application-local Base Class Library (BCL). The surface area available to those newer platforms was poised to explode.
    To make the issue concrete, let’s look at an example. Most people are likely familiar with the Newtonsoft.Json NuGet package for working with JSON data. The library, Json.NET, aims to support every .NET platform available. In addition to compiling the code many different times with #ifdefs to accommodate older platforms, as new platforms appear, the Json.NET author needs to update the NuGet package too. That means that as new platforms like UWP and DNX appear, despite targeting a set of contract libraries (remember, all libraries really reference contracts, not platforms), the author needs to keep updating packages to add each new platform to the supported platform list.
    What we’re experiencing here is an impedance mismatch between what the library cares about and what NuGet supports. The mismatch highlights, as fundamentally broken, a model that puts the onus on each library author to keep up-to-date with the available platforms and contract-to-platform support matrix. Libraries that would otherwise work on a target platform may not be understood as compatible by NuGet. While it is true that NuGet has a set of heuristics to accommodate additional platforms, the heuristics are also not scalable as they’re hard-coded into each NuGet client version.


    Fixing the impedance mismatch: dotnet to the rescue
    Over the past year, as “One Microsoft” has taken hold, you started to see the NuGet and .NET CLR teams work much more closely together. Based on community feedback, NuGet was chosen as the de facto mechanism to deliver future versions of .NET that can run as self-contained app-local packages. In order to support the ever-increasing complexity placed upon it, NuGet had to evolve. You can read more about NuGet’s evolution to 3.0 on the NuGet team blog in posts from April 2014 to November 2014.
    One of the most recent changes to NuGet, and the .NET ecosystem by extension, is support for the dotnet TFM. The meaning of dotnet wasn’t clear at first and as reflected in my earlier blog post, it seemed like it was the new target for the “new” portable .NET packages being published to NuGet and consumed by DNX and UWP. The reality isn’t quite like that but is far more interesting. Rather than dotnet representing a particular target like netcore45, dnxcore5 or net46, it really means “I’m compatible with any targets that my dependencies are, check those.” It gets NuGet out of the platform guessing game and instead walks the dependency graph.
    Practically speaking, the most common set of dependencies for any package will be its contracts – the assemblies referenced at build time. Today, with the platform-TFMs, those contracts don’t need to be listed in the NuGet package as they’re implied by the TFM. With the dotnet-based TFM, NuGet packages will have to specify their dependencies, even system ones. You can see this today with the project.json file that DNX projects use. By explicitly listing the dependencies (which may be CLR contracts), the mismatch between target framework and supported contracts is removed. Instead, each contract package declares its own support by way of its implementation.
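    As a rough illustration of that explicit dependency listing, a DNX-style project.json might look something like the sketch below; the package names come from the discussion in this post, but the version numbers are purely illustrative:

    {
        "dependencies": {
            "System.Runtime": "4.0.20-beta-*",
            "System.IO.FileSystem": "4.0.0-beta-*"
        },
        "frameworks": {
            "dnxcore50": { }
        }
    }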
    The way a contract package declares its platform support is beyond the scope of this post, but you can get a sense of it by looking at the layout of the System.IO.FileSystem package below:
     
    Figure 3: System.IO.FileSystem

    In the package, you can see two assemblies in the ref folder, called design-time façades, one for .NET 4.6 and one for everything else (CoreCLR, .NET Native, etc.). The surface area is identical, but they function a bit differently. The façades are used at build time to enable portable assemblies that were built against contracts (System.Runtime-based) to actually resolve those types against the desktop reference assemblies (mscorlib-based). This lets an mscorlib assembly pass its version of string, which lives in mscorlib, to an API in a PCL that takes a string from System.Runtime. The same façades are used at runtime as well. This is something that should usually be considered trivia, as most people need not concern themselves with the minutiae.
    The package contains three implementations of the contract, one for dnxcore50, one for net46 and one for netcore50 (UWP). When I said earlier that the new .NET Core packages would only support the newer platforms, this is the how/what/why. One last thing to note in the above picture, you can see that System.IO.FileSystem itself declares many other dependencies. This is expected; with small, granular, libraries the end result is that you pull in only what you need, not the whole framework.
    None of this is to say that dotnet explicitly means the newer platforms, though. Microsoft may release the existing contract assemblies, the ones currently in the Profile* directories, as NuGet packages. If they do that, then a library that “targets” dotnet could target .NET 4.5/Win8 as well. The key is that the version number of each such dependency would be lower than the new ones. The new .NET Core libraries, and their contracts, would all have a higher version number than the existing contracts.
    This drives home the point that what dotnet really means is “check my dependencies and I’ll run on any platform my dependencies do.”
    The fact that the new .NET Core libraries use this mechanism is actually orthogonal to dotnet’s meaning. dotnet adds its value today with existing code and libraries by changing the question of “what platforms does my library support” to “what dependencies does my library require?”
    Coming back to the earlier example of Json.NET, if it were to use dotnet, it would also declare the contracts, with its version, that it needs. It would not have to know or care about what platforms are currently supported by those contracts. In the future, if some new unicorn platform were to appear, so long as newer versions of the contracts were published that supported the unicorn platform, Json.NET would happily run there without any foreknowledge.


    Contracts or Dependencies?
    Throughout this discussion, I’ve used the terms contracts and dependencies. From the perspective of a library author or consumer, these terms are often used interchangeably, but there is a difference. Contracts are one type of dependency – they are specifically crafted reference assemblies. Contracts are useful if you need to have multiple implementations of a library for different platforms. Aside from the built-in system reference assemblies, the other place you see contracts is in libraries that use the “bait and switch” PCL technique. The vast majority of libraries can be implemented without any platform-specific references and are thus simply dependencies. If this sounds confusing, don’t worry too much about it. This is an advanced technique that most packages don’t need to consider; the only takeaway is that whether contract or “regular” library, they both appear as dependencies in a package.


    Wrapping it all up
    At first glance, it’s easy to think “whoa, this is complicated!” Upon stepping back, though, hopefully the initial complexity melts away with the newfound understanding that what’s happening here is that a layer is being removed. That layer was the platform. Up until NuGet v3 we were trying to cram a round peg into a square hole. We’d gather up an intersection of target frameworks and call it a profile. We’d calculate the contract assemblies for those and the compiler would reference them, but they stayed firmly in the background. Visual Studio intentionally hides the references behind a single .NET entry in a PCL project’s references. This led to the platform support list being encoded within the NuGet package structure, leaving package authors scrambling to update their packages should a new platform emerge. In many cases the existing code was already compatible, but a package update was still required. NuGet v3 eliminates this problem by removing the platform layer and going “direct to the dependencies.” This is an opt-in approach for packages that use the new dotnet TFM. Packages can contain both dotnet and the existing TFMs; they are not mutually exclusive.
    The new version of .NET Core is dependent on these dependency-driven, framework agnostic packages, but the existing PCL profiles could fit into the model too. That said, dotnet doesn’t mean .NET Core any more than it means any other platform. They’re different things.

    About the author

    Oren Novotny is an architect at BlueMetal and passionate about reducing friction in the development process and enabling other developers to create portable cross-platform applications using .NET. Follow him @onovotny.   



    Windows 10 is right around the corner, and the MVP community has been out in force helping developers, IT pros, and the next generation of game developers get ready. Here are highlights from some great recent global initiatives:
    • Windows 10 Developer Readiness – Powered by MVPs! This series of live webcasts delivered first-hand guidance on how to leverage the new Windows 10 development model. Thousands of developers in nearly 30 countries tuned in to learn the foundations of Universal App Development in Windows 10 from local MVPs presenting in their own languages. If you—or someone in your community— missed a session, you can find them all on Channel 9. Just search for “Windows 10 Readiness” and look for your country.
    • Microsoft has just wrapped up its BUILD Tour, which brought highlights from Build 2015, insights into the Windows 10 developer platform, and coding sessions for app and web developers to 26 cities around the globe. Well over 300 MVPs participated, providing technical expertise at every event of the tour. SharePoint Server MVP Doug Hemminger helped staff the “Ask the Experts” table at the Chicago event and reported, “It was a fantastic event with outstanding content and speakers. During breaks and over lunch we had a lot of attendees stop by and ask questions and strike up conversations. For example, I had an in-depth conversation with one person who was trying to evaluate and understand the various app development options for SharePoint on-premises and online. Another person I talked to wanted to understand ways he could better leverage Azure for his custom app. I think I got just as much out of the whole event as the attendees did.”
    • The final hackathons for the Microsoft Imagine Windows 10 Game Jam happen this week, as the next generation of game developers comes together in cities around the world to learn how the Windows 10 Universal App Platform unifies gaming across PCs, phones, and Xbox One. MVPs hosted 17 Game Jam events—nearly 40% of the total—helping students kick off their game projects and connect with a great gaming community. In Seville, Windows Platform Development MVPs Javier Suárez and Josué Yeray Julián served as speakers and mentors, delivering technical sessions on the first day and supporting the 38 attendees during the hackathon that stretched over the next two days. You can see demos on GitHub.

    If you participated in one of these events, let us know what you thought!



    Editor’s note: The following post was written by Microsoft Azure MVP Frank Boucher as part of our Technical Tuesday series. 

    Recently, my team and I needed to upgrade a web solution from Azure SDK 2.4 to 2.5.1. The upgrade was much longer and more complex than expected, so I decided to share what we learned in the hope that others can benefit from our experience.

    The problem
    The upgrade of the code and libraries was not really a problem. The documentation is available on MSDN and is easy to follow. A few breaking changes were part of version 2.5, and one of them is the reason for this post: diagnostics configuration must be applied separately after deployment. Once again, the documentation on MSDN is very helpful; once the service is deployed, we needed to execute a PowerShell script:
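    The script below is a minimal sketch along the lines of the MSDN guidance, assuming a classic cloud service and the Azure PowerShell cmdlets of that era; the storage account, key, paths and names are placeholders to adapt to your own deployment.

    # Build a storage context for the diagnostics storage account (placeholder name and key)
    $storageContext = New-AzureStorageContext -StorageAccountName "mydiagstorage" -StorageAccountKey "<storage key>"

    # Apply the diagnostics configuration to the role after deployment
    Set-AzureServiceDiagnosticsExtension -ServiceName "mycloudservice" -Slot "Production" `
        -Role "FakeVeryLongNameViewProcessorWorkerRole" `
        -DiagnosticsConfigurationPath "C:\deploy\WadCfg.xml" `
        -StorageContext $storageContext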

    However, when we executed the script, we got an error for one of the roles:
    Set-AzureServiceDiagnosticsExtension : BadRequest: The extension ID FakeVeryLongNameViewProcessorWorkerRole-PaaSDiagnostics-Production-Ext-0 is invalid.
    The problem is fairly simple: the generated extension ID is too long. This bug is well known and is tracked in the azure-powershell GitHub project. The solution could be as simple as renaming the role, but that was not a good option in our context.

    The solution
    We decided to call the Azure REST API directly. This is where things were a bit less obvious. The documentation was there, but less accurate.

    The solution was in fact simple. First, we needed to add the extension by calling the POST method; it's in this call that you pass the WadCfg.xml file and can specify a shorter ExtensionId. Then, invoking another POST method, we update the deployment configuration.
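    A skeletal sketch of those two calls is shown below. The endpoint paths follow the Service Management REST API reference, but the certificate, subscription ID, API version header and XML payloads are assumptions you would need to adapt; this is an outline, not a drop-in script.

    # Service Management base URI for the cloud service (placeholder values)
    $baseUri = "https://management.core.windows.net/$subscriptionId/services/hostedservices/$serviceName"
    $headers = @{ "x-ms-version" = "2014-06-01" }

    # 1) Add the diagnostics extension; the XML body carries the WadCfg.xml content and a short ExtensionId
    Invoke-RestMethod -Method Post -Uri "$baseUri/extensions" -Certificate $managementCert `
        -Headers $headers -ContentType "application/xml" -Body $addExtensionXml

    # 2) Upgrade the deployment configuration so the roles pick up the new extension
    Invoke-RestMethod -Method Post -Uri "$baseUri/deploymentslots/Production/?comp=upgrade" -Certificate $managementCert `
        -Headers $headers -ContentType "application/xml" -Body $upgradeDeploymentXml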

    A little extra
    Our solution contains many roles, and looping through all of them one by one was taking too long for the time frame we had when deploying to a production environment. We refactored our script to add all the extensions first and then call the Upgrade Deployment operation only once. This way, the execution was three times faster.

    References
    • Azure SDK for .NET 2.5 Release Notes
    • Azure SDK for .NET 2.5.1 Release Notes
    • Github - azure-powershell - issues #387
    • Service Management REST API Reference

     

    About the author

    Frank Boucher leads a dedicated team of developers in the advancement of technology in the mobile, air, and telecommunication industries at Lixar IT, an Ottawa-based software company. Frank is a sought-after speaker, author, and trusted collaborator on Microsoft Azure. Follow him @fboucheros.



    MVPs in the Americas have been very busy creating and sharing how-to content on Microsoft’s leading learning channels, Microsoft Virtual Academy (MVA) and Channel 9!

    Microsoft Virtual Academy provides free, online training to help developers, IT and data professionals, and students learn the latest technology, build their skills, and advance their careers. Channel 9 also provides lots of great free content for developers, IT pros, professionals and students but is heavily focused on content that is meant to drive community engagement, bringing users into an ongoing conversation.  Recently Channel 9 reached out to MVPs to share content and they have responded in force!  Contributing MVP Trevor Sullivan shared his thoughts on the collaboration.

    "Channel 9 has a long history of providing high quality video content surrounding the Microsoft enterprise platform. By extending the ability for Microsoft MVP awardees to publish videos directly to the service, the amount of informational and training materials offered by Channel 9 will dramatically increase. Anyone who is interested in learning about software development, Microsoft Azure cloud services, and a variety of other topics, will have an even greater confidence in the resources that Channel 9 offers."

    MVPs in the Americas have created content in English, Spanish and Portuguese around multiple topics, including cloud, gaming, IoT, app development, productivity and a lot more. Here’s a snapshot of just some of the content that’s been created recently.

    If you’re an MVP and would like to know how you can contribute, reach out to your Community Program Manager (MVP Lead) and they’ll help you get started. You can also view a ton of great content created and delivered by MVPs for three different MVP Award Program hosted events that were uploaded to Channel 9!  Here they are:



    Editor’s note: The following post was written by Windows Consumer Apps MVP Alberto José Escalona Ustáriz as part of our Technical Tuesday series.

    It is not hard to get used to the user interface (UI) for Skype for Business instant messaging (IM) and call functions, as it is surprisingly similar to the Skype for consumers UI.


    Instant Messaging
    If you want to use IM with any contact, you just hover the cursor over the avatar of the contact you want to write to, and multiple actions are automatically shown in the following order: IM, Call, Video Call, Contact Info and more.
     

    If you prefer a classic method, you can right click on your mouse and a menu with all contact actions will be displayed:


     

    The messaging UI is similar to Messenger, our old friend. However, it’s influenced by Modern UI and Skype patterns. 


     

    As you can see at the bottom of the window, the three big buttons represent the main actions in a conversation: the first one is for video calling; the second is for calling; and the third is for presentation options, like sharing files, PowerPoint slideshows, your desktop or certain applications open at the same time. At the top of the window, beside the contact name, you can see the Add participant icon. You can even use the Skype emojis. At this point, it is almost the same as the Skype for consumers experience.


     
    At the right side of the text box, you can see some discrete icons for attaching files, setting the message priority, using emojis and sending the message itself. 


     

    If you want to attach some files, you could attach a document or a picture; there are no restrictions for sharing it. 


     

    Calling and Video Calls
    In order to make a Skype call or video call, you should check the icons at the bottom of the IM window: 


     
    The first icon, video call, is used to check the video devices before starting a video call. For this article, I decided to make a call:

     
    With the presentation icon (screen), you can share your desktop or certain applications with your contacts to show them what you are doing there. You can also share PowerPoint slideshows, OneNote files, etc.


     

    When you select the share desktop option, Skype for Business will show your contact exactly what is displayed on your desktop. When you are ready to share, you click on the moderate button:


     

    When your contact is watching your desktop, you can also give him/her permission to control your mouse, like Remote Assistance. This is useful when whoever controls your computer knows exactly what resources are needed and where they are located; you just let him/her do the job:

     
    Your contacts can share their desktops with you as well, so you will be able to see what they are doing and you can request control of their PCs:


     
    My contact is using Windows 8.1 and he is running Lync 2013, so if your contacts use the Lync 2013 client, there are no problems if you want to use these features with them:

     

     

     

    If you want to make a call using the PBX service offered with Lync Server, or you just want to check the audio devices during the call, you click on the call button:


     
    If you want to change the text font and size, you click on the dots/ellipsis button for more options, similar to the Windows apps:


     

    If you have more than one participant on the call, you can check their names and their device permissions over the IM interface: 


     
    Last but not least, you can see your contacts' information cards just by putting the pointer over the avatar; when you click on the down arrow, you can expand the card and see more relevant information:
     
     

    What is the best IM & calls feature for you? Is your company ready for this kind of communication solution?

    About the author

    From Valledupar, Colombia, Alberto José Escalona Ustáriz is a systems engineer, writer, editor, and academic advisor. He writes the blog www.simplementetech.wordpress.com. Follow him @MVP_AlbertoE.

     



    This week, we are looking back at MVP Community Camp 2015 for a complete roundup after the third and final ComCamp Online, which recently concluded in Asia. ComCamp is one of the biggest online events in the region, and once again it became a trendsetter for community events.

    With highlights including more than 3,000 real-time attendees and knowledge-filled parallel sessions across multiple countries, ComCamp 2015 made its mark and set the team up for much bigger initiatives in the near future. ComCamp offered technology enthusiasts and professionals a great number of in-depth sessions on the latest Microsoft technologies, carried entirely by Microsoft Most Valuable Professionals.

    ComCamp was divided into three different segments to showcase the strength of the MVP community across the entire region. ComCamp 2015 saw 129 MVPs from several countries across Asia deliver talks on different Microsoft technologies such as Microsoft Azure, Office 365, SQL Server and many more.

    As many as 90 ComCamp sessions are now available through Microsoft Virtual Academy as key learning resources for developers and IT pros around the world. Many ComCamp 2015 sessions are also available via Channel 9, catering to a worldwide technology audience that includes new and emerging businesses as well as students.

    This was truly a proud moment for the entire MVP team: hosting such a great initiative fostered solidarity within the MVP community. Thanks to all the MVPs who participated in the ComCamp sessions and shared their deep product insights with a broader audience. We are looking forward to deeper partnership and support in the coming years.



    Editor’s note: The following post was written by Office 365 MVP Geoff Evelyn as part of our Technical Tuesday series.
    One of the most compelling challenges for Office 365 is centralised monitoring. Those working in the SharePoint arena know only too well the importance of monitoring and reporting across the platforms they manage, not simply from a technical standpoint but also as proof of service: the availability and evolution of those services. Proof, in that the team responsible for monitoring internal SharePoint resources and servers should have the same level of comfort when monitoring Office 365. Evolution, in that changes over time in the services provided need to be identifiable, so that there are no surprises for stakeholders (both technical and business). Particularly for organisations using centralised monitoring tools such as System Centre Operations Manager for their infrastructure, the issue is how to include monitoring of services that exist outside the on-premise infrastructure, so that alerting, notifications and messages from those external services can be centralised alongside on-premise environment monitoring.
    From the Office 365 angle, there is a wealth of monitoring available from the Office 365 Dashboard on each tenant (otherwise known as a subscription) in the Admin section of Office 365. This is a per-tenant centralised provision giving service overviews and health covering all Office 365 services (Exchange, Identity Service, Portal, Rights Management, SharePoint, Skype and Yammer). There is also the Service Health page, giving 30 days of service-level history concerning Office 365 features, including a page giving planned maintenance information.
    From a service monitoring perspective, whilst this of course presents great options, there can be challenges. If there are a number of Office 365 tenants to manage, things can become quite unwieldy, if not virtually impossible, when trying to identify service status from a single point. Further conundrums arise when faced with managing hybrid and disconnected services, such as on-premise SharePoint (including associated services) and separate Office 365 tenants for external clients and partners. The key difficulty, therefore, is the ability to monitor messages, alerts and notifications from a multitude of Office 365 tenants without having to hop between the monitoring pages in each.
    System Centre Operations Manager 2012 R2 is a great monitoring tool of choice when it comes to identifying service levels and performance for on-premise infrastructure. Crucially, it provides a unified management platform that also covers service provider and Azure environments; these simply need to be added using 'Management Packs'. Microsoft have released an Office 365 Management Pack which gives System Centre Operations Manager 2012 R2 the ability to connect to Office 365 tenants, and this is seen as a method by which the issues described above can be solved.
    This article will describe how to download the Office 365 Management Pack, import it, and then use the services and features in that Management Pack.
    1 A note about installing System Centre 2012
    The key prerequisite of the Office 365 Management Pack is System Centre Operations Manager 2012 R2. This article does not assume you have installed it, or even that you are knowledgeable about the product. For those not sure how to install System Centre Operations Manager 2012, it is a comprehensive product and, depending on how deeply you wish to go, quite an involved installation. There is a lot of information out there, and links to that information are given in the Summary section.
    I decided to go for a test installation to help me write this article, instead of using my full-blown System Centre Operations Manager. I deployed System Centre Operations Manager into a single-server VM. Information, specifics of the VM and installation guides on how to do that are also given in the Summary section.
    2 What is the Office 365 Management Pack?
    Basically, the pack allows you to use the administrative section of System Centre Operations Manager to add Office 365 subscriptions. This is an important step, since this is where you enter all the tenants that you need to monitor, and it is particularly useful for the dashboard. Once done, you can then discover the services and features available for each of the subscriptions. Those features include the ability to get notified when Office 365 incidents occur, and to visualise those alerts in the included dashboard.
    The management pack, once deployed, communicates with the relevant subscribed tenants to obtain information from Office 365 features, services and subscriptions. The screen-shot below shows the monitoring dashboard, which gives the sections of information available, such as service status, active and resolved incidents and the message centre.
     
     
    3 Installing the Management Pack
    The Office 365 Management Pack includes basic instructions for installation. I followed this guide to implement ten Office 365 subscriptions. Once the pack is installed into System Centre Operations Manager, you will need to configure System Centre Operations Manager so that it can communicate with and discover information from the Office 365 tenant.
    Essentially, what needs to be done is:
    1. Implement a special account in the Office 365 tenant which has administrative access to the tenant so that System Centre Operations Manager can subscribe to that tenant.
    2. Configure the tenant as a subscription
    3. Customise your System Centre Operations Manager Workspace so that you have a centralised dashboard
    3.1 Installing the Pack to Disc
    The following procedure describes how to install the management pack for Office 365.
    First, download the Management Pack. The link is located below.
    1: Run the MSI: 
     
     
    2: Choose the default directory. 

     
    3: Confirm the installation of the MSI:


    Note the installation directory which is where the pack will be installed to. You will need to import the pack once this procedure is completed.
    3.2 Importing the Pack into System Centre
    Once the pack is installed to disc, you will need to open System Centre Operations Manager and go into the Administration section to import the management pack.
    1: Access System Centre Operations Manager Administration, then access the Management Packs section.
    Once in the management packs section, you will need to Import the Management Pack for Office365. The following screen-shot shows the Management Pack selection area, and the Import Management Packs section. 

      
    The Import Management Packs section is where you will select the downloaded Office 365 Management Pack. When done, the next dialog displayed is the Import Management Packs dialog. Since we downloaded the management pack, we simply take it from disk. 
     
     
    You will see an Online Catalog Connection dialog (shown below) asking if you would like to search the online catalog for dependencies. This is not required for the Office 365 management pack, so you can select NO from that dialog.

     
    Locate the downloaded Management Pack, and you will be presented with information concerning its version, release date, status, and license terms. The Office 365 Management Pack only shows the initial version of 7.0.5115.0 (shown below).

     
    At this stage, we are ready to carry out the installation of the Office 365 Management Pack. Select Install to proceed with the installation. Once done, you will notice the Management Pack listed in the Administration pane.
    3.3 Adding Office 365 Subscriptions
    The key benefit of using Office 365 in System Centre Operations Manager 2012 is the ability to monitor multiple subscriptions (tenants) from one dashboard. The next step in configuring the management pack is to add all the subscriptions that need to be monitored. This is done by accessing the Office 365 Subscriptions page, which is located in the Administration pane.

     
The Office 365 subscription panel is displayed when you select the Office 365 option from the Administration pane. Here you can add, edit or delete subscriptions. Each subscription requires an Office 365 account, created specifically in the relevant tenant, which is then used to connect System Center Operations Manager to that tenant. To create it, access the Office 365 tenant admin page, go into the Active Users page, add a new user without assigning any licenses, and assign that user admin rights. Note the username and the password.
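If you would rather script the account creation, here is a minimal sketch using the MSOnline (Azure AD) PowerShell module. The user principal name and display name are illustrative, and you should apply your own password and least-privilege policies.

```powershell
# Requires the MSOnline module; sign in with an existing Office 365 admin account
Connect-MsolService

# Create the monitoring account without assigning any licenses (names are illustrative)
$upn = "scom-monitor@contoso.onmicrosoft.com"
New-MsolUser -UserPrincipalName $upn `
             -DisplayName "SCOM Office 365 Monitoring" `
             -Password (Read-Host "Password for $upn") `
             -ForceChangePassword $false

# Grant the account admin rights so Operations Manager can read service health
Add-MsolRoleMember -RoleName "Company Administrator" -RoleMemberEmailAddress $upn
```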
Then select the Office 365 option from the Administration pane in System Center Operations Manager to display the subscription panel. Click Add subscription and enter the name of the tenant, plus the user name and password created above. The below screen-shot shows an example:

     
Select the NEXT button once you have entered the subscription details for the Office 365 tenant that you wish to monitor. Doing this will display the option to select a Management Server Resource Pool. Resource pools are pretty cool: they ensure the continuity of monitoring by providing multiple management servers that can take on monitoring workflows if one of the management servers becomes unavailable. You can create resource pools for specific purposes; for example, you might create a resource pool of management servers located in the same geographic area to provide network device monitoring. When System Center Operations Manager is installed, the default pool is the 'All Management Servers Resource Pool'.
For the purpose of adding a subscription you could simply choose the All Management Servers Resource Pool. If you are interested in utilising resource pools in the future or wish to find out more, check out the link given in the Summary section. The below screen-shot shows the selection of the All Management Servers Resource Pool as part of adding a new subscription.

    Once you have selected the resource pool, select the Add Subscription button. This will then add the subscription into the Subscriptions panel. Repeat this process to add any other relevant Office 365 tenants.
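Before (or instead of) accepting the default All Management Servers Resource Pool, you can inspect the available pools, or create a dedicated one, from the Operations Manager Shell. This is a minimal sketch assuming the OperationsManager module; the pool name and server filter are illustrative.

```powershell
# List the resource pools available in the management group
Get-SCOMResourcePool | Select-Object DisplayName

# Optionally create a dedicated pool for Office 365 monitoring (names are illustrative)
$members = Get-SCOMManagementServer | Where-Object { $_.Name -like "SCOM*" }
New-SCOMResourcePool -DisplayName "Office 365 Monitoring Pool" -Member $members
```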
     
    4 Setting Out your Workspace
Now on to setting up the various dashboards, alerts and notifications available in the Office 365 Management Pack. First, we need to select the areas we want to monitor and add them to the Workspace. Before doing that, we need to review the contents of the management pack.
To do this, access the Monitoring section of System Center Operations Manager, then expand the Office 365 section in the Monitoring pane.

     
    The Office 365 section shows two areas of monitoring available to you.
· Active Alerts shows information about each incident that has occurred across all the subscriptions added to the System Center Operations Manager Office 365 subscriptions list. An active alert is generated for each Active or Resolved incident and includes the resolution state, the age of the alert and the priority assigned. Each alert can be expanded to show more information, including the full description of the incident and, for example, options to change who the alert is assigned to.
· Office 365 Monitoring Dashboard (a screen-shot is shown here) shows information for all the subscriptions, with a status indicator against each Office 365 subscription. There are five segments (otherwise known as widgets) in the dashboard:
1. Subscription Health. The health (icon - red/yellow/green) and name of the subscription. Each subscription can be selected; when one is selected, System Center Operations Manager drills down and displays, in the other segments, the status, incidents and messages for that subscription.
2. Service Status. Shows the names of the services available in the subscription and, for each service, the number of active incidents.
    3. Active Incidents. Shows the Severity (status icon - red/yellow/green), the source (subscription), the kind of incident, description, status, affected services and the ID of the incident.
    4. Resolved Incidents. Shows the same information as the Active Incidents.
    5. Message Centre. One of my favourite sections, of course :) - this shows information concerning general updates carried out in Office 365, in terms of items like What's New, Updated Features, What's around the corner and more.
     
The point of these two areas in the Monitoring section is that you can take them into your workspace and customise them to suit your way of working. As mentioned earlier, the My Workspace section allows you to bring Office 365 monitoring together with any other infrastructure monitoring in one easy-to-locate place.
    To do this, you will need to right-click the area in the monitoring pane to display the context menu (see the below screen-shot). From that menu select 'Add to My Workspace'. Once done, you will need to enter a title for the area, and a group that it will appear in.

     

    I went and did this for the Active Alerts and the Office 365 Monitoring Dashboard, and then I visited the My Workspace section (screen-shot below).


The My Workspace section shows the areas you have decided to monitor. On mine, I monitor SharePoint 2013 on-premises, Windows Computers, Service State and Office 365. Let's take a look at the two areas in question: Active Alerts and the Monitoring Dashboard.
     
    4.1 Active Alerts
Office 365 Active Alerts gives you a central view of all messages concerning incidents across all subscriptions in one window. You can personalise the view and select the columns you wish to display, set the sorting and grouping, and rename the title of the view so that you can recognise it easily among the other areas listed in the Workspace. For example, the SharePoint 2013 Management Pack also has an Active Alerts section, so it is worth renaming the views so it is clear which is which when they are displayed together. The below screen-shot shows the Workspace section with the areas I want to monitor. The Active Alerts view from the Office 365 Management Pack has been added and customised: the title has been renamed, I have added the description field to the list, and I have sorted by Source then Severity. That means I can see messages grouped by subscription (the Office 365 tenant) and severity (information, critical, etc.).

     
The following screen-shot shows two sources (two Office 365 tenants); it is clear that authorization to the second has failed (most likely an incorrectly entered password). The key point is that you can see the critical messages against each source.
     

Each alert displayed can be drilled into for further information. For example, on the first item in the above screen-shot (Resolved Incident MO25200) I can bring up the Alert Context to understand more about the incident and what was done to resolve the issue. Of course, this information could be obtained on the portal, but doing it centrally in Operations Manager is definitely more productive!
     

Additionally, each alert can give me more information about how to solve the issue. For example, one of my tenants was not able to authorize against System Center. By bringing up the Product Knowledge on the alert I was able to see ways in which I could solve the issue.

Each alert can also be managed. You can assign individuals and ticket IDs to the alert and, once the alert issue has been resolved, mark it as Closed. This removes the alert so that you can concentrate on the unresolved incidents.
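The same housekeeping can be scripted. The sketch below, assuming the OperationsManager module, finds new Office 365 alerts, assigns an owner and a ticket ID, and closes an alert once it has been dealt with; the filter text, owner and ticket number are all illustrative.

```powershell
# New (resolution state 0) alerts raised by the Office 365 management pack - filter is illustrative
$o365Alerts = Get-SCOMAlert -ResolutionState 0 |
    Where-Object { $_.MonitoringObjectFullName -like "*Office365*" }

# Assign an owner and a ticket reference to each alert
$o365Alerts | Set-SCOMAlert -Owner "CONTOSO\geoff" -TicketId "INC0012345"

# Close an alert once the incident has been resolved (255 = Closed)
$o365Alerts | Where-Object { $_.Name -like "*Resolved Incident*" } |
    Set-SCOMAlert -ResolutionState 255 -Comment "Incident resolved in the service"
```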

     
    4.2 The Monitoring Dashboard
The Monitoring Dashboard centralises active incidents, resolved incidents, service status and all messages into one dashboard. The great thing about the dashboard is the ability to filter messages.
     


    5 Summary
System Center 2012 Operations Manager R2 is designed to help configure and manage applications, services, computers, and VMs in mid-sized to large enterprises. The Office 365 Management Pack allows those using this tool to extend that monitoring to one or more Office 365 tenants under their control. This article shows just a fraction of what is possible with the Office 365 Management Pack. Whilst on my voyage of discovery I noted other very useful items which, in the context of service delivery, are vitally important:
    · Reporting - the ability to view over time aspects such as availability, uptime, configuration changes.
    · Custom field manipulation to store more information concerning Alerts (help the monitoring system become more of a helpdesk)
    · Override some of the ways the alert surfaces, such as priority, severity, etc.
     
I will be writing more about these features and options in further articles, explaining how System Center 2012 R2 Operations Manager can be tailored and linked, and particularly where in the land of support they can be extremely useful.
     
    5.1 More Information
There are several places you should visit to get more information concerning System Center 2012 R2 and the Office 365 Management Pack.
System Center Operations Manager 2012 R2.
Gives overviews, features, purchasing, resources and the ability to evaluate System Center Operations Manager 2012 R2.
    http://www.microsoft.com/en-us/server-cloud/products/system-center-2012-r2/Overview.aspx
System Center Operations Manager Management Pack for Office 365
    Gives details, basic installation instructions, system requirements.
    http://www.microsoft.com/en-gb/download/details.aspx?id=43708
How to work with Resource Pools in System Center Operations Manager
A resource pool allows you to define collections of management servers that distribute work amongst themselves, providing continuity of monitoring.
    https://technet.microsoft.com/en-gb/library/hh230706.aspx

    About the author

Geoff has more than 30 years' experience in information technology and more than 10 years' experience in SharePoint, covering development, design and implementation. He works in SharePoint information systems design, architecture, implementation, planning, governance and automation, all in the land of SharePoint service delivery. Geoff is a multiple author, a Fellow of the Institute of Analysts and Programmers, a Fellow of the Institute of Computer Technology, a Member of the Chartered Institute for IT and Engineering Technology, a PRINCE2 Practitioner, and an MVP Mentor.



Organized by Microsoft Italy Audience Marketing Manager Sara Campagna and her team, the first Italian MVP and Community Lead Summit took place on July 8 and 9 in Desenzano sul Garda, a picturesque venue with a lake view. Aimed at kicking off FY16, the event targeted two audiences and was divided into two days, with plenty of sessions for each: MVPs on the first day and adjacent communities on the second.

The first day saw the participation of 39 MVPs with different areas of expertise from various parts of Italy. The opening keynote by Roberto Andreoli, the interesting and intensive Q&A session with Fabio Santini, and the presentation of the technical evangelism team (Paola Presutto, Erica Barone, Jessica Tibaldi and Lorenzo Barbieri) played a great role in reaching a broad audience. It was also a great opportunity for Community Program Manager Cristina Gonzalez Herrero and Program Coordinator Marjorie Di Clemente to speak about the program and the new opportunities born from the integration into DX. Cristina's presentation gave an insightful update about the new synergies that the MVPs will be able to create with the DX team. The day finished with a nice dinner in the "Invernal Garden", with speeches from Fabio Santini and from Carlo Mauceli, the CTO of Microsoft Italy. The evening ended late at night, offering the participants a great networking opportunity.

The second day's audience consisted of 19 community leads, 15 of whom are also MVPs. The day focused on community building and expansion: each community had the opportunity to present itself to the others and to the DX team, describing its activities and how both parties could collaborate.

The initial comments and positive reactions published on social media were very appreciative, with attendees stating "Great event, great place", "I'm excited that the MVP program is now part of DX", and "I'm looking forward to taking part in this year's DX initiatives." We can say that this first event of a series of upcoming ones was a great success, and everybody expressed the hope that it will be repeated next year!

     



    Editor’s note: The following post was written by Windows IT Pro MVP Terry Lau as part of our Technical Tuesday series.

    I'm going to talk about wipe options and un-enrollment scenarios of Mobile Device Management (MDM) on Office 365. Many Mobile Device Management platforms support "Full wipe" and "Selective wipe" for administration.
     
    Wipe options on Mobile Device Management on Office 365

    What are "Full wipe" and "Selective wipe" on Mobile Device Management?

    "Full wipe" - Factory reset of the managed devices and the device record will be removed on the Mobile Device Management console (it depends on which product). All data, including personal data, is deleted.
     
    "Selective wipe" - Remove data, applications and management profile which are published by your organization. User’s personal data isn't affected.
     
To wipe a device, whether Full or Selective, make sure the device is connected to a data network. If it isn't, the wipe command cannot be applied to the device.
     
Remark: When administrators press "Full wipe" or "Selective wipe," the device record is deleted automatically in Mobile Device Management of Office 365, whether the device has actually been wiped or not.
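As a complement to the console, device records can also be inspected and wiped from Exchange Online PowerShell. This is a hedged sketch: the mailbox, device name and addresses are illustrative, and it works at the Exchange ActiveSync layer rather than through the MDM for Office 365 console itself.

```powershell
# Connect to Exchange Online remote PowerShell (the method current in 2015)
$session = New-PSSession -ConfigurationName Microsoft.Exchange `
    -ConnectionUri "https://outlook.office365.com/powershell-liveid/" `
    -Credential (Get-Credential) -Authentication Basic -AllowRedirection
Import-PSSession $session

# List the device partnerships and their last successful sync for a user
Get-MobileDeviceStatistics -Mailbox "user@contoso.com" |
    Select-Object DeviceFriendlyName, DeviceModel, LastSuccessSync

# Issue a remote wipe for one device and email a confirmation
$device = Get-MobileDevice -Mailbox "user@contoso.com" |
    Where-Object { $_.FriendlyName -eq "Old Windows Phone" }
Clear-MobileDevice -Identity $device.Identity -NotificationEmailAddresses "admin@contoso.com"
```

The LastSuccessSync value is also a handy way to spot stale device records.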

In addition to administrators performing a "Full wipe" or "Selective wipe" in Mobile Device Management of Office 365, users can still perform a "Full wipe" themselves from the "Mobile devices" option in Outlook.
     
     
     
    However, the device record still remains on Mobile Device Management of Office 365. 
     
     
     
If the user re-enrolls the wiped device in MDM again, two device records for the user are left in Mobile Device Management of Office 365 and in "Comp Portal" (the Company Portal app).
     
It's difficult for administrators to identify which record represents the currently managed device. On the user side, the Comp Portal task bar can be used to work out which is the old device record.
     
     
     
    There are only three buttons on the old device record. Press "Remove" to delete the device record.
     
    Microsoft should update the portal to display the last check-in time of the device records on Mobile Device Management of Office 365.
     
Remark: Based on my testing, users need to open "Comp Portal" for the "Selective wipe" command to be applied to a Windows Phone 8.1 device. According to the Wipe a mobile device in Office 365 article, the command should be applied to the device immediately. Furthermore, after I pressed "Selective wipe" for my Windows Phone 8.1, the device could still send and receive email for the Office 365 user account over ActiveSync.
     
    Un-enrollment on Mobile Device Management of Office 365
There are two methods to un-enroll MDM on user devices. The first one is to click "Remove" on the device in "Comp Portal."
     
     
     
However, the device must be connected to the Internet.
     
     
     
Another method is to un-enroll on the device itself by going to "Settings > General > Device Management > Management Profile > Remove Management."
     
     
     
After the management profile is removed, the device synchronizes the un-enrollment information to Office 365 if a data network is available. Then the device record is deleted in Mobile Device Management of Office 365.

    About the author

Terry Lau is an operations consultant at a system integrator. He's responsible for providing design, implementation, configuration and support for products related to Microsoft solutions such as virtualization and Enterprise Mobility Management.

Terry is also a blogger who likes to share IT knowledge; his blog focuses on Microsoft technologies such as Hyper-V, Enterprise Mobility Management and Microsoft Azure.


MVPs around the world are coming together today for the official launch of Visual Studio 2015, a product to which many of them contributed their ideas and feedback. You can join them now by going to the Visual Studio 2015 Final Release Event! The action starts at 8:30am PDT.


    Many MVPs have closely watched the progress of Visual Studio 2015 for some time. Last year, the MVP community began providing their expert perspectives to Microsoft product teams. And at last year’s MVP Global Summit in November, it was an important topic covered in private sessions with the community.


    At the Summit, S. Somasegar, corporate vice president of Microsoft’s Developer Division, noted the value MVPs provide to the development and adoption of Microsoft technologies, explaining, “I see immense value in our MVP community. These are our best and most passionate customers, those who take it upon themselves to learn about everything we’re doing, to provide invaluable feedback, and to then help the rest of the world discover and make the most of our technologies. I view our MVPs as ambassadors to the technologies and work that we do at Microsoft. MVPs end up being a trusted connection for the engineering teams, helping both to guide the direction of our products and to help other customers successfully utilize Microsoft platforms and tools. I personally take any opportunity I get to spend time with MVPs around the world and to hear what they have to say: I find their perceptions of what we’re doing, how we’re doing it, and how we stack against the competition to be quite illuminating and insightful. In my experience, MVPs are both vocal and honest in their feedback to us – both where we are doing well, and more importantly where we should be doing better. That is super valuable.”


    You can learn about the new features of Visual Studio 2015 at Connect(“on-demand”) on Microsoft’s Channel 9.


    And you can download Visual Studio 2015 here.



    Editor’s note: The following post was written by Office 365 MVP Cindy Meister as part of our Technical Tuesday series.

    Those of us who’ve been developing with the Office applications for Windows since they were introduced in the early 1990’s are no strangers to programming language changes. The transition in version 97 from the UI-oriented WordBasic / Excel XLM / AccessBasic languages to the more object-oriented VBA meant not only learning a new language, but making a significant paradigm shift: Instead of thinking about how the user interface behaves the developer needed to learn to think in terms of objects, with their properties and methods, as well as events. On the whole, I believe most of us ended up embracing the new technology (and certainly the IDE!), although not everyone made the transition at the same speed or with the same amount of enthusiasm!


    Then VSTO was introduced with Office 2003 and Office developers were confronted with the question whether to familiarize themselves with the .NET Framework and VB.NET and/or C#. (In hindsight, it turns out this wasn’t the real future of Office development. Cause for those who didn’t make the effort to feel smug and those who did to feel somewhat betrayed.)


    A few years later, the Office Open XML file format was introduced in version 2007, providing the possibility of creating and editing Office documents directly, independent of the host Office application. This is especially interesting for those who need to process documents server-side, want to increase processing speed or avoid conflicts with the user working in the application interface. In order to take advantage of the new file format, the developer requires knowledge of XML and ZIP packaging in his programming language of choice or, in order to leverage the Open XML SDK, knowledge of VB.NET or C#. (So maybe the effort to learn these was worth it, after all?)


With Office 2013 another major paradigm shift took place with the introduction of “Apps for Office” (recently renamed “Web Add-ins” – see more). At that moment, Office expanded from a desktop suite to the Internet and mobile environments, at the same time adopting JavaScript as the programming language – the language of the Internet. And the programming language is not the only change: a new set of APIs has been, and continues to be, developed.


    The big difference compared to the change for Office 97 is that the old language – VBA – is not being replaced. It’s still there, with all the functionality, but as before it runs only in the desktop environment. VBA (as well as COM Add-ins and VSTO) is also still in the Office Development Roadmap, so legacy applications will continue to work – investments won’t be lost.


    The biggest drawback of these original “Web Add-in” APIs is that they lack most of the capabilities required to work effectively with Office documents and their applications. They are, however, being expanded for Office 2016 and should offer many more possibilities.
    With the advent of JavaScript, developers are no longer locked to Microsoft IDEs and will have their choice of developer tools. Distribution is not as simple as handing over an Office document containing macro code, as can be done with VBA. But on the other hand it is more straightforward than for a VSTO solution. Updating the code across multiple users is certainly simpler.


    This change is perhaps more compelling than the introduction of VSTO, which due to deployment complexities never really caught on outside of corporations. While VBA is still not endangered, due to the huge amount of legacy code and investment in legacy applications, it is limited to the desktop environment. But how easy is the transition from VBA to JavaScript?


    There are two factors that need to be considered when comparing programming languages: the language syntax and the language “philosophy” – the underlying concepts. Syntactically, JavaScript is closer to C# than the VB languages, which raises the barrier for VBA coders. Conceptually, however, the VBA developer will generally feel more comfortable with how JavaScript works than C# developers.


    My series of blog articles will attempt to provide an introduction to JavaScript for the VBA developer, in the context of the Web Add-in APIs. I’m no JavaScript expert and have no intention of trying to explain JavaScript in great depth – there are many books and on-line articles available for those who want to really get into it. My sole purpose is to provide a resource for VBA developers who are interested in “testing the waters” and find available resources hazy, dense, or downright confusing! (Yes, I identify with your pain – BTDT.)


    Between syntax and concepts it seems to me easier to begin with syntax. Discussion of code examples won’t make a lot of sense if you can’t understand what you’re looking at! So the first follow-up articles to this post will deal mainly with JavaScript syntax with some conceptual ideas thrown in where appropriate.

    About the author


    Cindy specializes in automation of Microsoft Office applications and, most recently, in the Office Open XML file formats and Open XML SDK. She has worked with all releases of Microsoft Office applications since the early 1990's and has been an MVP since 1996. You can find her on the MSDN forums for VSTO, Word, Open XML SDK and General Office Development where she's a moderator.


  • 07/22/15--13:36: Windows Store Wednesday!
  • Featured apps available in the Windows Store from Microsoft MVPs!

     

    Earthquake Watch: Get information on the latest earthquakes from the US, Chilean and Mexican seismologic national services.

    Frases Expresivas (Expressive Sentences): Let the world know what mood you are in, in Spanish.

Diccionar RAE: Offers definitions from these dictionaries: 1) the Spanish dictionary of the Royal Spanish Academy (RAE), 2) the Pan-Hispanic Dictionary of Doubts, and 3) the Essential Dictionary.

     



    PowerShell, to DevOps and Beyond

The PowerShell community is casting off for a conference on the water. PowerShell MVP Doug Finke is organizing a seven-day cruise to the Bahamas from New York City, scheduled for June 18-25, 2016. Posh Cruise will deliver presentations over the course of three days at sea, and the Call for Speakers is now open! Submissions are floating in, and speakers and attendees are booking cabins. Twelve-time PowerShell MVP Don Jones will be hosting a live webinar with the organizers and the travel agency to answer questions; anyone can listen in or check out the recording. Find the latest here:

    Twitter: https://twitter.com/PoshCruise

    Facebook: https://www.facebook.com/poshcruise

    PowerShell Community Celebrates Milestones!

PowerShell MVP Boe Prox celebrated passing the one million views mark last month on his Learn-PowerShell.net blog! On the ever popular PowerShell.org, Don Jones tells us how the 2015 Scripting Games have been re-imagined as a monthly puzzle, with a July puzzle happening right now. Stay tuned for a celebrity post by Boe coming out at the end of the month where he will share his take on the puzzle.

PowerScripting Podcast just surpassed the one million mark for total downloads this week. PowerShell MVPs Jon Walz and Hal Rottenberg launched their first episode back in March 2007, shortly after PowerShell's introduction to the world. Their weekly podcast is recorded before a live chatroom audience and delivers a fun and engaging way to have conversations with expert guest speakers and members of the community, all helping each other share and learn about new techniques, products, features and more.

    More PowerShell News, Events and Resources 

There are also many more MVPs doing amazing things in the PowerShell communities! We invite you to visit the MVP Award site and learn more about what our PowerShell MVPs from across the world are doing in the technical communities!



    Editor’s note: The following post was written by SQL Server MVP Warner Chaves as part of our Technical Tuesday series.

The cloud is ready for enterprise-grade SQL Server workloads. This is not my opinion but a simple fact that comes from helping multiple clients in different industries move very intensive SQL Servers to Azure. A couple of years ago the guidance was to focus on moving your Development or Test servers, but nowadays, with proper planning and configuration, you can deploy a production SQL Server confidently and easily.

    In this article I’m going to focus on the two pillars of a SQL Server deployment: performance and availability. Of course we want our SQL Servers to be fast but we also want them to be highly available and ready to serve applications any time of the day, any day of the year. And we want to do these things in a cost efficient way. I will discuss the different options Azure offers to achieve these goals.

    Building for Performance
    The exact virtual machine size that you’ll need depends on your SQL Server size and load, however there are some best practices and recommendations that apply to any VM that you want to optimize for performance.

First, for high performance workloads you want to be looking at the VM sizes for Microsoft Azure Virtual Machines. In my opinion, for these workloads the options are either a DS machine or a G series machine. The choice between the two right now comes down to differences in the CPU models, the amount of RAM per core and the type of storage allowed, as we'll see below.

    Compute and Memory
On the compute front, the DS machines come with Intel Xeon E5-2660 2.2 GHz processors that are 60% faster than the previous CPU models used on the A tier. The G series, however, comes equipped with more powerful Intel Xeon E5-2698 2.3 GHz processors for more compute-intensive workloads. Most SQL Server workloads are more IO bound than CPU bound, but if yours doesn't follow this rule and is heavier on CPU, then a G series model could be better.

    The amount of RAM per core also changes between the DS and G series with the G series coming equipped with more RAM per core. For example, the following 3 machines all come with 4 cores but differ based on the RAM provided:
    • DS3: 14GB of RAM.
    • DS12: 28GB of RAM.
    • G2: 56GB of RAM.

The more RAM, the more expensive the machine is, so you need to make a choice that makes sense for the size of your databases and your workload. The pricing details for the VM sizes can be found on the pricing page, here.

    Storage
Both the DS and G series come with a temporary SSD drive attached to the VM, with a size dependent on the specific VM model you choose. The G series comes with larger temporary SSD drives at a higher cost. Since this drive is temporary, it should only be used for storing a database like tempdb on SQL Server, or, if you are using SQL Server 2014 or later, a Buffer Pool Extension file.
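As an illustration of that last point, here is a hedged sketch that enables Buffer Pool Extension on the temporary drive. It assumes the SQL Server PowerShell module is installed and that D: is the VM's temporary SSD; the path and size are illustrative.

```powershell
# Assumes D: is the temporary SSD of a DS/G series VM - path and size are illustrative
Import-Module SQLPS -DisableNameChecking

# Create a folder for the Buffer Pool Extension file on the temporary drive
New-Item -Path "D:\BPE" -ItemType Directory -Force | Out-Null

# Enable Buffer Pool Extension (SQL Server 2014 and later only)
Invoke-Sqlcmd -ServerInstance "localhost" -Query @"
ALTER SERVER CONFIGURATION
SET BUFFER POOL EXTENSION ON
    (FILENAME = N'D:\BPE\SqlBpe.bpe', SIZE = 48 GB);
"@
```

Keep in mind the temporary drive can be reinitialised when the VM is redeployed, so the folder (and tempdb's location, if you move it there) may need to be recreated by a startup task.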

For permanent storage there are two options: Page Blob storage and the recently introduced SSD Premium Storage. Page Blob storage volumes provide approximately 500 IOPS, up to 60 MB/sec and variable latency. Depending on your VM model you'll be able to attach a variable number of volumes as well; for example, a 4-core D3 machine allows attaching up to 8 of these volumes, whereas a 32-core G5 allows up to 64. Using Windows Server Storage Spaces you can also stripe these volumes to provide higher throughput to a single Windows disk. Page Blob storage is paid for by the amount used and the number of IO transactions.
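To make the striping point concrete, here is a minimal Storage Spaces sketch run inside the VM, assuming the data disks have already been attached and are empty; the pool name, interleave and allocation unit size are illustrative choices rather than firm recommendations.

```powershell
# Pool every attached data disk that is eligible (excludes the OS and temporary disks)
$disks = Get-PhysicalDisk -CanPool $true

New-StoragePool -FriendlyName "SQLPool" `
    -StorageSubSystemFriendlyName (Get-StorageSubSystem | Select-Object -First 1).FriendlyName `
    -PhysicalDisks $disks

# A simple (striped) space across all disks for throughput; no resiliency, Azure keeps the copies
New-VirtualDisk -StoragePoolFriendlyName "SQLPool" -FriendlyName "SQLData" `
    -ResiliencySettingName Simple -UseMaximumSize `
    -NumberOfColumns $disks.Count -Interleave 65536

# Initialise, partition and format the new disk with a 64 KB allocation unit
Get-VirtualDisk -FriendlyName "SQLData" | Get-Disk |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -UseMaximumSize -AssignDriveLetter |
    Format-Volume -FileSystem NTFS -AllocationUnitSize 65536 `
        -NewFileSystemLabel "SQLData" -Confirm:$false
```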

Premium Storage is more recent and right now only available on the DS series VMs. This type of storage is SSD-based and can sustain higher IOPS and throughput with lower latency than classic Page Blob storage. Premium volumes come in three performance levels (P10, P20 and P30 as of July 2015), which differ in size, IOPS and throughput per volume.

The number of these volumes that you can attach to a single VM goes from 2 on a DS1 all the way to 32 on a DS14. You can stripe these as well to present them as one disk, though keep in mind that each VM size has an overall limit on the IOPS and MB/sec it can reach. You can see those limits here: https://azure.microsoft.com/en-us/documentation/articles/storage-premium-storage-preview-portal/.
    Premium storage unfortunately is not yet available for all regions so this could be a big factor in your decision. At the time I’m writing this (July 2015) these are the regions where you can create a DS series machine: West US, East US 2, West Europe, South East Asia, Japan West. Microsoft is constantly adding more capabilities to each region so make sure to check the Azure portal for the latest information.
    Also note that the published Best Practices from Microsoft for data disks is to have no caching for Page Blob disks and Read-Only caching for Premium Storage disks. Refer to this article for the full details: https://msdn.microsoft.com/en-us/library/azure/dn133149.aspx.
Putting all the information together, an example high-performance configuration combines a DS or G series size, tempdb (and optionally a Buffer Pool Extension file) on the local SSD, and striped Premium Storage or Page Blob data disks sized to your IOPS and throughput requirements.
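As one concrete, purely illustrative example, here is a hedged sketch of provisioning such a VM with the classic (Service Management) Azure PowerShell module that was current in 2015. Every name, size, password and disk count below is an assumption for the sketch, not a recommendation from this article.

```powershell
# Classic (Service Management) Azure PowerShell - all names, sizes and counts are illustrative
Add-AzureAccount
Set-AzureSubscription -SubscriptionName "MySubscription" `
    -CurrentStorageAccountName "mypremiumstorage"   # a Premium_LRS account in a DS-capable region

$imageName = "<SQL Server 2014 Enterprise gallery image>"   # pick one from Get-AzureVMImage

$vm = New-AzureVMConfig -Name "SQL1" -InstanceSize "Standard_DS13" -ImageName $imageName |
      Add-AzureProvisioningConfig -Windows -AdminUsername "sqladmin" -Password "P@ssw0rd!123"

# Attach four Premium Storage data disks, to be striped inside the VM with Storage Spaces
0..3 | ForEach-Object {
    $vm = $vm | Add-AzureDataDisk -CreateNew -DiskSizeInGB 512 `
                    -DiskLabel "data$_" -LUN $_ -HostCaching ReadOnly
}

New-AzureVM -ServiceName "sqlprodsvc" -Location "West US" -VMs $vm
```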

    Configuring for Availability
    SQL Server includes several High Availability and Disaster Recovery solutions right out-of-the-box that work well within Azure and provide different levels of resilience to suit different Recovery Time Objective (RTO) and Recovery Point Objective (RPO) requirements. These solutions are log shipping, database mirroring and AlwaysOn availability groups.

    Regarding storage redundancy, locally redundant storage should be used so Azure will keep 3 copies of your Virtual Hard Disks. Geo-redundant storage should not be used for SQL Server because write-ordering is not guaranteed with this option. For geographic redundancy it’s recommended to use a SQL Server technology like the ones mentioned.

For new enterprise-grade deployments the best solution is to go with SQL Server 2014 Enterprise and AlwaysOn Availability Groups. For example, support for multi-subnet clusters in Windows Server 2012 and above means we can deploy two nodes to provide high availability in one Azure region, and then a third node for disaster recovery in a second region.

One concept that is critical to understand in Azure is Availability Sets. An Availability Set is a logical grouping of virtual machines that maximizes availability in the event of planned or unplanned downtime of the Azure physical host. Virtual machines inside an Availability Set are assigned an Update Domain and a Fault Domain, and these govern where the virtual machine is located in case of planned or unplanned maintenance. For example, if we have two SQL Servers in a Windows cluster, we can have them in different Update and Fault Domains so that if planned or unplanned maintenance affects one machine, the other one will not be affected and will be able to take over.
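To illustrate, here is a brief hedged sketch, again using the classic Azure PowerShell module and extending the earlier illustrative example, that places the two primary-site SQL nodes in the same cloud service and Availability Set so Azure spreads them across update and fault domains. All names and credentials are assumptions.

```powershell
# Names, sizes and credentials are illustrative
$imageName = "<SQL Server 2014 Enterprise gallery image>"

$sql1 = New-AzureVMConfig -Name "SQL1" -InstanceSize "Standard_DS13" `
            -ImageName $imageName -AvailabilitySetName "SQL-HA-AS" |
        Add-AzureProvisioningConfig -Windows -AdminUsername "sqladmin" -Password "P@ssw0rd!123"

$sql2 = New-AzureVMConfig -Name "SQL2" -InstanceSize "Standard_DS13" `
            -ImageName $imageName -AvailabilitySetName "SQL-HA-AS" |
        Add-AzureProvisioningConfig -Windows -AdminUsername "sqladmin" -Password "P@ssw0rd!123"

# Deploying both into the same cloud service keeps them together in the Availability Set
New-AzureVM -ServiceName "sqlprodsvc" -Location "West US" -VMs $sql1, $sql2
```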

    This is what the solution would look like: 


     
This is a 3-node Windows Server 2012 Failover Cluster called SQLCLUSTER running SQL Server 2014 Enterprise and using AlwaysOn Availability Groups to provide the redundancy and data synchronization capabilities. The primary site has two nodes and a file-share witness that are part of the same Availability Set for fast local failover. There is also a third node in a second Azure region that serves as the disaster recovery location. SQL1 replicates synchronously to SQL2 and provides automatic failover, while SQL3 is replicated to asynchronously and can take over with a manual failover if a disaster strikes.

    Final thoughts
    Azure has now matured to the point where critical SQL Server workloads can be designed and implemented on the platform with ease. Both performance and availability requirements can be met with the latest offerings like Premium Storage and SQL Server technologies like AlwaysOn Availability Groups.

The key to a successful deployment is documenting the performance and availability requirements clearly and then comparing them against the different virtual machine and configuration options mentioned in this article. If your organization is thinking of leveraging the cloud for efficiency and velocity, SQL Server can definitely go there, and you can make sure that it does so without compromising performance or availability.

    About the author

Warner is a SQL Server MCM and SQL Server Principal Consultant at Pythian, a global Canada-based company specializing in data and infrastructure services. A brief stint in .NET programming led to his early DBA formation working for enterprise customers in the Hewlett-Packard ITO organization. From there he transitioned to his current position at Pythian, managing multiple customers and instances across many versions and industries while leading a highly talented team of SQL Server DBAs.



    Windows Platform Development MVP Michael Crump @mbcrump: A Guided Tour of Windows 10 IoT Core


    Directory Services MVP Zubair Alexander @ZubairAlexander: Windows 10 Upgrade is Now Available – Learn How to Get Your Free Copy


Enterprise Client Management MVP Colin Smith: Ten for Ten


    Windows Experience MVP Mike Hall: Windows 10 at Home


    Windows Platform Development MVP Mark Schramm @markbschramm: Developers Start Your Engines

