
Independent Experts. Real World Answers.



    Windows 10 was only recently released, and there are already millions of users who have onboarded it and upgraded their world—backed by the contributions, enthusiasm and support of the global MVP community.

    In Asia Pacific, Greater China and Japan, MVPs wholeheartedly participated in various launch events and parties hosted across the region, including in Beijing, New Delhi, Singapore, Sydney and Tokyo, engaging with hundreds of Windows Insiders and fans.

    Many of these events showcased and demonstrated a range of features—from Xbox to first-party and partner devices, as well as the Internet of Things.



    MVPs around the region also contributed immensely through their personal blogs on Windows 10, covering upgrade options, feature overviews, tips and tricks, development, troubleshooting and more.

    Around 10 MVPs from the Indian subcontinent contributed to more than 350 blog posts, tallying more than 1.5 million impressions. Many of those blog posts are regularly shared through the official MVP Award India MVP Twitter handle @IndiaMVP, where followers keep up with the MVP India community.

    In Japan, more than 400 blog posts were published by 36 MVPs focusing on tips and tricks, as well as troubleshooting Windows 10 as part of the Windows 10 Premium Support Program. The Windows 10 Premium Support Program has gained 1.6 million page views so far.

    And this is just scratching the surface: Windows 10 Game Jams recently concluded, and Windows 10 developer hackathons are going on right now. We’ll keep you posted on how MVPs continue to help Microsoft customers around the world make the most of Windows 10.


    Editor’s note: The following post was written by Office 365 MVP Nuno Silva as part of our Technical Tuesday series.

    Mobile Device Management for Office 365 (MDM for Office 365), integrated with Azure Active Directory, is an enterprise-level identity and access management cloud solution. Built on top of the core offering of Office 365, MDM for Office 365 provides a robust set of capabilities to empower enterprises with more demanding needs around identity and access management of their mobile devices.

    MDM for Office 365 offers a new layer of capabilities that raises the security of an organization’s devices. In the era of bring your own device (BYOD), every organization should consider using these technologies to help keep its business data secure.

    The new MDM capabilities are:

    • Help secure and manage corporate resources with conditional access—Apply security policies on devices that connect to Office 365 to ensure that Office 365 corporate email and documents are synchronized only on phones and tablets that are managed by your company.
    • Apply mobile device settings—Set and manage security policies such as device-level pin lock and jailbreak detection on devices to help prevent unauthorized users from accessing corporate email and data when a device is lost or stolen.
    • Perform a selective wipe of Office 365 data—Remove Office 365 corporate data from a device when an employee leaves your organization, while leaving their personal data, photos, and apps intact.
    • Preserve Office 365 productivity experience—Unlike third-party MDM solutions that have replaced productivity apps with restrictive all-in-one apps for corporate email, calendars and documents, MDM for Office 365 is built directly into the productivity apps your employees know and love. You can set access policies to help secure company data while keeping employees productive.
    • Manage policies with ease—Administer mobile device policies directly from within the Office 365 administration portal, through an easy-to-use interface with wizard-based setup. View reports on which devices are connected to Office 365 and identify devices that have been blocked due to non-compliance.
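    The conditional-access idea in the list above can be pictured as a simple policy check: a device must be enrolled and compliant before corporate data is allowed to sync. The sketch below is purely illustrative; the attribute names and rules are hypothetical, not the MDM for Office 365 implementation.

```python
# Illustrative sketch of a conditional-access decision: a device must be
# enrolled in management and compliant with the security policy before
# corporate email and documents are allowed to sync.
# All names and rules here are hypothetical, not the MDM for Office 365 API.

def allow_sync(device):
    """Return True if the device may sync corporate data."""
    if not device.get("enrolled", False):
        return False                      # unmanaged devices are blocked
    if device.get("jailbroken", False):
        return False                      # jailbreak detection
    if not device.get("pin_lock", False):
        return False                      # device-level PIN lock required
    return True

devices = [
    {"name": "managed tablet", "enrolled": True, "jailbroken": False, "pin_lock": True},
    {"name": "personal phone", "enrolled": False},
]

for d in devices:
    print(d["name"], "->", "allowed" if allow_sync(d) else "blocked")
```

    Real deployments express these rules as policies in the Compliance Center rather than code, but the decision the service makes per device is of this shape.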

    These capabilities will be included with all Office 365 commercial subscriptions, including Business, Enterprise, EDU and Government plans.

    Advanced mobile device and application management with Microsoft Intune
    The capabilities built in to MDM for Office 365 are powered by Microsoft Intune, Microsoft’s comprehensive device management and app management solution for phones, tablets and PCs. Organizations that need management and protection beyond what’s included in Office 365 can subscribe to Intune and get additional device and app management capabilities, including:
    • Mobile application management—Enable your workforce to securely access corporate information using Office mobile apps while protecting your company’s data by restricting actions such as copy/cut/paste/save in your managed app ecosystem. Intune also extends these capabilities to existing line-of-business apps with the Intune app wrapper and enables secure viewing of content using the Managed Browser, PDF Viewer, AV Player and Image Viewer apps.
    • Manage devices from the cloud, or integrate with existing on-premises System Center Configuration Manager—Intune can manage devices from the cloud, without any infrastructure requirements, or Intune can be connected to System Center 2012 Configuration Manager to manage all of your devices including PCs, Macs, Unix/Linux Servers and mobile devices from a single management console.
    • Comprehensive mobile device management—Deploy certificates, Wi-Fi, VPN and email profiles automatically once a device is enrolled, enabling users to access corporate resources with the appropriate security configurations. You also have the ability to bulk enroll corporate devices to set policies and deploy applications on a large scale and can provide your users with a self-service Company Portal where they can enroll their own devices and install corporate apps.

    The built-in Mobile Device Management for Office 365 can help you secure and manage your users' mobile devices, such as iPhones, iPads, Android devices, and Windows phones. You can view an inventory of all enrolled devices that connect to your organization, create and manage device security policies, remotely wipe a device, and view detailed device reports. To get started, follow the steps in the following article to activate and set up Mobile Device Management for Office 365.

    Here is a summary of the configuration steps:

    • Go to Mobile Devices and click “Get Started”




    • After a few moments, the dashboard will look like this

    • Next, configure your DNS: click Manage Settings, then complete the setup section for each required step
    Note: If you do not have iOS devices, you do not need to configure that section

    • After configuring the DNS entries, go to the Compliance Center to configure the device policy

    • Click the + sign to create a device policy, and give it a name and description


    • Select the desired options for the security policy

    • After the policy is configured, the dashboard will show its details

    Note: Wait a few moments for the status to change to “On”.

    After configuration, follow the steps on each device to authorize access.

    The enrollment process on a Windows phone looks like these screenshots:

    In Windows 10, the screenshots are:

    Note: Users with Android or iOS devices are required to install the Company Portal app as part of the enrollment process.

    After all these steps are complete, your organization is ready to take advantage of the security features of MDM for Office 365.

    You have now configured MDM for Office 365 and can manage your devices from the Mobile Devices dashboard.

    You can also see the devices of a user on your Azure AD.
    Note: This is the only option if you want to delete a device that is not visible on the Mobile Devices dashboard in the Office 365 portal.

    Summary - Mobile Device Management for Office 365 (MDM for Office 365) helps reduce organizational risk and enables regulatory compliance by adding an extra layer of security on a user’s devices, in addition to the user’s account credentials. It controls which devices can connect to the service through security policies, and it can be complemented with Intune to raise the level of security and compliance even further.

    MDM for Office 365 is becoming the new way for organizations to secure access and ensure trust in a multi-device, mobile, cloud world.

    Final Note:

    Microsoft is currently in the process of updating the service so that Windows 10 devices can connect to Azure AD. Once these updates are available, you will be able to manage your Windows 10 devices with MDM for Office 365.

    Introduction to Windows 10 and Azure Active Directory

    Windows 10 includes features for configuring and deploying corporate-owned Windows devices, allowing users to join those devices directly to your Azure AD. With Azure AD Join, the user registers the device in the directory so that it is visible and can be managed by the organization. Windows authenticates directly to Azure AD; no domain controller is needed.

    The scenarios that organizations may use are:
    • Users will be able to join their work Windows 10 devices directly to Azure Active Directory and sign into Windows using their Azure Active Directory account and password – while still having single sign-on access to Office 365 and to on-premises services that leverage Active Directory authentication. The devices can be automatically enrolled into a mobile device management (MDM) service at the same time.
    • Users will be able to add their work account to their personal (BYOD) Windows 10 devices, establishing a link between Windows and their work account managed in Azure Active Directory, which will provide single sign-on access to the organization’s services. The devices may also be automatically enrolled in mobile device management (MDM).
    • Traditional PC devices, joined to an existing Active Directory domain, will have single sign-on access to cloud-based services like Office 365, the Windows Store, or any other Azure Active Directory-aware application. Windows 10 understands that the Active Directory account is associated with a synchronized Azure Active Directory account. (Device management continues to be provided using Active Directory Group Policy and System Center Configuration Manager.)

    These scenarios provide organizations a great deal of flexibility – choose the scenario that makes the most sense for each device.

    If your organization needs more features than MDM for Office 365 provides, you can use Microsoft Intune (see the comparison here). Microsoft Intune is part of the Microsoft Enterprise Mobility Suite and adds further management capabilities for devices and applications, making your organization more secure.

    Support Links:

    Overview of MDM for Office 365

    MDM Office 365 
    Explore the built-in Mobile Device Management (MDM) feature for Office 365
    FAQs for MDM for Office 365

    About the author

    Nuno is a manager at GFI and a specialist in Office 365, with a focus on Exchange, virtualization, Azure and System Center. With more than 17 years’ experience in datacenter architectures and a master’s degree in Information Technologies, Nuno holds 30+ certifications (MCSE, MCITP, MCSA and MCTS among others). Experienced in enterprise environments, he has worked across several industries. He has also assisted Microsoft in the development of workshops, special events and case studies, and has served as a speaker at several Microsoft events. Follow him on Twitter @nunoariassilva.


    Germany celebrated the launch of Windows 10 with a Windows 10 Fan Celebration Party that took place on July 29 at the Microsoft office in Berlin. The German Windows Business Group asked the DACH MVP team for support by inviting 15 MVPs to this exclusive event. The broader technical community and fans were the main audience, with the chance to attend the Open House from 10 A.M. until 3 P.M.

    The party started with a warm welcome speech from Germany general manager Alexander Stüger, followed by a speech from public relations lead Thomas Mickeleit. Windows business group lead Oliver Gürtler then took the stage to deliver a live demo of Windows 10, explaining the new features and the advantages of updating. At the end of his session, he thanked the Windows Insiders for their support, expressed his thanks to all MVPs, and asked MVPs Peter Nowak and Lars Heinrich and MSP Alexander Faust to join him on stage for a quick chat about the personal advantages and benefits they have experienced from using Windows 10 in their day-to-day business as developers. Peter Nowak also attended the Open House as an Ask the Expert volunteer, answering fans’ questions.

    After the press conference, three breakout sessions were held for the press, including an IoT session and a hardware session where all the hardware partners answered questions about their hardware running on Windows 10. At 6 P.M., with 500 exclusive attendees, the evening event started in a vibrant atmosphere of technology, networking, and fun.

    You can watch the event video here.




    Editor’s note:
    The following post was written by Skype for Business MVP Christophe Boucetta as part of our Technical Tuesday series.

    Announced during the last Microsoft Ignite, Skype Meeting Broadcast is presently available in preview.

    Skype Meeting Broadcast allows organizations to host events for up to 10,000 people and have them view the meeting live online via any browser. This feature enables you to schedule, produce, and broadcast meetings or events to large online audiences. Attendees can join from anywhere, on any device, and engage in the meeting.

    Skype Meeting Broadcast is an online service that is part of Office 365.

    If your organization has Skype for Business Server on-premises and you would like to use Skype Meeting Broadcast, you need an online tenant and a running hybrid configuration.

    At a high level, every Skype Meeting Broadcast is simply a Skype for Business Online meeting running in a dedicated meeting pool created specifically for broadcast meetings.

    Skype Meeting Broadcast introduces a new role: the producer, who controls what the audience sees during the broadcast. In a broadcast meeting, the audience only sees what the producer shows; the producer is the director of the broadcast. There can be multiple presenters in a single broadcast, with the producer selecting whose audio and video are shown.

    Attendees will get the media stream that is close to them geographically, which reduces the network footprint.

    Microsoft supports any browser, whether on a phone, a tablet or a PC, as long as it is an HTML5-compatible browser.


    The producer does not need to learn another tool; they just use the Skype for Business 2015 client with the June PU. Authenticate to the broadcast URL by selecting to sign in as an event team member. It looks just like any other Skype for Business meeting. Then add video or some content to get started.

    The Skype Meeting Broadcast portal is where you schedule a meeting. You need to connect with your account to this link:



    Several roles exist for producing a Skype Meeting Broadcast. These include:

    • Organizer of the broadcast: Using the portal, schedules the meeting request,
      determines who can be a producer or presenter, and invites attendees.
    • Producer: A member of the event team responsible for starting and stopping
      the broadcast meeting, and for controlling whatever media is broadcast.
    • Event team member: Participates in the meeting as a presenter.
    • Attendee: Watches the event online.

    Scheduling a broadcast meeting is easy: after logging in to the portal, click New Meeting.



    This is the default page after the meeting is created:



    It’s now time to customize the meeting: use the edit option to add a title and date and time information.


    In the attendee section, you can select who can watch this event:



    Presently, three levels of access are offered, but only two can be used (the third is
    unfortunately not available during the tech preview):

    • Invitation only: Only those users who have been invited by the broadcast service can attend. You can invite them individually by name or as a group using distribution lists.
    • Open: Anyone from your company can attend by clicking the join link. They'll use their company user name and password to attend.
    • Anonymous: Anyone with the join link can attend, even if they are external to your company.
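    The three access levels amount to a simple admission check at join time. The sketch below only illustrates the idea; the attribute names are hypothetical, and this is not the actual Skype Meeting Broadcast service logic.

```python
# Hypothetical sketch of the three attendee-access levels described above.
# Attribute names are invented for illustration; this is not the real
# Skype Meeting Broadcast implementation.

def can_attend(access_level, user):
    """Decide whether a user may join a broadcast at a given access level."""
    if access_level == "anonymous":
        return True                                  # anyone with the join link
    if access_level == "open":
        return user.get("company_account", False)    # anyone from your company
    if access_level == "invitation":
        return user.get("invited", False)            # invited users or groups only
    raise ValueError("unknown access level")

print(can_attend("anonymous", {}))                          # True
print(can_attend("open", {"company_account": True}))        # True
print(can_attend("invitation", {"company_account": True}))  # False
```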

    In the event team section, you can choose which users will conduct the meeting.




    In the customize event section, you can add features such as app add-ins with which the audience can engage and participate in the meeting, a customized help URL, or a custom link to content related to the event.




    At the end of the process, you can now see the new event in your portal.



    You can sign in to the broadcast using the meeting join link. It’s the same link for attendees and event team members.



    You’ll need to accept the plugin.


    Connected as an event team member, you’ll need to activate a video or add content to start the broadcast meeting. If you have multiple presenters, you can activate feeds by right-clicking the feed you want and choosing Make Active Video. You can have one feed providing audio and another providing video.




    You can upload any PowerPoint decks that will be presented during the meeting.



    You have multiple broadcast layout choices for starting the meeting: video only, content only, or both.




    After the PowerPoint is uploaded, you can start the broadcast meeting.






    You can stop at any time with the Stop Broadcast button.



    More information is available in the Microsoft Ignite session “Skype for Business: Doing Very Large Online Meetings Very Easily!”:

    Presently in preview, this new meeting service may gain additional features before its general availability.


    About the author


    Christophe discovered the Microsoft UC environment with Office Communications Server 2007 and found it simply fantastic. He wanted to be among the first to fully master this technology, so he created his popular blog to share his passion.


    On July 29th, Microsoft launched Windows 10, the new version combining the classic interface of Windows 7 with the new design of Windows 8. It creates a versatile environment that adapts to various screen sizes while offering the traditional desktop experience with keyboard and mouse. With that in mind, the Brazilian community has started to engage and share its knowledge around Windows 10 through events, articles and online engagements.

    Windows Experience MVP Aurélio Baboo has focused primarily on Windows 10 since June 2015. Within 48 hours of the Windows 10 launch, he provided about 712 answers regarding the launch and adoption of Windows 10.

    Windows Platform Development MVP Alexandre Zollinger Chohfi has presented at several Windows 10 events. He participated in the Win10 Developer Readiness event along with .NET MVP Rodolfo Fadino, where they both presented on “how to develop apps for Windows 10 using the new Universal Windows Platform.” The content presented is available on Channel 9. Congratulations to Alexandre, who has already been confirmed as a speaker at QCon Rio de Janeiro, a third-party event for architects and business decision makers, where he will present on developing apps for Windows 10.

    Windows IT Pro MVPs Alexandro Prado and Eduardo Sena are organizing Windows IT Pro Readiness, an event for IT professionals and students who want to learn more about Windows 10, virtualization, deployment, security and the overall end-user experience. Alexandro is also involved with the MS Weekend organization, which brings all Microsoft technologies to a single event. Eduardo is also actively speaking at in-person events across Brazil and preparing material for his blog; he is one of the most active speakers at Quintas de TI, an event that takes place throughout the year with online webcasts every Thursday at 11 p.m.

    Windows Experience MVP Jamil Lopes, also a Microsoft Regional Director, presented the new features of Windows 10 at Microsoft’s Windows 10 Launch Event which attracted over 600 people. This event was also a benefit which collected over one ton of food to be donated to charities supporting families during the holidays.

    Windows Experience MVP Idevar Jr has updated his blog with topics related to Windows 10 and troubleshooting. He will also speak at the Windows 10 IT Pro Ready event, introducing the new features of Windows 10.

    Want to know more about the amazing things MVPs are doing with Windows 10? Please visit our official web site and social channels and learn more!


    Editor’s note: The following post was written by .NET MVP Sara Silva as part of our Technical Tuesday series

    To use Microsoft Azure it is necessary to link a credit card to the account; otherwise it is not possible to create a new account, which in some cases prevents new clients from trying Azure. To help with this, Microsoft released a free trial service with the goal of providing a simple way to learn and test Azure App Service without an account. In this article I will show you how to create and test Web Apps and Mobile Apps using this service.


    Microsoft Azure is a cloud computing platform and infrastructure, created by Microsoft, for building, deploying and managing applications and services through a global network of Microsoft-managed datacenters. One of the services provided by Microsoft Azure is the Azure App Service, a fully managed platform as a service (PaaS) that integrates Microsoft Azure Websites, Mobile Services, and BizTalk Services into a single service and adds new capabilities enabling integration with on-premises or cloud systems.

    Figure 1 Different applications available from Azure App Service
    Note: To learn more about Azure App Services, check my previous article Introduction to Azure App Services.

    For those who do not have an Azure account, it is possible to try Azure App Service through this trial service, which allows you to create Web Apps, Mobile Apps, Logic Apps and API Apps at no cost and does not require a credit card. The service has some limitations because its main goal is testing Azure App Service; for that reason, each app is only available for an hour.
    Creating a new Web App

    To create a new Web App, go to the trial service and select the application type “Web App,” as described in figure 2:
    Figure 2 Creating Web App from Azure App Services without an Azure Account
    After this, it is required to select the language and the template: you can choose an ASP.NET (C#), Node.js, HTML5, Java or PHP website.

    Figure 3 Choosing the language and the template used by the Web App
    To finalize the process, authentication is required using one of the social networks provided, as described in figure 4:
    Figure 4 Authenticating using social network
    Then, the Web App will be created, and all the resources required for developing, deploying and testing it will be provided. As mentioned earlier, this Web App will expire within an hour, as described in figure 5:

    Figure 5 Details provided to work with Web Apps
    Note: With an Azure Trial Account, it is possible to create 10 free apps of any type for 30 days. To get more information have a look at this reference.

    Now it is possible to edit the source code provided by the template in Visual Studio Online “Monaco,” as described in figure 6:
    Figure 6 Editing the source code in Visual Studio Online “Monaco”
    And it provides the Web App content based on the selected template, as described in figure 7:

    Figure 7 The source code provided

    The publishing profile and the git reference are provided when a Web App is created through the trial service, allowing you to push the Web App to the server using Visual Studio or git, respectively. Lastly, it is possible to manage the service in the Azure Portal, as described in figure 8:

    Figure 8 Managing the service in the Azure Portal

    Using the publishing profile

    In the scenario described above there is no Azure account, which is what is generally used to deploy the application to the server. An alternative is to use the publishing profile, which is safer; it defines all the data required to perform a deployment from Visual Studio.

    After creating a new Web App using Visual Studio 2015, as described in figures 9, 10 and 11:

    Figure 9 Creating a new ASP.Net Web Application

    Figure 10 Selecting the MVC Template
    Figure 11 The ASP.Net Web Application
    It is possible to publish it to the server: select the project and choose “Publish” from the context menu, as described in figure 12:

    Figure 12 Publishing the Web App
    The result is a new window that allows you to import the publishing profile, as follows:
    Figure 13 Selecting “Import”
    Figure 14 Importing the profile
    After loading the profile, the “Publish Web” window will be displayed, as described in figure 15:
    Figure 15 Publish Web
    In this window, it is possible to verify the connection with the server before publishing (the profile expires an hour after it is created).

    Figure 16 Publish Web
    During the publishing process, it is possible to follow the deployment; the “Azure App Service Activity” window shows the output and the details of the process, as described in figure 17:

    Figure 17 The Azure App Service Activity
    When the deployment is finished, the default browser opens, showing the published Web App, as described in figure 18:

    Figure 18 The Web App published
    Creating a new Mobile App

    To create a Mobile App, go to the trial service and select the application type “Mobile App,” as described in figure 19:

    Figure 19 Creating Mobile App from Azure App Services without an Azure Account

    Then, select the template for the Mobile App; in this case the selected template is the .NET backend template, as described in figure 20:

    Figure 20 Choosing the template for the Mobile App
    Note: Mobile Apps and Web Apps based on ASP.NET Web Applications are the same; the main difference is that Mobile Apps provide an SDK with features focused on mobile applications. In this way, it is possible to see Mobile Apps as an extension of Web Apps.

    For the client application it is possible to select a Windows, Native iOS, Xamarin or Web client app, as described in figure 21:

    Figure 21 Choosing the mobile client app
    In the end, it provides all the same resources as shown for Web Apps, because both use ASP.NET and the publishing process is the same.

    Figure 22 Details provided to work with Mobile Apps
    Note: When we open the service URL in the browser, a default page is shown where it is possible to select to try it out. Unlike Mobile Apps created using an Azure account, in this case the service does not require the application key to see the help page.

    Running the Windows client app

    The Windows client application provided is a Universal App (figure 23), in which it is possible to see the client implementation for the Mobile App: inserting, updating and refreshing the data.

    Figure 23 The Universal App provided
    To see which endpoint is used, have a look at the App.xaml.cs file, which contains this code snippet (the argument values below are placeholders; the generated project contains the actual values for your Mobile App):

    // This MobileServiceClient has been configured to communicate with the Azure Mobile Service and
    // Azure Gateway using the application key. You're all set to start working with your Mobile Service!
    public static MobileServiceClient MobileService = new MobileServiceClient(
        "https://<your-mobile-app>.azurewebsites.net",   // placeholder: Mobile App URL
        "https://<your-gateway>.azurewebsites.net",      // placeholder: gateway URL
        "<application key>");                            // placeholder: application key

    This means the client app provided is already configured with the correct endpoint, so it is not necessary to define it. When running the application, it is possible to execute the provided actions: insert, update and refresh.


    When the Mobile App was created, it was published with the “Todo Item” service, whose content was provided in the last step. For those who want to create a Mobile App using Visual Studio, it is possible to publish it using the publishing profile, as was done for the Web App.


    In conclusion, the trial service allows any developer to try Mobile Apps and Web Apps without an Azure account, which makes it very easy to learn and then test them.

    About the author
    Sara lives in Portugal and studied mathematics at the University of Coimbra. She now focuses on Windows 8 Store, Windows Phone, Xamarin and Azure development for projects within the operational planning of public transport, logistics, payments and multimedia.


    Windows 10 has created enormous excitement among the global technical community, and MVPs have been staging fan parties and launch events around the world.
    As part of the celebration, MVPs in India hosted their own online jamboree and Windows 10 MVP Tweetathon to showcase their passion for Windows 10.
    On August 19th, starting at 4:00 PM India time, 25 Indian MVPs from various Windows specializations, including experience, IT pro and development, started sharing tips, tricks, feature insights, troubleshooting steps and much more on Twitter with the hashtags #WindowsMVP and #Windows10.
    For more than four hours, the community delivered hundreds of technical tweets, which were well received by more than 65,000 unique direct audience members around the world and generated more than 841,000 impressions.
    Windows 10 has endless possibilities and there is still a lot to be explored. We’ll be sharing more MVP-driven Windows 10 initiatives and experiences!



    Editor’s note: The following post was written by SQL Server MVP Martina White as part of our Technical Tuesday series.

    SQL Azure is a compelling platform for data storage. It's cheap, reliable, and highly accessible. However, it's not identical to SQL Server on-premise, and there are a number of things to be aware of when considering a move from one platform to the other, or simply adopting it. This article will walk through what the various SQL Azure options are, outline the processes involved in moving data from on-prem to Azure, and call out some "gotchas", or things to be aware of when operating a SQL Azure database.

    Azure SQL is SQL Server behind the scenes, so most of the functionality is already there, including tables, views, stored procedures, triggers, functions, primary and foreign keys and clustered indexes. Of course there is no Windows authentication; it currently uses SQL authentication only.

    SQL Azure allows for logical rather than physical administration. There is no need to maintain, balance, upgrade or patch the server, as this is all done by Microsoft. You have no control over the physical database and log files and cannot get to them; therefore, you cannot detach and attach databases. You also can't reboot the server, so if you end up with a runaway query you may have to open a support ticket, since killing the session doesn't always work.

    Most tools work with SQL Azure databases, including:
    • .NET Framework via ADO.NET; C / C++ via ODBC
    • Java via the Microsoft JDBC provider
    • PHP via the Microsoft PHP provider
    • Frameworks such as OData, Entity Framework, WCF Data Services and NHibernate

    Your application talks to the primary database, but there are actually three copies kept in sync at all times for high availability and disaster recovery. Should your database go down, one of the secondary copies will be promoted to primary and a new third copy will be created.

    Tables in a SQL Azure database are required to have a clustered index. This is necessary to keep the three copies of the database in sync.
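As an illustration, a minimal table definition that satisfies this requirement might look like the following (the table and column names are hypothetical):

```sql
-- Hypothetical table: the PRIMARY KEY CLUSTERED constraint provides
-- the clustered index SQL Azure needs to keep the copies in sync.
CREATE TABLE dbo.Customer
(
    CustomerId INT NOT NULL,
    CustomerName NVARCHAR(100) NOT NULL,
    CONSTRAINT PK_Customer PRIMARY KEY CLUSTERED (CustomerId)
);
```

At the time of writing, creating a heap (a table without a clustered index) succeeds, but inserts into it will fail until a clustered index is added.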

    The maximum SQL Azure database size is currently 500GB, but you can get around this by using SQL Federations to partition your data across multiple nodes.

    There are a number of partially supported and unsupported features. A few of the ones I run into regularly are:
    • You cannot use the USE [databasename] SQL statement. Your application must connect directly to each database it works with.
    • The NOT FOR REPLICATION clause must be removed from indexes.

    You can review a full list of unsupported features here:

    SQL Azure database does use Transact-SQL. There are a number of ways to connect to the database, including the Windows Azure Management Portal, SSMS (2008 R2 or newer), or command line utilities such as SQLCMD and BCP. You can move data to and from your SQL Azure database using anything that can connect to the database, such as SSIS or SSMS. When it comes to migrating your database to SQL Azure there is a great article on what to consider, here:

    When I migrate a database from SQL to SQL Azure, I typically follow this process using SSMS:

    • Create a blank database on the SQL Azure database server
    • Generate the scripts from the original database to create the database objects, excluding users
    • Do a find and replace to remove any unsupported features such as the two mentioned above
    • Run the create database object scripts against the new SQL Azure database
    • Create the users and apply permissions for the new database
    • Use SSMS or SSIS to copy the data over to the new database.
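The final data-copy step can also be done with the BCP utility mentioned earlier. A rough sketch, where the server, database, table, login and file names are all placeholders:

```shell
# Export from the on-premises database to a native-format data file
bcp MyDatabase.dbo.Customer out customer.dat -n -S OnPremServer -T

# Import into the SQL Azure database (note the user@server login format)
bcp MyDatabase.dbo.Customer in customer.dat -n -S myserver.database.windows.net -U admin@myserver -P "<password>"
```

BCP is handy for large tables because it streams data in native format rather than generating INSERT statements.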

    The SQL Database Management Portal is a web-based, scaled-down version of SSMS. You can create objects, run queries and view execution plans. But there is no GUI for some of the security features, like creating users and logins. I find it a friendlier experience to create the database server in the portal and do everything else using SSMS.


    SQL Azure databases are protected by an automatic backup system. The length of time the backups are retained depends on what tier you buy – 7 days for Basic, 14 days for Standard and 35 days for Premium. The point-in-time restore is a self-service feature that costs you nothing unless you use it. If you use it, you pay regular rates for the new database that gets restored. You get all of the protection without any additional cost.


    You are in complete control of IP-specific access to SQL Azure Database, at both the server AND database level. No one has access by default. At the server level, you go into the Management Portal and add your IP in order to connect remotely to the database. Or, if you prefer a more granular level of security, you can add a firewall rule at the database level to allow specific IPs to connect. Yes, every time your IP changes, you have to update your firewall rules.


    There are three tiers, with several levels of performance within them. I will summarize the Microsoft definitions.
    • Basic: Best suited for a small size database, supporting typically one single active operation at a given time.
    • Standard: The go-to option for most cloud applications, supporting multiple concurrent queries.
    • Premium: Designed for high transactional volume, supporting a large number of concurrent users and requiring the highest level of business continuity capabilities.

    Within the tiers are various performance levels. The definition of a performance level includes something called a Database Throughput Unit (DTU). DTUs provide a way to describe the relative capacity of a performance level of Basic, Standard, and Premium databases. DTUs are based on a blended measure of CPU, memory, reads, and writes. As DTUs increase, the power offered by the performance level increases. For example, a performance level with 5 DTUs has five times more power than a performance level with 1 DTU. There is more involved, such as maximum concurrent requests, logins and sessions, but at a high level these are the performance levels from which you can choose:

    You'll want to monitor and play with the levels to find the optimum for your needs. There is a Monitor tab for each database in the Azure Portal where you can review performance. You can easily change the performance level on the fly. If you are anticipating heavier use, you can raise the performance level for a period of time and then drop it back down when the heavy use period is over. You only pay for the performance level while it is selected.


    The pricing varies greatly depending on how many databases you have, how long they have been up, how big they are, what performance level you select, etc. Microsoft advertises no upfront cost, no termination fees, pay only for what you use, and pay by the hour.

    Costs can range anywhere from $7 per month for the Basic tier, $19 - $183 per month for a 250GB database in the Standard tier, to $566 to $8500 per month in the Premium tier.



    Create a Windows Azure Platform account
    If you don't already have one, you'll need to create a Windows Azure Platform account; there is a one month free trial on offer. This will give you access to all the Azure services including VMs, SQL databases, websites, AD, etc. To sign up, sign in to the Microsoft Azure Portal.

    Create a Database Server
    When you are ready to create a SQL Azure database, you must first add a server to your subscription on which the database will reside in the cloud. Azure will provision it to the least busy server available in the cloud. The health of the physical servers is monitored and maintained by Microsoft; you don't manage these servers yourself. You don't need to wait for I.T. to provision a server for you, which could take weeks. Simply sign in, create a server and the admin login credentials, and start adding databases. You can have a server and databases set up in a matter of minutes. To automate server provisioning you can also use PowerShell or REST API calls directly.

    Below is an image of the Microsoft Azure Portal. Notice that we are looking at the SQL Databases, Servers tab. You can add a new server by clicking the add button at the bottom of the screen.

    • Log in to Windows Azure Management Portal.
    • Create a SQL database server.
    • Create admin login credentials for that server. This will be used like the SA account.
    • Choose the subscription to which you want to add the server, and choose the region of the data center where you would like the server to reside.
    • You do not get to choose the name of the server. It's randomly generated.
    • Decide if you will "Allow Windows Azure Services"; the default is yes. If you were to create a Windows Azure website or cloud service, those services would have access to the SQL Azure database server and databases. You can edit this setting on the server's Configure tab if you change your mind later.
    • Click OK. The server will be created in a matter of seconds.

    Add Firewall Rules at Server Level
    Once you have created the server, you will grab the name of the server and will likely want to connect to it using SSMS or some other tool. Your connection will fail until you add your IP address to the firewall rules.

    To add firewall rules and enable service access:
    • Click on the database server you created.
    • Select the Configure tab.
    • You will see your current IP address near the top of the screen.
    • Add a rule to allow the IP address range that will be connecting to the database, and give it a name.


    Now you can connect via the automatically generated FQDN, which initially contains only the master database.

    Create a Database

    When you create a new database there are a number of settings from which to choose, including Service Tier and Performance Level.

    Note: There is an option to select WEB or BUSINESS edition, but this will be retired in September 2015.
    Edition: Web (select 1GB or 5GB) or Business (select 10GB to 150GB)

    If you create a database in SSMS, it will automatically create a 1GB Web edition database unless you specify otherwise. To create a database in SSMS you can run this SQL script:
    CREATE DATABASE <yourdatabasename> (EDITION = 'Business', MAXSIZE = 150GB)

    • Key in your database name
    • Choose which subscription you are adding this database to
    • Choose the Service Tier
    • Choose the Performance Level
    • Choose the Collation
    • Choose on which database server you want the database to reside
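These choices can also be expressed in T-SQL. A sketch of creating a database in one of the newer service tiers (the database name and service objective are examples):

```sql
-- Create a Standard-tier database at performance level S1
CREATE DATABASE MyAppDb
(
    EDITION = 'Standard',
    SERVICE_OBJECTIVE = 'S1',
    MAXSIZE = 250GB
);
```

Run this against the master database of your server; the EDITION and SERVICE_OBJECTIVE values map to the tiers and performance levels described above.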


    Add Firewall Rules at Database Level:
    You can add firewall rules at the database server level or at the individual database level. At the database level, use the stored procedure sp_set_database_firewall_rule.

    Run this SQL command against your database:
    EXEC sp_set_database_firewall_rule N'MyRule', '',''

    To look up existing database firewall rules use this query:
    SELECT * FROM sys.database_firewall_rules

    There are a number of differences and adjustments you may need to make to move to SQL Azure, from unsupported SQL features to managing firewall security. If these are not an issue for you, there is great opportunity to leverage the ease, scalability and reliability of SQL Azure databases. You pay only for what you use and someone else manages the server maintenance, upgrades and backups. It's a sweet deal.

    About the author
    Martina is a senior consultant in business intelligence. She has 15 years of experience spanning agriculture, healthcare, insurance, education, government, utilities, spirits and wines, retail, manufacturing and software industries in Canada and the U.S., in all aspects of the project life cycle from requirements gathering and strategy to solution development. Martina holds the Microsoft Most Valuable Professional (MVP) status in the SQL space. Her blog is widely followed with 200,000 visitors per year and growing. Her focus is currently in business information solutions including requirements gathering, recommendations on best approach, data warehouse design, cube development and reporting solutions development. She is recognized as a subject matter expert in the Microsoft SQL Server product suite. She also holds PMI’s Project Management Professional (PMP®) credential since 2005.


    0 0

    Annually providing top-notch technical sessions and workshops to 100 university students around Turkey, Microsoft Turkey Summer Camp kicked off on August 3rd at the Microsoft Turkey Istanbul office and will run until the end of the month. Also called the Open Academy Summer Camp, this event provides an opportunity for students to take a first prominent step on their career paths. Students had the opportunity to meet industry leaders and possibly open doors to their desired occupations.
    With an agenda focused on the “Mobile First, Cloud First” motto and consisting of technologies such as Azure, .NET, SQL Server, and Windows 10 application development, this year Microsoft Turkey also gave students who could not join the camp in person an opportunity to attend business sessions delivered by members of Microsoft. Technical sessions focusing on Office 365, Microsoft Azure, .NET development and SQL Server were delivered by Microsoft Azure MVP Daron Yondem, Windows Platform Development MVP Emrah Uslu, Windows Platform Development MVP Engin Polat, SQL Server MVP Osman Cokakoglu, and SharePoint Server MVP Burak Batur. As the event was streamed online, it successfully reached an audience of 55,000 in 26 days.
    At the end of the 26-day Summer Camp, students had the opportunity to present their final projects to industry leaders, Members of Microsoft, MVPs, and the press. The top five teams were given Lenovo Yoga 13, Dell Venue 8, Acer Iconia W3, Nokia Lumia 820, and Nokia Lumia 720 as prizes for successfully completing their projects.



    0 0

    Editor’s note: The following post was written by .NET MVP Troy Garner as part of our Technical Tuesday series.

    The phrase “The Cloud” is usually regarded as a mystery to many at first, yet with the increasing demand for information synchronized across various devices and platforms, services like OneDrive have become popular solutions for centralized data storage. Whether or not you know how the cloud works, its inner workings are usually not discussed in much detail because implementations can vary, and this might be why it is such a big mystery to some. The phrase itself only has a general definition.

    Recently, Microsoft released the OneDrive REST API, which provides developers with a flexible and scalable interface to interact with its cloud-based hosting service. This means bigger and better opportunities for data manipulation across virtually any platform, with little to no headache.

    What is REST?
    Although this article is not specifically about REST, I believe it is important to have an abstract understanding of the fundamentals of REST to become familiar with what the OneDrive API is all about.
    “REST” is an acronym related to an architectural design principle, which stands for “Representational State Transfer.” It’s implemented within a stateless, client-server, cacheable communications protocol – which in most cases is HTTP. The behavior of the “RESTful” design is geared towards networked applications with the idea that HTTP can sort of “mask off” the complexities of other alternative mechanisms to make calls between machines.

    Benefits of the OneDrive REST API?
    A few advantages of REST are that it is fully featured, lightweight, and easy to use. Of course, I am not here to talk to you entirely about what REST is and why it is used, but rather to explain why this promotes great opportunities for OneDrive integration. Here are a few good points to get you started:

    1. Because this API is “RESTful” you are not limited to any specific platform or language of choice, as long as you have the ability to communicate with the API over HTTP.
    2. The API provides a secure authentication layer (OAuth 2.0), and by default, all resources will be returned with secure HTTPS URLs.
    3. Permissions are “scoped” and user consent is requested before access is granted to any portion of OneDrive through the REST API.
    4. JSON is used for the HTTP requests and responses, which is nice because JSON is a very minimalistic data structure, and this especially helps lighten the load of incoming and outgoing requests over a network.
    5. Throttling limits for OneDrive requests ensure that the experience for others is not degraded, but the response for an exceeded limit provides a convenient “Retry-After” header which specifies when the rejected request's timeout period will expire. (Note: As a developer, you'll know that you've exceeded the limit if you receive a 429 HTTP response code, which indicates “Too Many Requests.”)
    6. All of the most common HTTP verbs are supported (GET, POST, PUT, DELETE, MOVE, COPY), and in cases where a verb is blocked, or cannot be used for some reason, verb tunneling through a POST request can still be used to allow a specific verb to be used.
    7. Large file upload support (up to 10GB - chunked upload), upload from URL, CORS support, resumable downloads, and more…
    Additionally, samples are already freely available for JavaScript, iOS, and Android developers so that non-Windows developers (and web developers) can feel at home if they want to take advantage of the large set of features the OneDrive API provides.
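Because the API is platform- and language-agnostic, point 5 above can be sketched in a few lines of Python for illustration (the helper function and default delay are hypothetical, not part of any OneDrive SDK):

```python
def seconds_to_wait(status_code, headers, default_delay=5):
    """Return how long to pause before retrying a throttled request.

    A 429 ("Too Many Requests") response carries a Retry-After header
    giving the number of seconds to wait; other statuses need no wait.
    """
    if status_code != 429:
        return 0
    try:
        return int(headers.get("Retry-After", default_delay))
    except ValueError:
        # Retry-After may also be an HTTP date; fall back to a fixed delay
        return default_delay

# A throttled response asking the client to retry after 30 seconds
print(seconds_to_wait(429, {"Retry-After": "30"}))  # prints 30
```

The same pattern applies in any language: check for 429, read Retry-After, and pause before resending the request.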

    An example response for retrieving the default drive is shown below: 
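The original screenshot is not reproduced here, but a representative JSON response for retrieving the default drive looks roughly like this (the identifier and quota values are illustrative):

```json
{
  "id": "0123456789abc",
  "driveType": "personal",
  "owner": {
    "user": {
      "displayName": "Example User",
      "id": "0123456789abc"
    }
  },
  "quota": {
    "total": 16106127360,
    "used": 1024000000,
    "remaining": 15082127360,
    "state": "normal"
  }
}
```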
    Feature Overview
    A few features of the OneDrive REST API include the ability to:
    1. Download and upload files
    2. Get user data
    3. Obtain user consent
    4. View file and folder properties
    5. Move, copy, create, or delete files and folders
    6. Create, update, read, or delete albums
    7. Read, create, or delete tags

    With all these features embedded within the powerful OneDrive REST API, its uses are limited only by one's imagination. Just make sure to read the OneDrive API Terms and Conditions as well as the Branding Guidelines if you choose to incorporate OneDrive into your apps. I wouldn't consider this API to be small, so a good starting point would be the page on Developing with the OneDrive API.

    There are a few things to consider before deciding that the OneDrive API is the right candidate for you as a developer, just like any other candidate considered in your application design. Microsoft notes that although OneDrive stores OneNote notebooks, it is not recommended to use the OneDrive API to work with them; instead, they suggest you use the OneNote API, but that is out of this article's scope, so I won't go into detail on that one.
    One of the best features about the OneDrive REST API, for me, is the ability to resume downloads by using the HTTP Range header to specify the range of the bytes not yet downloaded. I don’t know how many times I’ve used other services and had a download interrupted by an unreliable network or some other factor.

    Nonetheless, good luck! I hope that there are people out there who will start to consider OneDrive as their preferred cloud-based file storage service after witnessing the power of these APIs firsthand!

    About the author
    Troy Garner is a technical evangelist, as well as an aspiring programmer with a keen interest in developer security, reverse engineering, crash dump analysis, and other similar topics. Since first learning about computers through an old Windows 95 machine, he hopes to contribute to the technical world in a positive way while learning new things and sharing his knowledge with others, and has been re-awarded as a Microsoft MVP for 2015 as a .NET Programmer for the 4th consecutive year from the first award in 2012 for Visual Basic.


    0 0

    Editor’s note: The following post was written by Windows Platform Development MVP Matt Lacey as part of our Technical Tuesday series.

    There are lots of different types of testing. In this article I’m going to refer to what is often called “beta testing”. That is, testing performed by a small number of people outside of the core development team of an app. These “beta testers”, from now on just referred to as testers, may be part of the organization that is creating the app or members of the public. The purpose of beta testing is to gain further confirmation that the app behaves as intended in real world environments and to gain feedback from users before the app is released to a wider audience.

    Beta testing should not be seen as an alternative to testing done internally and there is rarely a good reason to give testers a version of an app that has lots of bugs in it. Beta testing instead extends the breadth of environments that the app can be tested in. This might include different hardware devices, more configurations or localizations that cannot be tested internally due to limits on skills or resources.

    When it comes to beta testing, unfortunately, many developers release the app to their testers and then just sit back and hope for feedback. In many scenarios just hoping for feedback isn't sufficient, and so to ensure you're getting both the most feedback possible and the test coverage you would like, it is important to be proactive about capturing feedback and monitoring what the testers are doing.

    Application Insights provides a way of satisfying both of these goals. It allows us to receive details of any problems that testers may encounter and also to see what they are actually testing.

    One of the most important reasons for beta testing is to ensure that the app works as intended. One major way an app may not work as intended is if there are unhandled exceptions that cause the app to crash.

    By default, Application Insights will record all unhandled exceptions for you. Even if you have your own way of recording exceptions and allowing testers to email those exception details to you it can be valuable to also record them with Application Insights. Doing so will mean you can be sure of getting all the details, as long as they have a connection to the internet, and will also put all the details in one place making them easier to analyze or spot trends than if you have to extract the details from the bodies of emails.
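Handled exceptions can be reported to Application Insights explicitly as well. A minimal sketch (the operation being wrapped is hypothetical):

```csharp
try
{
    await SyncDataAsync(); // hypothetical operation that may fail
}
catch (Exception ex)
{
    // Report the exception to Application Insights in addition to any local handling
    var tc = new TelemetryClient();
    tc.TrackException(ex);
    throw;
}
```

This keeps handled and unhandled failures in the same dashboard, making trends easier to spot.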

    Even with these benefits it is still valuable to allow testers to submit exception reports via email. It is not always possible to reproduce an error from an exception report or stack trace and so being able to ask the person what they were doing when an error occurred can be invaluable. Additionally, it is also often beneficial to the relationship with your testers to be able to tell them when an exception they encountered has been fixed. Being able to reply to their email is a great way to do this as it is personal and shows that you value the time and effort spent to report the issue. 

    The above image shows a number of crashes. There is definitely an issue with range checking that should be investigated and fixed, but in reality there are more things that may happen when using an app that we’d like to prevent, and which aren’t crashing exceptions.
    In some scenarios we may trap situations which would otherwise cause an error in code and report them directly to the user. While not directly an error, this may be something we want to know about as it may indicate an issue with the usability of an app.
    The simplest way to track scenarios like this is with custom events.
    var tc = new TelemetryClient();
    tc.TrackEvent("RegistrationValidationFailed:TermsAndConditionsNotChecked"); // event name is illustrative

    The above code snippet shows how a user failing to check the “I accept the Terms and Conditions of usage” box when trying to register may be recorded. In this scenario, a client validation failure means the user was stopped from doing something they tried to do. The image below shows the client validation failures that were recorded in the registration page of one app during testing.

    This view uses the event filter, as it lets us clearly see the number of times each event was recorded.

    The high number of validation failures where the password was not entered the same way twice may indicate that the registration page needs some improvements. It is, however, also important to consider these figures in context. What these figures don't show is how many people were able to successfully register. Without this knowledge the above figures could be either small, and so not worth considering, or large, and so indicative of a major potential problem.

    There are two ways this could be addressed. First, an event could be recorded when validation is successfully passed. This would provide a data point in the Application Insights dashboard that is easy to compare. Alternatively, we could compare with figures gathered elsewhere. In this instance we could look at how many people successfully registered during the testing period by counting the number of new accounts created on the backend. With knowledge of this figure we could then make informed decisions about the consequences of the number of client validation failures.

    There is no hard and fast rule about whether it is best to record successful actions as Application Insights events or not. It will depend on individual circumstances and whether the data is also available separately.

    In addition to knowing when things go wrong, it's important to know what is actually being tested. It is not uncommon for people to volunteer to test an application but never actually get round to it, or never look beyond the first few pages. Having people volunteer therefore provides no guarantee of the volume or extensiveness of the testing; saying they will test the app is not the same as actually doing so. We want to be confident that our app has been exposed to and used in a wide variety of environments before we make it public, and to be confident of that it is necessary to track tester activity. The most basic statistic for tracking what testers are doing is to record page views.

    Page views are recorded by default when Application Insights are added to a project and provide a simple metric that can indicate how much testing each part of an app has received based on the number of views.

    While a simple metric, it can easily be skewed because, depending on the navigation structure of an app, the default behavior may not be what you want. The default behavior is to record a “view” every time a person navigates to a page, including navigating back to a page viewed previously. Consider a simple, two-page, master-detail app where the user launches the app on the main page, clicks something to navigate to the detail page, and then presses back to return to the main page. This would result in two page views for the main page and one for the detail page. Depending on your requirements this may not be what you're after. Fortunately it's easy to change.
    The first step is to disable the default behavior.

    In a Universal Windows Platform (UWP) application you disable the automatic recording of page views by removing (commenting out or deleting) the PageViewTelemetryModule from the ApplicationInsights.config file.

    In a Windows 8.1 or Windows Phone 8.1 project you disable the automatic recording of page views by not specifying PageView as a collector when initializing the telemetry collection:
    WindowsAppInitializer.InitializeAsync(
        "<instrumentation key>",
        WindowsCollectors.Metadata | WindowsCollectors.Session | WindowsCollectors.UnhandledException);

    If we want to record page views ourselves it is a very simple task to do so. The code below shows the simplest way of doing so and also prevents views being counted multiple times as the user navigates away from and then back to the same page.
    protected override void OnNavigatedTo(NavigationEventArgs e)
    {
        if (e.NavigationMode != NavigationMode.Back)
        {
            var tc = new TelemetryClient();
            tc.TrackPageView(this.GetType().Name);
        }
    }

    With page view tracking in place we can see which pages in the app the testers are viewing. 

    The aim here is to check that all parts of the app are being adequately used and therefore tested. From the above chart we can see that very few of our testers have visited the Login page. There may be many reasons for this but it is probably a page that we would want to see tested further before releasing the app to the public.

    For most pages, in most apps, we expect the person using the app to do more than just navigate to the page. As such, we want to make sure our testers are actually using the app in the ways intended and not just navigating around the different pages. To know what the testers are actually doing in the app it is useful to track specific events. The exact events that will be appropriate to track will vary depending on the specific app but may include activities such as sharing; exporting or importing data; altering settings; or searching.

    With some of these events it can also be useful to record additional related properties. Application Insights includes a way to easily do this by providing them in a dictionary, as below.
    private void Search(string searchTerm)
    {
        if (string.IsNullOrWhiteSpace(searchTerm)) return;

        var tc = new TelemetryClient();
        var properties = new Dictionary<string, string>
                                 { { "term", searchTerm } };
        tc.TrackEvent("Search", properties);
    }


    An app may also have pages that display different things at different times. This may be a generic page that behaves differently or with different layouts depending on the data being displayed. If we have a single page that could display a user, a product, or a picture it may not be very useful to know the total number of times that page was visited. It is normally much more useful to know how many times it is used to display each of the different items.

    Additionally, if an app is localized to support multiple languages it can be important to check that all parts of the app are being tested in different languages. Because Application Insights doesn't make the device language easy to query and filter, I've found it useful to combine it with the page name and any other key information affecting how the page is displayed into a custom name which, when tracked, is much easier to query in the portal.
    protected override void OnNavigatedTo(NavigationEventArgs e)
    {
        var type = ((Item)e.Parameter).TypeName;

        var pageName = string.Format(
            "Detail-{0}-{1}", // combine page name, item type and device language
            type,
            Windows.System.UserProfile.GlobalizationPreferences.Languages[0]);

        var tc = new TelemetryClient();
        tc.TrackPageView(pageName);
    }

    The telemetry gathered in this way makes it easy to see how the app is being used and the languages that the testers are using. The data below is the list of page views filtered to the Detail page. From here it is clear that while the page has been used to view each of the item types many times it will probably be beneficial to have some more testing of the page by testers who have their device in German, assuming the app is localized to German. 

    In addition to testing with devices in each language an app is localized for, it is also beneficial to ensure that our testers are representative of actual users. Consider a group of testers who are all new to an app that provides access to data collected elsewhere. If the testers are new to the service they may not have collected much data previously, and therefore have little data to load and display, and so find the app very fast. However, a long-time user of the service may have lots of data and find the app very slow to load.

    Or consider a social networking app that allows the user to retrieve a list of all their friends. With only a small number of friends the app may be very responsive but may start running slower as they make more friends in the app.

    For many apps, different people will have very different experiences based on their settings and usage. It is important to monitor this to ensure that your testers cover a wide variety of users (or potential users if the app is brand new) and to ensure that all of them have a good experience with the app. As a starting point for doing this you can collect some simple metrics like those below.
    var sw = new Stopwatch();
    sw.Start();
    List<string> friendNames = await GetFriendNamesAsync();
    sw.Stop();

    var tc = new TelemetryClient();
    var measurements = new Dictionary<string, double>
    {
        { "friendCount", friendNames.Count },
        { "loadTime", sw.ElapsedMilliseconds }
    };
    tc.TrackEvent("LoadedFriendNames", metrics: measurements);

    Below we can see an example of such data in the portal. By filtering to the specific event of interest, it is easy to quickly go through the details and ensure a representative spread of values in the data and that there are no significantly large load times. 

    It is important to note that if you start to record more information about your testers as part of your event tracking, you should avoid any privacy issues around the use and storage of personal data by not capturing anything personally identifiable.

    In this article I’ve shown some of the reasons it’s important to monitor what your beta testers are doing with the app and how they’re using it. Application Insights provides a number of ways we can record this information to help us ensure that the people testing our apps are actually using/testing all of it and doing so in ways that reflect what actual users will do. With this information we can be confident of releasing a higher quality app that more users will love.

    About the author
    Matt is a mobile development consultant and also runs the Windows Apps London user group. Follow him @mrlacey.



    This month IT professionals and developers came together for one of Taiwan’s most significant technology exchange events: TechDays Taiwan.

    Hosted between September 15 – 17, TechDays adopted the open and cross-platform spirit of Microsoft and followed "The Power of IT" as the core theme. Across the three days and 105 breakout sessions, topics ranged from Windows 10, Universal App Development, Office 2016, the organization information analysis tool Delve, presentation tool Sway, Power BI and much more.

    Well over a third of the sessions were delivered by 22 MVPs. Highly popular, these sessions spanned an array of technologies.


    Participants had the opportunity to interact with MVPs in formal and informal settings. The region’s top user groups—twMVC, SQL PASS and KNET—also contributed to the event.

    You can catch all the excitement around TechDays Taiwan as well as learn from various industry experts who have shared their knowledge at TechDays 2015 via Channel 9:



    Editor’s note: The following post was written by PowerPoint MVP Glenna Shaw as part of our Technical Tuesday series.
    Tables are a powerful tool for visually organizing data, but a table takes time for an audience to absorb. When you’re giving a presentation you want your audience to listen to what you have to say, not stare at a table trying to decipher the patterns in the data.

    So how can you make a table that your audience can absorb at a glance? The answer is turn it into a heat map. A heat map uses color to indicate values or ranges in value, making it easier for the viewer to see the patterns in the data.

    The difference is easily seen in this example:
    Data Table 
    vs. Heat Map 
    Without the heat map it’s nearly impossible to see that the Post-Bachelor’s degree is as effective for earning potential as the Post-Doctorate.

    Match Formatting in PowerPoint and Excel
    Unless you want to color the PowerPoint table manually, your best option for creating a heat map is to do it in Excel. Since you’ll be copying the heat map into PowerPoint, the first thing you want to do is make sure you’re using the same theme for both PowerPoint and Excel. If you don’t know how, follow the instructions for applying a theme in PowerPoint and in Excel. This ensures you’re using the same colors, fonts and effects for your heat map and your presentation.

    Using Excel to Create the Heat Map from a Table
    Excel’s conditional formatting feature makes creating a heat map very easy. To create a heat map like the one in the example, all you have to do is highlight all the values in your table, click on Conditional Formatting in the Home tab, click Color Scales and choose the one you want. I chose the Green – White Color Scale and then modified it by clicking on Conditional Formatting, Manage Rules, and Edit Rule, and changed the white to light blue and the green to dark blue.

    To hide the values in the table, simply highlight all the values in your table and click Format in the Home tab, click Format Cells and apply a custom format of three semi-colons (;;;) as the type. 

    Creating a Custom Heat Map
    If the standard conditional formatting options don’t meet your needs, it’s pretty easy to create your own. In the example below, I applied conditional formatting to the same table to show the values that fall within the ranges for income classes in the United States. 

    In order to get this type of formatting, I created a conditional formatting rule for each level of income class; from lower class to upper class. To do this, I selected all the values in my table and clicked on Conditional Formatting, Highlight Cell Rules, Between…, set the values for lower class annual salary range (between 1 and 16000) and formatted the cell with a very light blue fill. I then repeated these steps for each of the remaining income class ranges choosing a slightly darker blue fill as I progressed through the income class levels. This gave me 5 conditional formatting rules applied to the same cells, but, as you can see in the example, the values in my table meet the criteria of only three of the rules. Without conditional formatting, I would’ve had to figure this out manually. 


    The Legend Controversy
    No matter what type of heat map you choose to make, you’re going to have to offer some type of explanation to help your audience recognize the pattern. Current trends prefer direct labeling of values, as I’ve shown in the heat map for the income classes by using text boxes. This works great when you only have a few colors to label. For a heat map that shows a large variety of colors, you have a number of options:
    1. You can simply tell the audience that the darker the color the higher the value and vice versa.
    2. You can add a shape formatted with a gradient using your two colors and label it.
    3. You can format the cells for the highest and lowest values to show the values in the cell.
    4. If you want to provide more detailed information you can include a legend like I’ve shown in the first example. To create this legend, I first used a formula to create equal increments of the values between the highest and lowest value and then applied the same conditional formatting to it as the table. It’s important to note that this type of legend will be more of a distraction for your audience, so I might save it for a handout instead of a presentation.
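    For the equal increments mentioned in option 4, a formula along these lines can generate the legend values. This is only a sketch with placeholder cell references: assume the lowest value is in B1, the highest in B2, and the legend steps are numbered 1 to 10 in column A, with the formula filled down beside them.

```
=$B$1 + (A1 - 1) * ($B$2 - $B$1) / 9
```

    Step 1 then yields the lowest value, step 10 the highest, and the steps between are evenly spaced, so the same conditional formatting applied to the table produces a smooth color ramp.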

    Adding Your Heat Maps to a Presentation

    The best option for getting your heat maps into a PowerPoint presentation is to copy and paste them while retaining the original formatting. Make sure you’ve done all your formatting in Excel, including any labels, before you copy the heat map. Then select Copy from the Home tab. Your selected area will now be bounded by a dashed line.

    Go to your presentation and click on the border of your placeholder to select the entire placeholder (this will help with sizing your heat map to the slide). Make sure you’ve selected the entire placeholder. You’ll know this because the handles will appear on the corners of the placeholder instead of your cursor inside it. In PowerPoint, click the Paste drop down on the Home tab and click the second icon to Keep Source Formatting.

    This will paste your heat map into your slide. As you can see by the example below, you’ll probably need to make some adjustments. Because the entire thing is a table, you can click and drag the column width and row height as desired until you get the look you want. It’s important that the heat map cells are exactly the same size as each other, so once you’ve formatted all the text columns and rows surrounding your heat map, select all the colored cells and click on Table Tools, Layout, Distribute Rows, and Distribute Columns. In my example, I also have to move and adjust my text labels. The final result is an attractive heat map that is much more effective in a presentation than a table.
    This method copies the heat map into a PowerPoint table. If your table data frequently changes you may want to link your heat map to your presentation instead. Linking tables from Excel to PowerPoint is tricky at best. For my part, I prefer just to recopy/paste into PowerPoint since I can do it faster. But if you do want to link your heat map see my tutorial here about Dynamic Tables in PowerPoint for how to do it:

    Finally, I’d like to give a nod to Excel MVP Jon Peltier for his excellent tutorials on Excel. Most of what I know about Excel came from his site. His site is:

    The files used to create this tutorial are available for download at: (you must be able to access the site to download the files). My other tutorials and sample files are also available there.

    About the author

    Glenna is the owner of the PPT Magic and web sites and the Visualology blog. She is a Project Management Professional (PMP) and holds certificates in Knowledge Management, Accessible Information Technology, Graphic Design and Professional Technical Writing. Follow her on Twitter @GlennaShaw.


    Today, 987 exemplary community leaders around the world found out they have received the MVP Award!

    These individuals were chosen because they have demonstrated their deep commitment to helping others make the most of their technology, voluntarily sharing their passion and real-world knowledge of Microsoft products with the community. While there are more than 100 million social and technical community members, only a small portion are selected to be recognized as MVPs. Each year, around 4,000 MVPs are honored. They are nominated by Microsoft, other community individuals, or in some cases themselves.

    Candidates are rigorously evaluated for their technical expertise, community leadership, and voluntary community contributions for the previous year. They reflect Microsoft’s global community, today coming from more than 90 countries and speaking more than 40 different languages. Together, they answer more than 10 million questions a year!

    MVPs are recognized each quarter for this annual award, which continues to grow and evolve to reflect the development of Microsoft technologies. Congratulations to the new MVPs, and welcome back to renewed MVPs. We are very excited to recognize your amazing accomplishments!

    And if you know (or are!) an awesome community leader, go here to make a nomination.


  • 10/02/15--10:02: Friday Five: In the Cloud

    SQL Server MVP Melissa Coates @SQLChick : Overview Power BI V2 Features End-to-End

    SharePoint Server MVP Nicki Borell @NickiBorell: The Anatomy of Office 365 Groups

    Enterprise Security MVP Debra Shinder: Active Directory in the Cloud

    SQL Server MVP Nigel Sammy @NigelSammy: Finding Your Active O365 Usage

    SharePoint Server MVP Matthew McDermott @MatthewMcD: Office 365 User Management with PowerShell


    The second annual MVPDays Community Roadshow wrapped up recently in Western Canada. The conference focused on IT Professionals sharing their knowledge with other IT Professionals, allowing local communities to learn more and advance their skills based on real world experiences.

    MVPDays was the brainchild of Canadian Hyper-V MVP Dave Kawula, who was looking for a way to give back to the same community that helped teach and mentor him for years, leading to his success in the IT industry. Dave enlisted the assistance of his fellow MVPs to help deliver this highly successful community conference.

    MVPDays was hosted in Vancouver, Calgary and Edmonton on September 21st, 23rd and 25th respectively. Across the three days, each event was focused on Cloud, IT Pro and Blackbelt topics. There were more than 50 sessions that ranged from Windows 10, PowerShell, Microsoft Azure, Virtualization, DevOps, Hyper-V and much more.

    The successful roadshow featured a total of 23 MVPs—with 18 MVPs presenting and 5 MVPs volunteering—and reached more than 400 attendees!


    Dave is planning to host an east coast version of the roadshow in early 2016. He is targeting Toronto, Ottawa and Montreal as the cities for the tour. Stay tuned for more details via Channel 9: and at


    Editor’s note: The following post was written by Skype for Business MVP Dino Caputo as part of our Technical Tuesday series.

    Skype for Business Server 2015 represents the first time Microsoft has allowed an “in-place” upgrade of previous versions of its Unified Communications platform.  Since the Live Communications Server 2005 days, moving to the next release required building out a new set of infrastructure, planning, testing and finally moving users over to it.  This latest release introduces the option to do what is referred to as an in-place upgrade, allowing organizations to upgrade existing Lync Server 2013 servers to Skype for Business Server 2015. 
    This can be especially relevant and appropriate if:
    • You have recently migrated to Lync Server 2013 and have new hardware that is capable of running SfB (the hardware specs have not changed from Lync Server 2013). The in-place upgrade seeks to preserve existing hardware investments, reducing the overall cost to deploy Skype for Business Server 2015.
    • You have a smaller deployment perhaps running a single Lync Server 2013 Standard Edition
    • You are able to tolerate a small amount of downtime (1-2 hours).  Depending on your configuration, however, you may be able to avoid this. 
    A limitation:
    • This isn’t possible for organizations still running Lync Server 2010.  Even if you have both Lync Server 2010 and 2013 in your environment, you must fully decommission Lync 2010 before you can introduce Skype for Business Server.

    In Place Upgrade Options
    The Offline method (Some Downtime required)
    You must use the offline method if you only have a single Lync 2013 Standard or Enterprise Edition Pool.  The offline method as the name implies requires you to take the entire pool down for maintenance while the upgrade is occurring.  During this time all functionality is unavailable until the upgrade process has completed.  I would budget 60-90 minutes per front end server so plan accordingly.

    The Move User method (No Downtime)
    If you have deployed two pools in your environment, this opens up another option: you may move all the users over to the other pool prior to commencing the upgrade.  Before moving the users over, ensure the second pool has enough capacity to handle the load of the users currently homed on it as well as the ones you are moving over.  If these pools were part of a “Paired” configuration, you do not need to break the pair prior to the upgrade process.   The following diagram illustrates this process in detail.  
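    Scripted with the standard Lync Server cmdlets, the move might look like the sketch below. The pool FQDNs are placeholders, and you should verify the cmdlet parameters against your environment before running anything:

```powershell
# Move all users homed on the pool being upgraded over to the second pool.
Get-CsUser -Filter {RegistrarPool -eq "lyncpool01.contoso.com"} |
    Move-CsUser -Target "lyncpool02.contoso.com" -Confirm:$false

# Move the conference directories as well, so PSTN dial-in conference IDs
# keep resolving while the primary pool is upgraded. (This sketch moves
# every directory; filter to those owned by the pool being upgraded.)
Get-CsConferenceDirectory |
    ForEach-Object { Move-CsConferenceDirectory -Identity $_.Identity -TargetPool "lyncpool02.contoso.com" }
```

    Running the user move during a maintenance window is still advisable, since clients will re-register against the target pool as they are moved.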
    Expert Note: As part of the Move User method you will also need to migrate the global conference directories associated with the primary pool. PSTN dial-in conferencing will still resolve a ConferenceID to the pool being upgraded, instead of the paired pool, so you need to move the Conference Directories if you want PSTN conferences scheduled in the pool to remain accessible during the upgrade.

    Upgrading other Lync Server 2013 Roles
    The upgrade to Skype for Business applies to other Lync Server 2013 roles as well.  Upgrade your topology from the inside out: upgrade all your pools first, then the edge servers, and lastly the pool that owns the Lync Central Management Store (CMS).
    Upgrading other roles like the Edge Server role takes less time.  Budget up to 60 minutes per Edge Server. 
    Communicate with Users!
    Despite minimizing or even avoiding downtime depending on the option you choose, you should always plan and properly communicate with users letting them know what will be happening.  In the best case scenario, they shouldn’t notice anything occurring, however if you don’t plan or communicate properly and you encounter issues there may be user disruption which diminishes their experience.
    Take advantage of new features
    Once you have fully upgraded your environment you can now take advantage of the some of the new features in Skype for Business 2015, namely:

    Enhanced commercial Skype Federation

    View commercial Skype contacts and add them to your Contact List so you can see presence, send IMs, and make calls, including video!

    Call Monitor

    I especially like the addition of the Call Monitor, a signature feature from the consumer version of Skype, which appears automatically when you click on other windows, making it easier to collaborate by focusing on working content while in a call. Mid-call controls are also more easily accessible, and have been redesigned to include the PSTN dial pad by default now (hooray!)

    Rate the Call Quality

    After your call is done you may be asked to rate it and provide feedback to your administrators.  This may be especially valuable during a new deployment.  You can control the cadence with which this is presented to users.

    Call Via Work

    Another important addition is the “Call via Work” feature, which allows interoperability with existing PBX phones. While this requires a Direct SIP integration between Skype for Business Server and the PBX, it allows SfB users to establish a call between their PBX desk phone and the called PSTN user. This also has the important benefit of updating the user’s presence status, something lacking in the Lync Direct SIP interoperability scenario. Basic mid-call control capabilities are also preserved on the PBX phone.

    New reporting and Admin features
    Call quality dashboard – an OLAP cube that allows more customized reports of historical Call Quality and other Metrics
    Manageability improvements (Pool shutdown/start up cmdlets - better patching process: Invoke-CsComputerFailOver, Invoke-CsComputerFailback, Start-CsPool)

    Make a Plan!
    While the addition of an in-place upgrade option is certainly an appealing and welcome one, it may not be the appropriate course of action for all deployments, so please take the time to properly plan which path you need to take to get to Skype for Business Server 2015.

    About the author
    As a Senior Architect and Skype for Business MVP, Dino gets to do what he does best – help clients achieve their business goals by providing advanced technical solutions. It’s his ability to develop technical strategies, understand client needs and deliver performance outcomes that have earned him a reputation as a seasoned IT professional and true team player.

    Dino holds many certifications including an MCSE, and is a Microsoft Certified Technology Specialist in Live Communications Server (LCS) 2005 and Office Communications Server (OCS) 2007, as well as an MCTS in Lync Server 2010 and 2013.   He works primarily with Skype for Business and Exchange.

    Dino has worked with many large enterprise customers providing UC design, strategy and technical “coaching”, helping them derive the most from their investment in Microsoft Unified Communications. Follow him on Twitter @dinocaputo.




    Exciting news for the MVP community today! Steve Guggenheimer (Guggs), corporate vice president of Microsoft's Developer eXperience & Evangelism (DX) group, talks about changes to the award and what MVPs mean to Microsoft in his blog. Check it out here.

  • 10/09/15--10:34: Check Out the Friday Fives!

    Editor’s note: The following post was written by Microsoft Azure MVP Damir Dobric as part of our Technical Tuesday series.

    Service Fabric offers several concepts for scalability and reliability. For example, you can set up multiple instances of a service, or you can set up multiple partitions of a service, and you can even combine both. This is definitely a powerful concept, but in some cases it might be overused. In this article I will describe some partitioning concepts and provide a few examples. All parameters used for configuring the deployment in these examples can be found in the ApplicationManifest.xml file of your Service Fabric solution, as shown in the picture below.



    Partitioning of Stateless Services
    One of the service types used in Service Fabric is the so-called stateless service. Such services do not persist any state locally by default. For example, this might be a service which returns the current time. If you want to run multiple instances of this type of service, you have a few choices.
    When a new service is added to the project, it will be configured to use a so-called SingletonPartition. That basically means a single partition is used, which translates to “partitioning is not used”. This is common for services which do not rely on the Service Fabric partitioning concept. This is the case when you want to run your web server in a Service Fabric cluster, for example. It is a valid scenario for web servers or web services which need to access low-level machine features. In contrast to Azure Web Sites, PaaS vNext services (services hosted in Service Fabric) can run on-premises and in the cloud without the need to change any code. With the current version of worker roles and web roles, we are not able to fully reuse code between the cloud and on-premises environments.

    The following code snippet shows the configuration from the ApplicationManifest.xml file, which is automatically created, when one service is added to the solution.

      <Service Name="Stateless1">
          <StatelessService ServiceTypeName="Stateless1Type" InstanceCount="2">
              <SingletonPartition />
          </StatelessService>
      </Service>

    As already mentioned, SingletonPartition defines that no partitioning should be used.
    Additionally, it is important to set the number of nodes to which the partition will be deployed. Commonly this is exactly the place where things get complicated. As an example, let us look at a SingletonPartition (partition count 1) with InstanceCount = 2. After the deployment of the service you can start Service Fabric Explorer and navigate to the deployment. You will likely get the following result. 

    A partition is a logical artefact which gets its physical representation by associating the deployment with a node. In this context you should understand the deployment as a unit of code and state. In the case of a stateless service, state does not exist, which means that the deployment contains the code only.  A deployment is the physically running instance of the service. If the service is stateful, the instance will have an associated instance of state.
    When the service from the example above is started, the entry point RunAsync() will be invoked twice: once on Node.3 and once on Node.5. For this scenario Service Fabric will start two “stateless1.exe” processes (one on each node). When you work with common services which do not use Service Fabric partitioning features, avoid using partitions and use the SingletonPartition approach. In this case, InstanceCount helps you define the number of physical nodes where your application (i.e. your web service) should be deployed.
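    As an aside, the manifest also accepts InstanceCount="-1", which tells Service Fabric to place one instance on every node in the cluster. A sketch, with placeholder service names:

```xml
<Service Name="WebService1">
    <StatelessService ServiceTypeName="WebService1Type" InstanceCount="-1">
        <SingletonPartition />
    </StatelessService>
</Service>
```

    This is handy for gateway-style services that should run everywhere, since the instance count then follows the cluster size automatically.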

    If you really want to see how partitioning works under the hood, you can play with the next example, which shows the deployment of the same stateless service with 4 partitions on 2 nodes each (InstanceCount=2). This means that every partition will be spanned across two nodes. For stateless services this seems weird, and it is weird. The goal of the partitioning concept is to share data across nodes, and in the case of a stateless service there is no data. This is why partitioning makes more sense when working with stateful services. But for now, we stay in the context of stateless services to get a better understanding of the platform.

    When one partition is spanned across two nodes, it does not mean that every partition will be spanned across the exact same two nodes. This can be illustrated with the following example, in which I used the NamedPartition scheme.
    <Service Name="Stateless1">
          <StatelessService ServiceTypeName="Stateless1Type" InstanceCount="2">
              <NamedPartition>
                  <Partition Name="one" />
                  <Partition Name="two" />
                  <Partition Name="three" />
                  <Partition Name="four" />
              </NamedPartition>
          </StatelessService>
      </Service>
    After deploying these four partitions with two instances each, you might see this result in Service Fabric Explorer:

    Because my cluster consists of 5 machines (discussing this is out of scope for this article), the Service Fabric framework decided to distribute my 4 partitions across 3 nodes: 1, 2 and 4. If you opened Task Manager now, you would notice 3 processes. I also tried different deployments with the same configuration; sometimes I got a deployment of 4 partitions across 4 nodes (not just the 3 shown in Task Manager below). The algorithm which performs partitioning is currently undocumented and is being improved over time. 

    When you start your service, the entry point RunAsync() will now be invoked 8 times: once for every partition on each of its nodes. Because each of the 4 partitions is deployed on two instances (nodes), we have 8 instances of the service.

    But do not be surprised if you get slightly different results in the future regarding the distribution of the service across nodes. A few facts will not change, such as the number of instances of the service: if you have 8 instances, your service entry point will be invoked 8 times, independent of the number of processes running.

    Now, let’s change the configuration slightly to use the uniform partitioning scheme instead of the named partitioning from the previous example. All other parameters remain unchanged, which means 4 partitions with two instances each.
     <Service Name="Stateless1">
          <StatelessService ServiceTypeName="Stateless1Type" InstanceCount="2">
              <UniformInt64Partition PartitionCount="4" LowKey="1" HighKey="10" />
          </StatelessService>
      </Service>
    As a result, the 4 partitions will be spanned across all 5 nodes in the cluster:


    In Service Fabric Explorer, this is reflected in the next picture:


    Now we have 8 instances, but this time shared across all 5 nodes in the cluster. As you can see, partitioning is a weird but powerful concept. Be careful not to use it if you don’t need it. If you host commonly used types of services like web servers, most likely you will not require partitioning. In such cases you can achieve an easier deployment by using SingletonPartition in combination with the InstanceCount property, enabling you to define exactly how many physical nodes your service runs on.
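    To make the uniform scheme more concrete, the sketch below (my own illustration, not a Service Fabric API) splits a LowKey..HighKey range into PartitionCount contiguous sub-ranges, assuming any remainder is absorbed by the first partitions; the exact split Service Fabric performs is an implementation detail, but keys are always mapped to contiguous near-equal ranges like these.

```csharp
using System;

static class UniformPartitions
{
    // Splits [lowKey, highKey] into 'count' contiguous ranges, as evenly as
    // possible. Assumption for this sketch: earlier partitions absorb the
    // remainder when the key space does not divide evenly.
    public static (long Low, long High)[] Split(long lowKey, long highKey, int count)
    {
        long total = highKey - lowKey + 1;
        long baseSize = total / count;
        long remainder = total % count;

        var ranges = new (long Low, long High)[count];
        long next = lowKey;
        for (int i = 0; i < count; i++)
        {
            long size = baseSize + (i < remainder ? 1 : 0);
            ranges[i] = (next, next + size - 1);
            next += size;
        }
        return ranges;
    }
}
```

    Applied to the manifest above (LowKey=1, HighKey=10, PartitionCount=4), this yields the ranges 1-3, 4-6, 7-8 and 9-10 under the stated remainder assumption, which shows why a key such as 5 always lands in exactly one partition.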

    Partitioning of Stateful Services
    The true focus of Service Fabric is dealing with stateful services, and here Service Fabric helps by solving many complex problems. One of its major approaches is a conceptual move of data from the database to the code. By implementing stateful services instead of stateless services, the code keeps its data locally without the need to talk to the database. Moving data from a database to code and vice versa takes time, which depends on the amount of data being moved. This relation between time and the amount of data in cloud computing is sometimes called “data gravity”. The concept of stateful services in combination with replicas helps reduce data gravity to almost zero.

     In this context we have to define the term ‘Replica’. A replica is by definition a copy of code and data (state) of a service.

    In contrast to stateless services, when working with stateful services we deal with replicas rather than instance counts. In the following example, we will use a single partition (PartitionCount = 1) and 3 replicas. That means our code will be deployed to 3 instances (nodes) and all data will be copied between these 3 instances; every instance has the full amount of data.

    <Service Name="Stateful1">
          <StatefulService ServiceTypeName="Stateful1Type" TargetReplicaSetSize="3" MinReplicaSetSize="2">
              <UniformInt64Partition PartitionCount="1"
                                     LowKey="-9223372036854775808" HighKey="9223372036854775807" />
          </StatefulService>
      </Service>
    Here is the result of deployment in Service Fabric Explorer. 

    As you see, the service was deployed 3 times, which corresponds to the target replica set size. This means we have one active instance of the service running and two copies of it which are not active. They are installed and will be activated in case of a failure. This is why such services are also called “Reliable Services”. However, in “real life” we should start partitioning the data. To demonstrate this, let’s use the same configuration but increase the PartitionCount to 4.
    <Service Name="Stateful1">
          <StatefulService ServiceTypeName="Stateful1Type" TargetReplicaSetSize="3" MinReplicaSetSize="2">
              <UniformInt64Partition PartitionCount="4"
                                     LowKey="-9223372036854775808" HighKey="9223372036854775807" />
          </StatefulService>
      </Service>
    This configuration instructs Service Fabric to deploy and start 12 instances of the service: remember, we deploy 4 partitions with 3 replicas each. However, only the 4 instances called primary replicas (corresponding to the number of partitions) will be visible to you in the running system. All other deployed instances (passive instances) will not be accessible. If you set a breakpoint in the RunAsync() method of the service and press ‘F5’, your service will stop 4 times.
    At the beginning of my RunAsync() method I implemented the following code:
      protected override async Task RunAsync(CancellationToken cancellationToken)
      {
          Debug.WriteLine(">>> P:{0}, R:{1}",
              this.Context.PartitionId, this.Context.ReplicaId);

          // TODO: Replace the following with your own logic.
          var myDictionary = await
              this.StateManager.GetOrAddAsync<IReliableDictionary<string, long>>("myDictionary");
          . . .
      }
    As a result of the execution of this code you will get the following:

    This result means that our service has been started (visibly to the system) 4 times, each instance running in the primary replica of its partition; the output above shows the primary replica with id 130884527004591572. This is illustrated in the following picture.

    If you compare the picture above with the output, you will notice that the code is running in the primary replica only. Last but not least, don’t be surprised if you see 5 instances of the service in Task Manager. 

    To recap this weird story, we have to distinguish between stateless and stateful services. The partitioning concept is designed with a focus on partitioning the service state across physical nodes. If your service does not have state (stateless), you should probably use SingletonPartition, which basically means “don’t use partitioning”. In the first part of this article I have shown what happens if you use partitions in conjunction with a stateless service.

    When working with stateful services we have to deal with partitions and replicas. Every partition defines a replica set, which is deployed to physical nodes. A replica in the context of a partition is a physical deployment of the code and a part of the data of the service state. Every replica set has a number of copies (the target replica set size) of the code and the same part of the data. One of the replicas is the primary replica and the others are secondary replicas. Only the primary replica hosts code which is actually active; code deployed on the secondary replicas is waiting for a failover scenario. Every partition has the same code, but a different subset of the data (state). If we have 4 partitions, there are 4 instances of the service with the same code running on primary replicas, and every partition holds a different portion of the data (state). Instances on the secondary replicas are not active; they are deployed only for the failover case.

    About the author

    Damir is co-founder, managing director and lead architect of DAENET Corporation, which is a long term Microsoft Gold Certified Partner and leading technology integrator specialized in Microsoft technologies with strong focus on Windows, .NET and Web. He was the DAENET winner of worldwide Technology Innovation Award for 2005, German innovation award for SOA in 2008 and Microsoft Partner of the year 2010.

