Channel Description:

Independent Experts. Real World Answers.


    Editor's note:  The following post was written by MVP Award Program Manager Kerry Herger

    It’s November and we’re mere days away from the 2013 MVP Global Summit.  Nearly 1,500 MVPs from around the globe will gather on the Microsoft campus to participate in nearly 600 technical sessions with hundreds of members of Microsoft product teams.

    The sheer number of participants, and the amount of content shared, makes this the largest community event in the world. But that’s only part of what sets this annual gathering apart. For over 20 years, Microsoft teams have sat down with these technology and community leaders to give MVPs visibility into early-stage products and new releases, providing the information they need to look ahead in their community work while gathering valuable feedback that helps make our products better.

    “It’s one thing to hear opinions and ideas through email,” says Microsoft CRM Senior Program Manager Amy Van Hollebeke.  “Sometimes you just get it completely differently when you’re sitting in a room together at the Summit and MVPs are able to really explain how a feature can be improved.”

    Most MVPs have numerous communications with Microsoft product teams throughout their award year, but the hands-on experience and deep exchange of ideas at the Summit is something most members of the community—including Microsoft’s product teams—look forward to all year. To lay the groundwork for this relationship, all MVPs sign a non-disclosure agreement assuring they will not share any confidential information they may gain from Microsoft. Throughout the MVP Global Summit, MVPs hear the refrain, “Don’t forget your NDA!” since it’s essential to the dynamic exchange of ideas between the MVP community and Microsoft. In case MVPs need a reminder, Orrin the NDA Octopus has a message for them.



    Editor's note: The following post was written by Visual C# MVP Peter Bromberg

    The Visual Studio Code Map feature is designed to help developers visualize relationships in code. The need for code maps will become clear when you are writing or debugging code and need to understand code relationships, particularly when getting involved in a new or unfamiliar codebase.

    Channel 9 has an excellent streaming video that covers Code Maps here: http://channel9.msdn.com/Series/Visual-Studio-2012-Premium-and-Ultimate-Overview/Visual-Studio-Ultimate-2012-Understanding-complex-code-with-Code-Map-ENU

    To start out with Code Maps, I chose to index Dapper .Net 4.5, a popular micro-ORM developed by the folks at StackOverflow.com. Simply right-click on the project in Solution Explorer, and choose “Show on Code Map.”
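    To give a sense of the code being mapped, here is a minimal, hypothetical consumer of Dapper’s core API (the Query<T> extension method on IDbConnection). The connection string, Dogs table, and Dog type are invented for illustration and are not from the Dapper source:

```csharp
// Hypothetical Dapper usage -- the kind of call graph Code Map draws.
// The connection string, Dogs table, and Dog type are placeholders.
using System.Collections.Generic;
using System.Data.SqlClient;
using System.Linq;
using Dapper; // adds Query/Execute extension methods to IDbConnection

public class Dog
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public static class DogRepository
{
    public static List<Dog> FindByName(string name)
    {
        using (var conn = new SqlConnection(
            "Server=.;Database=Kennel;Trusted_Connection=True;"))
        {
            // Query<T> maps result-set columns to Dog properties by name.
            return conn.Query<Dog>(
                "select Id, Name from Dogs where Name = @Name",
                new { Name = name }).ToList();
        }
    }
}
```

    Right-clicking a project containing code like this and choosing “Show on Code Map” is all it takes to get a class-level view of it.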

    This is the main view of the map from Dapper:

    You can see that relationships between the classes become instantly clear. This can be very useful, for example, when first getting into a new codebase to get a “bird’s eye” view of how everything is related.

    Below, I have expanded my utility helper class, SqlMapperUtil:

     

    All of the methods and their interrelationships become available. When you click on a particular method a detail view including the code is shown:

     

     

    When I execute my test program in Debug mode, I can click the Code Map icon just under the Help item on the main menu and get a synchronized debug session with my code and the code map together:

    Before this feature, we would step through our code and hope to remember how we got to a particular class and method. Now we can turn on the Code Map feature while debugging and see a synchronized view of what is happening. There are also context menus available that allow you to perform common tasks:

     

    Of course, we still have the Dependency Graph feature from the ARCHITECTURE menu, which will generate a dependency graph of our entire solution:

     

     

    The enhanced Code Map feature in Visual Studio 2013 (Ultimate) is a welcome addition for developers. As with many advanced features, we don’t discover its usefulness until we actually start to use it.

    About the author


    Peter Bromberg has been a Microsoft C# MVP since 2004; he is an author and UnEducator based in the Orlando, Florida area. He writes mostly about .NET, formerly for eggheadcafe.com and currently at his programming blog at peterbromberg.net. Peter is an ex-jazz musician and stockbroker who dreams of being a philanthropist. Research in microeconomics has given Peter an avid interest in the study of economic and climate change trends and the design of computer algorithms to process historical data sets utilizing neural networks, including some pioneering work with historical solar output data. As a computer programmer with over 25 years of hands-on experience in designing and developing large-scale enterprise applications, he is an expert at solving business problems. In addition to blogging, Peter has participated in Code Camps and has given presentations at local .NET User Group meetings. His motto is a quote from Thelonious Monk: “Wrong is Right”.



    Editor’s note: The following post was written by Dynamics CRM MVP Donna Edwards

    Microsoft Dynamics CRM 2013 - Business Process Flow 

    In this article, I will explore the new Business Process Flow (BPF) feature and provide some examples of how you can customize it to fit an organization’s need. 

    In Dynamics CRM 2013, BPFs have many beneficial qualities.  They are available for out-of-the-box (OOB) and custom entities, can span multiple entities, can be role based, can be included in solutions for export and import, and work in the browser, the Outlook client, and on tablets; you can also create multiple BPFs for a particular business process. 

    A key purpose of a BPF is to guide users through a specific business process to completion.  A BPF is useful in several ways, including but not limited to:

    • Highly configurable to meet the needs of any Organization
    • Ensure that a set of steps are completed for a specified business process
    • Jump stages if needed
    • Track and report on progress
    • Support the establishment of efficient, effective and repeatable processes
    • Guide existing and new employees through the process
    • Support quick, on-boarding of new hires

    Microsoft Dynamics CRM 2013 comes with three OOB BPFs; two are related to sales and one to service.  Let’s begin by briefly exploring the out-of-the-box Phone to Case BPF. 

    To view the OOB BPFs, select Settings and then Processes from the top navigation menu.

     

    Select All Processes and open the Phone to Case BPF.

     

    Before proceeding, let’s make a copy of the BPF by selecting the Save As button from the top navigation menu.

     

    After making the selection, a new BPF window will open with a newly created BPF named Phone to Case Process (Copy).  The new process will be in draft or inactive status.  We can change the name of the process by selecting the Expand / Collapse chevron from the right side of the process window.

     

    Let’s change the Process Name to “Case Resolution” by replacing “Phone to Case Process (Copy)” with the new name.  Select Save from the top navigation menu and then select the Collapse chevron. 

    When viewing the Case Resolution Process BPF, you will notice that there are three stages: Identify, Research, and Resolve.  Each of the stages contains multiple steps.  For example, the Identify stage includes two steps: Find a Customer and Find a Case.

     

    Let’s take a look at what the Identify Stage looks like on a new Case record.  To create a new case record, select Service and Cases from the top navigation menu.  Select the Identify process stage from the newly created Case to view the steps:

     

    From here you can complete a lookup for the Customer and find any similar cases.  Completing those two steps completes the Identify stage.

    Next, let’s make some changes to the new BPF we created.

    The first change we’ll complete is to make all of the steps in the Identify, Research and Resolve stages required with the exception of Assign to Others in the Research stage.  To do this, select the Required option on each of the Steps.

     

    Next we are going to add a new Stage and name it Propose Solution.  You can add a new Stage by selecting the plus button next to the word Stages.

     

    Once selected, a new Stage will be added to the bottom of the Stage list. Let’s update the name to Propose Solution, select Propose for the Stage Category, update Steps to display “Suggest a Solution”, select “Activities Complete” for the Field Value, and mark the Step as required.

     

    The next change I want to make is to move the Propose Solution above the Resolve stage.  I can easily make that change by selecting the Move Up option from the bottom of the process form.

     

    Once that is complete, our new BPF displays the stages in the required order, with Resolve being the final stage in the process.

     

    After completing the desired changes to the BPF, a few additional steps are required to make the new process available for use. 

    The first step we’ll take is to enable security roles.  Select the Enable Security Roles button from the top menu.

     

     

    Select the Enable for Everyone option and select OK. 

     

    Now let’s change the order of the BPFs so our new BPF displays by default.  Select the Order Process Flow button.

     

    Select the Case Resolution BPF from the list, select the green Move Up arrow, and select OK.

     

    The final step is to activate the process by selecting the Activate button from the top menu.

     

    The new BPF will apply to all newly created Cases by default, since we moved it to the top of the list. 

    If you choose, you can apply the new Case Resolution BPF to existing Case records by opening a Case record, selecting the More button from the command bar, selecting Switch Process from the drop-down list, and selecting the Case Resolution BPF from the list.

     

    Be sure to test your new BPF by creating a new Case to ensure everything is working as expected.  You can make any additional changes needed by simply opening the BPF record.

    I hope you enjoy the Microsoft Dynamics CRM 2013 Business Process Flow feature.   Be sure to share your favorite BPF features, tips and tricks with us. 

    Cheers

     About the author

     

    Donna has been working with the Dynamics CRM application from version 1.2 to the current release.  She partners with all levels of an organization to develop and deliver flexible, scalable solutions that simultaneously address short-term business requirements and long-term strategic growth objectives.  Her skillset includes: Pre-Sales Support • Solution Architect • Functional Consulting • Client Relations • Requirements Definition & Analysis • Business Process Engineering • Process Improvement & Automation • Operational Streamlining.  She is very active in the Dynamics community, answering questions in the MSDN forums, and is an avid blogger.  Follow her on Twitter.

     

    About MVP Monday

     The MVP Monday Series is created by Melissa Travers. In this series we work to provide readers with a guest post from an MVP every Monday. Melissa is a Community Program Manager, formerly known as MVP Lead, for Messaging and Collaboration (Exchange, Lync, Office 365 and SharePoint) and Microsoft Dynamics in the US. She began her career at Microsoft as an Exchange Support Engineer and has been working with the technical community in some capacity for almost a decade. In her spare time she enjoys going to the gym, shopping for handbags, watching period and fantasy dramas, and spending time with her children and miniature Dachshund. Melissa lives in North Carolina and works out of the Microsoft Charlotte office.



    Editor’s Note: In partnership with Microsoft Press, now celebrating their 30th year, MVPs have been contributing to an ongoing guest series on their official team blog. Today’s article is from Client Development MVP John Garland and is the 33rd in the series.

    New in Visual Studio 2013 – Windows Azure Mobile Services Integration

    Windows Azure Mobile Services (aka Mobile Services) is a Windows Azure offering aimed at providing client apps with turnkey access to services that address many of the typical backend needs encountered by mobile applications.  These include access to structured data storage, the ability to define server-side business logic that potentially includes integration with 3rd party services, user authentication and authorization, integrated support for push notifications, and support for scheduled script execution.  The platform offers several scalability and pricing options which allow a service instance to either grow or contract to meet the needs of the client apps that make use of it, and like most parts of the Windows Azure ecosystem, the Mobile Services feature set is actively being evolved to include more features that address even more common application scenarios.
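    As a rough illustration of that turnkey data access, here is a hedged sketch using the managed Mobile Services client library; the service URL, application key, and TodoItem type are placeholders you would replace with your own service’s values:

```csharp
// Sketch only: the service URL, application key, and TodoItem type
// are hypothetical placeholders for your own Mobile Service.
using System.Threading.Tasks;
using Microsoft.WindowsAzure.MobileServices;

public class TodoItem
{
    public string Id { get; set; }
    public string Text { get; set; }
    public bool Complete { get; set; }
}

public static class TodoStore
{
    // Both values appear on the service's dashboard in the Azure portal.
    static readonly MobileServiceClient Client = new MobileServiceClient(
        "https://your-service.azure-mobile.net/", "your-application-key");

    public static async Task AddAsync(string text)
    {
        // GetTable<T> is backed by the service's structured data storage;
        // InsertAsync round-trips the item and fills in server-side fields.
        IMobileServiceTable<TodoItem> table = Client.GetTable<TodoItem>();
        await table.InsertAsync(new TodoItem { Text = text });
    }
}
```

    The Visual Studio tooling discussed in this post generates starter client code along these lines for you.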

    In the following sections, I’ll go over how Visual Studio 2013 provides built-in support that not only simplifies configuring your .NET/XAML, JavaScript/HTML5, or C++/XAML Windows Store app projects to work with Mobile Services, but also provides tools to work with your Mobile Services content directly from within the Visual Studio 2013 IDE.  In case you’re wondering, this support isn’t limited to the paid Visual Studio SKUs – everything that will be discussed is also available in Visual Studio Express 2013 for Windows.

    NOTE: To work with Windows Azure Mobile Services, you will need at least one active Windows Azure subscription.  Information about getting started with a free trial account can be found here, and there are also several offers available to participants in the MSDN, BizSpark, and Microsoft Partner Network programs.  Also, in order to include Push Notification support in your app, your app will need to be associated with the Windows Store which requires that you have access to a Windows Store Developer account, which you can learn about here.

    Getting Started - Connecting Your Windows Store Apps to Mobile Services

    Visual Studio 2013 introduces the new Connected Service Manager which provides a central place where you can go to make connections between your project in Visual Studio and online services that it can use – for now these include the Microsoft Ad pubCenter and Windows Azure Mobile Services.  To access the Services Manager, select Add Connected Service… from the Project menu or select Add and then Connected Service… from the context menu for your project file in the Solution Explorer.  This will bring up the Services Manager dialog in which you can find a Mobile Services entry under Windows Azure.  This entry will include all of the existing Mobile Service instances for the Windows Azure subscriptions that you’ve configured Visual Studio to access.  If Visual Studio has not yet been configured to interact with the Windows Azure subscription you want to work with, you can use the Import subscriptions… link to optionally retrieve and read a Windows Azure subscription file to include that subscription.  You can also use the Server Explorer for more fine-grained control of the service configurations – I’ll talk more about the Server Explorer shortly.  Likewise, if you haven’t yet created the Mobile Services instance you want to work with, you can select the Create service… link to bring up the Create Mobile Service dialog from which you can provision a new Mobile Service instance.  To continue reading the full article click here

     

    John is a Senior Consultant at Wintellect and a Microsoft Client Development MVP, and has been developing software professionally for the past 15 years.  He is the author of the book "Windows Store Apps Succinctly”, and his work has been featured at Microsoft conference keynotes and sessions.  Prior to consulting, he spent much of his career working on high-performance video and statistical analysis tools for premier sports teams, with an emphasis on the NFL, the NBA, and Division 1 NCAA Football and Basketball.  John lives in New Hampshire with his wife and daughter, where he is an active participant in the New England development community.  John is a Windows Azure Insider and also a member of the Azure Mobile Services A



    Editor's Note:  The following post was written by Windows Phone Development MVP Matteo Pagani

    This year will be my third Summit, and I already know it will be a great experience, like the previous ones. The content, insights, and future plans shared by the product teams are certainly one of the main reasons to attend, but what makes it an extraordinary experience is getting in touch with people: networking is the key, both with product team members and with your fellow MVPs.

    So, my first tip is: don’t be shy! The MVP Summit is the perfect chance to meet in person people who were previously just an email address or a Twitter handle. You’re part of a global community, so the best way to enjoy the Summit is to step outside the inner circle of your local peers and meet people from all over the world. It will make a difference in the future and, why not, help you find new and important friendships! An example? In October, some MVP friends from all around the world and I organized a global initiative called Windows Phone Week. It started as a crazy idea to host Windows Phone events in three countries and suddenly exploded into a global event in more than 20 countries. It would never have been possible if I hadn’t had the chance to meet, in person, the other MVPs and the Microsoft product teams that supported our idea during the MVP Summit in February 2013.

    For the same reason, I suggest you not skip any of the networking events organized by the team: the MVP Showcase, Welcome Reception, Product Group evening event, and Attendee Party. They are all great chances to engage with your MVP friends! If you want to spend some time as a tourist, it’s better to save that time for before or after the MVP Summit, not during it.

    My second tip is to plan your schedule carefully: activities usually start early in the morning and, especially if you have to travel to the campus in Redmond, you need to allow enough time to have breakfast, to reach the campus on the available shuttles and, once there, to find the exact building where your activity will be held. So, set your alarm at least an hour and a half before the activities begin: being late is not an option!

    My third and last tip is to take some extra time and arrive in Seattle a few days before the Summit begins: most of all, because it will help you adapt to the new time zone, especially if, like me, you come from Europe. This way, you can avoid falling asleep in the middle of an interesting presentation.  Moreover, you will have a chance to visit Seattle, which offers many nice places, like the Space Needle, the Sci-Fi museum, or Pike Place Market. And if you’re a geek like me, I strongly suggest you pay a visit to the comic store under Pike Place Market: you will find a world of comics and gadgets from movies, TV series, and cartoons like I’ve never seen anywhere else. In addition, since you’re an MVP and probably a tech addict too, it’s worth a visit to the big tech shops around Bellevue, like the Microsoft Store, Best Buy, or Fry’s. The only downside? Get ready to hear your credit card crying!

    About the author

    photo

    Matteo Pagani is a developer with a strong passion for mobile development, focused on the Windows Phone platform and Microsoft technologies. He graduated in computer science in 2007 and, right after, started his first job as a web developer. In subsequent years he developed a great interest in mobile development and, especially, the Windows Phone platform: a passion he began sharing with the community by opening two blogs (in Italian and in English), where he regularly writes articles and technical posts about his experience with the Windows Phone and Windows 8 platforms. He is a regular writer for many technical websites and wrote the first Italian book about Windows Phone 8 development, published by *** Editore. He is also a speaker and has participated in many communities and official conferences like WhyMCA, WPC, Community Days, etc. He is a member of the Italian community DotNetLombardia as well. Since January 2011 he has been a Microsoft MVP in the Windows Phone Development category and, since October 2012, he has been awarded as a Nokia Developer Champion. Currently, he works at Funambol as a developer on the Windows team, where he works on Windows Phone and Windows 8 projects.



    Hi, my name is Michael B. McLaughlin and I’m a Visual C++ MVP. I’d like to introduce you to several graphics and game development additions and improvements in Visual Studio 2013.

    Remote Graphics Diagnostics

    Graphics debugging on remote machines through Graphics Diagnostics is a feature that I’m sure many graphics and game developers have been yearning for (I know that I have). In Visual Studio 2012 you could remotely debug an application’s CPU code using the Remote Tools for Visual Studio 2012. If you wanted to remotely debug graphics, though, then you could only do so programmatically[i].

    In Visual Studio 2013 with Windows 8.1 you can now use Graphics Diagnostics to debug apps on remote machines. To do so, first make sure that the appropriate version of Remote Tools for Visual Studio 2013 is installed on the remote computer. Next start the Visual Studio Remote Debugging Monitor on the remote computer (make sure to configure the firewall settings if the settings dialog appears). Then on your development computer, configure your project for remote debugging and then start Graphics Diagnostics. The first time you run a remote graphics debugging session, you will likely get a prompt on the desktop of the remote machine asking you to configure Windows Firewall to allow remote graphics debugging.  Configure the firewall settings and switch focus back to the game or app that launched. Then proceed to use graphics debugging on the remote machine just the same as you would use it on your development computer.

    Using Graphics Diagnostics on a remote machine does require local access to the machine (it doesn’t run over the Internet and you or someone working with you needs to be there to interact with the program to capture frames). But thanks to the ability to change the playback machine that the graphics code runs on when you debug it, you don’t need constant access to the computer that is exhibiting the problem. If the code runs fine on another machine when played back, it’s possible that the computer that is displaying the problem has a bug in its graphics driver which updating to the latest driver could resolve. It’s also possible that there is a bug in the graphics code but that the computers you had previously been testing the code on still managed to render the scene correctly. Meticulous debugging of the rendering of a captured frame (examining the assets, looking at pixel histories, stepping through shader execution, etc.) will help you isolate and resolve these sorts of insidious bugs.

    Graphics Diagnostics in Visual Studio Express 2013 for Windows

    Previously if you wanted to make use of graphics debugging in Visual Studio, you needed to have a non-Express version (e.g. Visual Studio Professional). In other words, the free versions didn’t have it.[ii] Many developers expressed a desire to have this functionality in Visual Studio Express versions. Microsoft heard us (yes, I was one of those developers) and have responded by including Graphics Diagnostics in Visual Studio Express 2013 for Windows.[iii] This is a great development and I applaud Microsoft and all the folks working on graphics development in Visual Studio for making it happen. It’s nice to know that developers creating Windows Store games and apps with Visual Studio Express can now make use of Graphics Diagnostics to debug their graphics code. To learn more about how to use Graphics Diagnostics, have a look at the Graphics Diagnostics examples on MSDN.

    Compute shader debugging

    Visual Studio 2013 also now includes compute shader debugging in Graphics Diagnostics. Now you can debug compute shaders in your games and graphics applications in the same way you can debug other shader types. For an overview of debugging a compute shader, see this walkthrough.

    As developers, one of the harder decisions we make is whether or not to support a technology that only some users will benefit from. This problem strikes us both with newer and older technologies. Does it make business sense to spend time and money supporting a new technology that only a small percentage of computers have? How about supporting computers that are 5-7 years old (or older) and require special workarounds or even completely different programming paths? While there are right answers to these questions, there aren’t any universally right answers; what is good for one project could easily be disastrous for another. So to me, it’s nice when a technology becomes widespread enough that it’s no longer in either of those grey areas.

    Over the past few years, most new computers have included support for DirectCompute (and GPGPU in general). This has created a large enough number of computers which support compute shaders that it is now generally reasonable to use this technology whenever it fits your problem and is available. Harnessing the power of the GPU to carry out massively parallel computations (which is what compute shaders and technologies like C++ AMP do) still only makes sense if your program can make productive use of such computations. But it’s no longer a bleeding-edge technology. It’s not available on every system but it is nonetheless mainstream. So it makes a lot of sense to learn DirectCompute if you haven’t already. Thanks to the addition of compute shader debugging in Visual Studio 2013, it’s easier than ever to use it to create great user experiences in your games and apps.

    Improvements to the Visual Studio Image Editor

    Visual Studio 2012 introduced the Visual Studio Image Editor, a modern image editor with support for widely used image formats and many common image editing operations. It also happens to be a great way to convert images to DDS format so that they can make use of a block compression format like BC1 and BC3. This shrinks both the download size and in-app memory usage of your image assets without, in most cases, a noticeable loss in quality.

    In Visual Studio 2013 the Image Editor is even better. The image size aspect ratio is now lockable and is locked by default, which makes resizing images much friendlier. Also, the properties of an image with mipmaps now show you exactly which mip level you are working with and even let you change it with a drop-down list (in addition to the plus and minus buttons from Visual Studio 2012). And internally the Image Editor now uses premultiplied alpha. Premultiplied alpha is a really great thing since it generates correct results when images with transparency go through a filtering operation (such as rescaling or mipmap generation) or are converted to a BC format. It avoids the creation of so-called dark halos or fringes around images. So it’s nice to see that the Visual Studio Image Editor is now using premultiplied alpha.
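    The dark-halo problem is easy to see with a little arithmetic. Consider averaging a fully opaque white pixel with a fully transparent neighbor, as mipmap generation does (a minimal sketch of the math, not the Image Editor’s actual code):

```csharp
// Demonstrates why filtering straight-alpha images produces dark
// fringes while premultiplied alpha does not. Values are in [0, 1].
public static class PremultipliedAlphaDemo
{
    public static void Main()
    {
        double c1 = 1.0, a1 = 1.0; // opaque white
        double c2 = 0.0, a2 = 0.0; // fully transparent (color is meaningless)

        // Straight alpha: the transparent pixel's black color bleeds in,
        // giving a gray half-covered pixel -> a dark fringe at edges.
        double straightColor = (c1 + c2) / 2;       // 0.5

        // Premultiplied: weight each color by its alpha before averaging...
        double avgColor = (c1 * a1 + c2 * a2) / 2;  // 0.5
        double avgAlpha = (a1 + a2) / 2;            // 0.5

        // ...and un-premultiplying recovers pure white at half coverage.
        double recovered = avgColor / avgAlpha;     // 1.0
    }
}
```

    The same weighting is why "over" blending with premultiplied sources reduces to dst = src + (1 - srcAlpha) * dst.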

    Resource Packs in Windows Store Games

    Windows 8 and Visual Studio 2012 let Windows Store developers use certain naming conventions to transparently load resources for different languages and scaling factors without any special code paths. Windows 8.1 and Visual Studio 2013 take this idea a step further by allowing you to create resource packs for each major DirectX feature level (9, 10, and 11). This makes it possible to include images compressed with BC6H and BC7 for users with DirectX 11 GPUs without making users with DirectX 9 or 10 GPUs who can’t use them spend time downloading them and waste storage space on them. Indeed, you can provide feature level appropriate versions of any kind of file-based resource (e.g. images, shader files, and 3D models) such that you can enable users with high-end graphics cards to get the extra benefits their GPU provides while users with power conscious devices or older devices can still enjoy your game or app.

    I definitely recommend that you read up on how to create and use resource packs in your Windows Store games and apps. It makes taking advantage of higher powered hardware while retaining support for all types of graphics hardware much easier.
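    To sketch the mechanism: resource packs reuse the folder/file qualifier naming convention of Windows 8.1 resource management. To the best of my recollection the DirectX feature level qualifier is named DXFeatureLevel; the folder layout and file names below are hypothetical:

```
Assets\Textures\DXFeatureLevel-DX9\brick.dds     (BC1/BC3 asset, loads on any GPU)
Assets\Textures\DXFeatureLevel-DX11\brick.dds    (BC7 asset, packaged only for DX11 GPUs)
```

    At packaging time each qualified variant goes into a separate resource pack, and the Store delivers only the packs matching the user’s hardware, which is how DX9-class machines avoid downloading BC7 content they cannot decode.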

    In closing

    Visual Studio 2013 provides many great new features and improvements for graphics and game developers. In addition to the things I outlined above, there are also improvements to the shader designer and the 3D model editor (and probably several more things I forgot). I think you’ll be pleased with Visual Studio 2013. I know I am.

    About the author 


    Michael B. McLaughlin is a Visual C++ MVP and the proprietor of Bob Taco Industries, a micro-ISV and consulting firm. He was previously an XNA/DirectX MVP and is a retired lawyer. His website is bobtacoindustries.com and his Twitter handle is @mikebmcl .




    [i] For those who are wondering, you can still use the programmatic interface for remote graphics debugging. The programmatic interface does not support using Direct3D 11.2 interfaces at this time so if you are using some of the new features like tiled resources or the Trim API then you will need to use Graphics Diagnostics for graphics debugging.

    [ii] Students and startups should look into DreamSpark and BizSpark, respectively. Through them, Microsoft provides software (such as Visual Studio Pro, Premium, and Ultimate) and support for free along with many other free resources designed to help young developers and new companies.

    [iii] At this time Graphics Diagnostics is not included in Visual Studio Express 2013 for Windows Desktop. I hope that it will be incorporated in the future.

     



    Editor's note:  The following post was written by ASP.NET MVP Jeffrey T. Fritz 

    Since the release of the ASP.NET MVC framework in 2009, web programming with .NET has been something of a guessing game. A choice needed to be made at the beginning of any project: should it be built with Web Forms or MVC? Developers were forced into a question akin to “What toppings do you want on your pizza?” And really, what developer isn’t going to change their mind halfway into a slice and think, “I would really have liked some green peppers on this…”? ASP.NET should not be like ordering a pizza, where you can’t change your mind once you place your order. It should be more like a buffet: I’d like a little Web Forms with some MVC controllers and a Web API here and there. Microsoft has finally brought these components together, and the result is One ASP.NET.

    One ASP.NET Project Type

    Starting with Visual Studio 2013, the guessing game about which project type to choose is over. There is now only one web project type in Visual Studio.

     

    FIGURE 1 - ONE ASP.NET PROJECT TYPE

    In this model, we have features and functionality that allow us to utilize the entire ASP.NET toolbox that is available to us. When you begin working with this project type, you will be prompted with additional configuration options to begin working with your project:

     

    FIGURE 2 - PROJECT TEMPLATE CONFIGURATION

    As you can see from the list of templates, we can choose to start with a standard Web Forms, MVC, or Web API project type. The other project types from the old MVC template dialog are still here. The interesting part is the checkboxes underneath the list of templates. Here, we can choose to add Web Forms references and folders to an MVC project, or MVC references and folders to a Web Forms project. This is the gateway to using more of these features in concert with each other in your project.

    Security Configuration Made Simple

    One of the trickier parts of ASP.NET is the configuration of authentication capabilities for your project. In previous versions, you needed a good understanding of the syntax of your web.config file in order to configure options properly. If you misconfigured something, there was no immediate feedback from Visual Studio. You only learned of a misconfiguration while debugging your application, when a white-screen error appeared describing the problem.

    Starting with the One ASP.NET configuration dialog in this version of Visual Studio, we have a configuration wizard that will walk you through the security configuration and output the correct settings for you in your project files.

     

    FIGURE 3 - AUTHENTICATION CONFIGURATION SCREEN

    This wizard walks you through configuring even the most complex authentication scenarios – Single Sign-On using Active Directory from a foreign domain, for example. These first steps that the new Visual Studio provides will get your project started correctly, and you will be able to build on them easily for the life of your application.

    Scaffolding on Steroids

    In previous versions of Visual Studio, the scaffolding capabilities of ASP.NET MVC were a huge benefit. You could point to an Entity Framework context, and a collection in that context, and Visual Studio would generate from templates a set of views, a controller, and the appropriate actions to complete basic read and write operations. In this new version of Visual Studio, we have significantly more options available when we want to scaffold out objects in our ASP.NET application. You can begin to scaffold objects with the new “Add – New Scaffolded Item” menu option:

     

    FIGURE 4 - SOLUTION EXPLORER CONTEXT MENU

     

    This new option removes many of the ugly sub-menus and child menus that were present and confusing not just to developers, but to Visual Studio integrators as well. How many “Add” item options do we really need in this menu before it becomes unwieldy and annoying to use?

    By choosing the “New Scaffolded Item” option, the following window opens, giving a large selection of all of the great templates that you can build on:

     

    FIGURE 5 - SCAFFOLD OPTIONS

    These options in the MVC selection should look familiar, except for one near the bottom: “MVC 5 Dependencies.” With this option, the MVC 5 framework can be added to a project that was started without the MVC folder structure and configuration. This is SIGNIFICANTLY easier than the project-file hacking and folder modifications that we had to do in prior versions of Visual Studio.

    The other MVC templates in this box lead to configuration windows similar to those in prior versions of Visual Studio. You will be able to configure the naming of your objects and your Entity Framework options appropriately.

    Look for updates to come from Microsoft as this options dialog is expanded to include new templates, and as software vendors begin offering their own templates to assist in your application development.

    Summary

    These new project construction, configuration, and scaffolding options are just the tip of the iceberg in the new productivity capabilities for web developers in Visual Studio 2013.  I have written a whitepaper called “10 Game Changing Features in Visual Studio 2013 for the ASP.NET Developer”.  Look for more great feature discussions on my blog and follow me on Twitter (@csharpfritz) to see what else I am sharing to make web developers happier and more productive.

    About the author

    Jeffrey T. Fritz is a Microsoft MVP in ASP.NET and IIS, an ASPInsider, an INETA Community Champion, a Pluralsight author, and a Developer Evangelist for Telerik. Jeffrey has over 15 years of experience building large-scale multi-tenant web applications in the software-as-a-service model. He speaks regularly at developer events through the INETA speaker program and at large conferences like DEVIntersection. Recently, Jeffrey published articles about ASP.NET in MSDN Magazine and on CodeProject. He is a huge fan of the One ASP.NET project model.




    Attending the Microsoft MVP Summit!

    By Magnus Mårtensson, @noopman, Windows Azure MVP

    Is it your first time as a Most Valuable Professional to attend the Microsoft MVP Summit? I am so jealous because you’re in for such a treat! I wish it was my first time again. #nostalgia The best quote I've heard from spending a few Summits at Microsoft comes from an evening event at a pub where the great Clemens Vasters (Windows Azure Service Bus Program Manager) commented to a colleague on how all of us MVPs sitting around the table were, “A few thousand of our closest friends.” That’s such a nice comment and it perfectly frames how I feel about the MVP Summit and how Microsoft receives us when we come to visit. Microsoft knows us and we know Microsoft. It’s like two good old friends who really enjoy each other’s company meeting up and enjoying a few days together.

    Summit is a well-oiled machine with great transportation, receptions, meals and sessions. Just follow the flow and let the organizers and MVP old-timers lead the way. The sessions of course are the official reason we are in Redmond and you will get the latest and the greatest inside scoop on the technologies in your expertise. Personally, the Windows Azure group is my crowd. The content we get at Summit is always very good, organized by our community lead Mark Brown. All of the Program Managers and Marketers come to our session room and deliver NDA news about what’s coming next in the Windows Azure platform. Yeah, that’s right, Non-Disclosure Agreement information is flowing your way! MVPs are literally the only ones outside of Microsoft who get this information and this hospitable treatment. We try to cram as much as we can during session hours into our brains. It is hard work, I tell you, to take in so much great content in such a short time. Do take time to get to know the teams at Microsoft better and become friends with the folks who build your products! For the rest of the year you will be interacting with these connections in your work. Also the people working on the teams want our feedback on the directions of the products and features currently being built. As a consultant I have used these connections almost daily in what I do. My customers know that if they hire me they have a good line into Microsoft and can get the help they need directly from the source. We MVPs are connected to the source all year round but at Summit that connection becomes physical for a week. That’s the opportunity to build valuable professional connections and friendships. Usually on one of the Summit days the schedule is a bit more open and you can cross over between expertise groups and listen to more general sessions on other topics than what you are focused on. Sessions and content will blow your mind as a first time Summiteer.

    Then there is the networking. I can’t even begin to describe this. But I will give it a try! The MVP Summit is packed full of super smart and great tech-colleagues and friends. In the evenings there is usually more than one event to choose from in Bellevue. You have a meal and a drink or two or three. What happens in the Bellevue night stays under NDA too! Network, network, network! Hang out and chat, strike up a conversation in the halls during the day and in the bars at night. The first question is always “What’s your expertise?” All of us experts come together and we really thrive on learning and sharing. We MVPs put the ‘care’ into ‘sharing is caring’. Summit is a huge honey pot of knowledge and experience - a continuous fire-hose-draught of learning if you wish! Accompanied by an actual draught. You will, quite exhausted every evening, crawl into bed and, with a happy grin on your face, drift away into a few hours of sleep. Until the next day when you jump up and do it all over again!

    After a week of this your brain will be pretty well fried. You will journey back home content and immediately begin dreaming about the next Summit. Enjoy your first MVP Summit! I wish you many more in the future, you fortunate first-time Most Valuable Professional. See you in Seattle!

    About the author


    Magnus is a Development and Architecture Consultant and focuses on BIG server solutions and server-side architecture. In his work as a consultant he helps ISVs and other customers take off and be successful in the Cloud, and assists with project effectiveness and development process streamlining. Magnus is also a passionate international speaker and trainer, and the next gig he would love to book is Yours! He also creates conferences and/or assists in creating agendas for conference tracks. Naturally Magnus is involved in the local development communities http://swenug.se and http://azureug.se. Get in touch with Magnus at magnus@magnusmartensson.com or find him at: http://magnusmartensson.com/ http://twitter.com/noopman https://www.facebook.com/noopman



    Git for TFS 2013 (11/13/13)

    Editor's note:  The following post was written by Visual Studio ALM MVP Esteban Garcia

    One of the great new features of TFS 2013 is the addition of Git as a source code repository.  Git is a Distributed Version Control System (DVCS) that has gained a lot of popularity in the past few years.  Git allows you and your team to work completely disconnected by keeping a copy of your source code locally, including all your change history.  By doing this, you are able to commit your changes locally, do file comparisons, create branches, merge your code, and much more.  Once you are ready to share your changes with the rest of the team, you are able to push your changes to the centralized Git repository contained in your TFS Server.

    Git is not replacing Team Foundation Version Control (TFVC), but it does give you and your team another option.  So starting with TFS 2013 and Team Foundation Service, when you create a new Team Project, you are able to decide which source control repository you will use.

    Implementation

    TFS's Git implementation is based on msysGit so this is not just a small subset of Git functionality.  If you are already used to working with Git, you should be able to get up and running quickly.  Even though the Git commands that you will use are the same as other Git implementations, the backend is very different.  As you may know, Team Foundation Server uses SQL Server to store all of its data, and TFS Git is no exception.  This means that your backup and restore procedures won’t change whether you are using TFVC or Git.  This implementation also means that all the integration points that you have when using TFVC are also available when using Git.  This includes Work Item associations, build integration (Continuous Integration, Gated Builds, Associated Changesets in the Build Summary), alerts, and more.

    Getting Started

    Before you get started, you have to create a Team Project that uses Git for Source Control.  To do so, you follow the same steps as you would normally follow to create a Team Project from Team Explorer, but you will now see a new step called "Specify Source Control Settings" which will allow you to pick between Team Foundation Version Control and Git.  As described in this step: "Git is a Distributed Version Control System (DVCS) that uses a local repository to track and version files.  Changes are shared with other developers by pushing and pulling changes through a remote, shared repository."  

     

    Once you get through the Team Project creation wizard, you will now have a fully-working Team Project with all the great features that you are already used to (and some new ones with 2013), but instead of using TFVC you will now use Git.

    Working with Git from Visual Studio 2013

    When you open Visual Studio and connect to your Team Project, Team Explorer will look and behave differently since it is aware that you are using Git.

    This is what Team Explorer looks like when you connect to a Team Project that uses Git:

     

    Before you can start working with any code stored in TFS, you have to clone the repository (see highlighted link above). This creates the mapping between your local Git repository and your Git repository that is hosted by TFS.  Clicking on the link allows you to specify the location of the server repository and initializes your local repository:

     

    As part of your local repository initialization, you will see a hidden .git folder, and two files used by Git: .gitattributes and .gitignore.

    You can now get started with either a brand new project (in my case I don't yet have anything in TFS) or making changes to existing projects.

    When you are done with your changes, you are able to access features similar to what you would get when working with TFVC, except that the workflow changes.  Git expects you to commit locally at least once and then push those changes to the server.  Since your commits don't affect the rest of your team until you push them to the server, you should feel encouraged to commit often, since Git makes branching, merging, and rollbacks a very trivial process.

    When you are ready to commit, you are taken to the  "Changes" tab in Team Explorer.  Here, you can enter a comment, select your included changes, and associate your commit to a Work Item:

    You can create associations in two different ways.  You can either select “Add Work Item by ID”, the way you are probably used to doing it.  Or, as part of your comment, you can enter the Work Item ID prefixed with a hash sign.  For example, in my comment above, I'm associating my commit with Work Item ID 2 by entering my comment like this: "Created new project #2".  One of the reasons for this feature is that in some cases your team members may need to access your source control from outside of Visual Studio, for example by using Git's command-line support, and this allows you to still create associations with your TFS Work Items.

    You can now push your changes to your server by accessing the Unsynced Commits tab in Team Explorer. You can access it from the link in the Changes tab or from your Team Explorer home.  When you get there, you will see a list of all your local commits.  Before pushing your changes to the server, you should first Pull, which will allow you to resolve any conflicts with changes made by the rest of your team.  Once those conflicts are resolved, you can proceed with a Push operation, which will move all your local commits to the centralized repository.
     

     

    Once you push your changes to the server, you are able to view history by selecting the “View History” option from the “Actions” dropdown in the Unsynced Commits tab.  Double-clicking on one of the commits in the history window brings up the Commit Details, which shows the commit comment, Related Work Items (remember that I used the hash sign to associate), and the files that were affected.

     

    Branching

    One of the most important features of Git is its ability to painlessly create branches and merge between them. Since these operations are performed locally, you don't need any special permissions to perform them in your local repository.  Team Explorer gives you a whole section dedicated just to branching:

     

    From here you can create new branches, merge between them, view unsynced changes in each branch, and publish your branches to the server.

    Creating a Branch:

     

    Merging between branches:
     

     

    Build Integration

    Just like when you are using Team Foundation Version Control, you are able to get full integration with the TFS automated build system.  When setting up a Build Definition, you are able to specify your repository name, the Branch that should be used to pull code from, and you are even able to pull code from a Git repository outside of TFS.

     

    The Build Process Arguments are also a little different since you are now dealing with a different structure than you would when connecting to a TFVC repository, but once you configure the build, all build operations work the same as they would in prior versions of TFS.

    This is an awesome new addition to TFS, which will give you and your team another option for source control.  If you have team members working outside of Visual Studio, they are able to connect to Git from their favorite Git plugin and collaborate with the rest of the team.  The greatest part is that all the integrations that we all love about TFS are still there, so you get all the great features of Git while being able to collaborate with the rest of the team using Work Item Tracking, Build Integration, Microsoft Test Manager, and more.  Now go out and give it a try!

    About the author

    Esteban Garcia is a Visual Studio ALM MVP and ALM Ranger. He is the ALM Practice Lead at AgileThought (Microsoft ALM Gold Partner), Pluralsight author, Telerik Insider, and president of ONETUG (Orlando .NET User Group). He has over 15 years of professional experience in software development, working in all phases of enterprise application development and design using Microsoft technologies. Esteban has a BS degree in Computer Engineering from the University of Central Florida. You can follow his blog at http://www.almguide.com and follow him on Twitter at @EstebanFGarcia.



    Editor's note:  The following post was written by Visual C# MVP Bill Wagner

    I’m excited to see the expanded support for TypeScript in Visual Studio 2013. It represents several good changes relating to how Microsoft releases developer tools.

    First of all, TypeScript is nearing its 1.0 release. The current version is 0.9.1.1. There have been important milestones along the way, and TypeScript is maturing well. I’m impressed with what the team has done so far.

    Second, it provides a great example of the de-coupling of various libraries and frameworks that make up the .NET framework. Microsoft did not delay the release of Visual Studio 2013 until TypeScript reached its 1.0 milestone. Instead, the ecosystem now makes it easier to add TypeScript support separately, and update that support as TypeScript progresses to release state. I’ll explain those details a bit later. This decoupling is great. You can see it with other .NET libraries as well: ASP.NET MVC, Entity Framework, Azure tools. They all release on their own schedule. This means each product team can set its own schedule. That gets more features in our hands sooner. We can upgrade individual libraries on our timeframe, rather than wait for a big, bundled release from Microsoft.

    I’ll walk you through creating your first TypeScript project. I’m going to highlight a few other features of newer releases of Visual Studio as I do this. Instead of going to File:New Project, type “New” in the Quick Launch bar. One of the items you can pick is File:New Project. Pick that, and scroll until you see a node for “Install TypeScript project templates”.

     

    That’s what I meant about these out of band releases. Visual Studio ships with a template that installs the TypeScript supporting tools and templates. Run that template, and it installs the latest TypeScript templates for you, along with the other TypeScript tools. Once you’ve done that, you can create a new TypeScript project.

     

    The template creates the SPA equivalent of “Hello World”. You’ve got a single TypeScript file, app.ts, that contains the code, and a blank HTML page that loads app.js into the DOM.

    Do a build, and the TypeScript compiler processes app.ts to produce app.js. TypeScript compiles to JavaScript, using standard idioms that are very portable between different browsers. Run the app, and you can see the hello world message update with the time every few seconds.

    The code shows the basics of TypeScript syntax. At the bottom of app.ts, you can see the window.onload handler that runs when the page starts. It creates a new Greeter object and starts that greeter.

     

    Listing 1: app.ts

    class Greeter {
        element: HTMLElement;
        span: HTMLElement;
        timerToken: number;

        constructor(element: HTMLElement) {
            this.element = element;
            this.element.innerHTML += "The time is: ";
            this.span = document.createElement('span');
            this.element.appendChild(this.span);
            this.span.innerText = new Date().toUTCString();
        }

        start() {
            this.timerToken = setInterval(() => this.span.innerHTML = new Date().toUTCString(), 500);
        }

        stop() {
            clearTimeout(this.timerToken);
        }
    }

    window.onload = () => {
        var el = document.getElementById('content');
        var greeter = new Greeter(el);
        greeter.start();
    };

    The Greeter class shows the basics of TypeScript’s class support. Greeter is a class. It has private member variables for its container in the page. Its constructor modifies the DOM to add its visual representation. Greeter also contains two methods: start and stop. These follow common JavaScript idioms to update the page using a JavaScript timer object. TypeScript’s integration with JavaScript functionality is very seamless.

    TypeScript’s support includes templates for new TypeScript files. Right-click on the project node and select “Add”, then “New TypeScript file”. The current version of the new item template provides some boilerplate that demonstrates common TypeScript features. You can learn more about the language syntax on the TypeScript homepage, http://typescriptlang.org. I’m going to discuss the Visual Studio integration, rather than the syntax. TypeScript adds static type analysis to JavaScript. This enables many developer productivity enhancements. The TypeScript compiler can spot classes of coding errors before you execute the code. Other developer productivity tools include IntelliSense. Instead of remembering every API method on a type or interface, you can simply type a ‘.’ and scroll through the options.
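For instance, here is a minimal sketch (my own example, not from the article's listings) of the kind of mistake static type analysis catches before the code ever runs:

```typescript
// The parameter and return type annotations tell the compiler
// that only numbers are acceptable here.
function total(price: number, tax: number): number {
    return price + tax;
}

var ok = total(100, 8); // compiles fine, evaluates to 108

// The next line would be rejected at compile time:
//   var bad = total("100", 8); // error: string is not assignable to number
// In plain JavaScript that call would "work" and silently return "1008".
```

The annotations cost almost nothing to write, and the compiler turns a runtime surprise into an immediate build error.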

    Listing 2: example1.ts

    // Interface
    interface IPoint {
        getDist(): number;
    }

    // Module
    module Shapes {

        // Class
        export class Point implements IPoint {
            // Constructor
            constructor (public x: number, public y: number) { }

            // Instance member
            getDist() { return Math.sqrt(this.x * this.x + this.y * this.y); }

            // Static member
            static origin = new Point(0, 0);
        }
    }

    // Local variables
    var p: IPoint = new Shapes.Point(3, 4);
    var dist = p.getDist();

    But TypeScript’s typing rules are not the same as those you are probably familiar with from C# or VB.NET. TypeScript uses structural typing. If a type contains the methods required by an interface contract, it need not explicitly declare that it implements the interface. For example, in the generated template, I can remove the declaration that the Point class implements the IPoint interface, and the code still compiles and runs. The fact that the Point class still contains the getDist method means that Point still satisfies the IPoint contract.

    This looser typing improves TypeScript’s compatibility with existing JavaScript libraries, and improves some dynamic scenarios.
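A minimal sketch of structural typing (the Named and Widget names here are my own, not from the article's listings):

```typescript
// An interface describing a shape: anything with a getName(): string method.
interface Named {
    getName(): string;
}

// Note: no "implements Named" clause anywhere on this class.
class Widget {
    getName(): string { return "widget"; }
}

// The assignment compiles because Widget's shape matches the Named contract.
var n: Named = new Widget();
console.log(n.getName()); // "widget"
```

This is the same reason the template's Point class keeps compiling after its explicit implements clause is removed: the contract is satisfied by shape, not by declaration.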

    I could only touch on the features of TypeScript here. I’m excited about the possibilities TypeScript enables. I’ll close this with a few final points on the TypeScript ecosystem.

    The TypeScript compiler is written in TypeScript. I’ve seen work and announcements about support on Linux and Macs, including an Eclipse plug-in. Visual Studio 2013 has excellent first-class support for TypeScript, but TypeScript is not a Microsoft-only technology. It embraces open web standards.

    There’s a great GitHub project, DefinitelyTyped (https://github.com/borisyankov/DefinitelyTyped), that provides TypeScript type definitions for a large, and ever-growing, list of popular JavaScript libraries. If you are planning to use TypeScript with existing libraries, look here.

    You can learn more about TypeScript at http://typescriptlang.org.  And, now that Visual Studio 2013 is released, watch my blog: http://billwagner.azurewebsites.net/blog. I’ll be starting a lengthy series that explores TypeScript in more depth there.

    About the author


    Bill has spent his entire career in the software industry, spanning both technical and business roles. His technical time is spent between curly braces, primarily with C#. He's the author of the best selling "Effective C#", now in its second edition, and "More Effective C#". His articles have appeared in MSDN Magazine, the C# Developer Center, Visual C++ Developer's Journal, Visual Studio Magazine, ASP.NET Pro, .NET Developer's Journal and more. He's written hundreds of technical articles for software developers. He actively blogs about technical topics at http://billwagner.azurewebsites.net.  


     



    Editor's note:  The following post was written by System Center Cloud and Datacenter Management MVP Rodney Buike

    MVP Summit is almost here!  It seems like only 6 months since the last Summit :)  Having attended, hosted and presented at more conferences than I care to remember, I’ve found a common set of tips that apply to all these types of events.  I thought I’d share some of them ahead of the MVP Summit to help first timers and repeat visitors get the most out of the event!

      1.  Get Comfortable – Find a seat, take off your jacket, grab a drink and relax a little.  It’s a busy week with lots of great content, speakers and feedback sessions.  If you are new to the group there are a few ways to break the ice and one of them is taking off your coat.  Nothing says “I’m about ready to leave” like someone wearing a jacket - of course the swanky MVP jackets are not included in this ;)  If you are not new to the group make sure you do your best to make others feel welcome.  A former colleague used the term “rock piles” to describe a group of people huddling, and it’s not overly welcoming.  Open up that circle and make new friends while welcoming your old friends.

      2.  Outside the Box – While we are all deep subject matter experts in an area and MVP Summit is all about gathering those groups together, find some time to attend a session that is outside of your area of expertise.  While most sessions are targeted there are opportunities to sit in on some other sessions and learn a bit about something you may not work with daily but that you are interested in learning more about.

      3.  Go Offline – MVP Summit allows you to connect with other MVPs and the PGs for a short while, face to face, once a year.  OK twice this year but still.  Close the laptop lid, stop hiding behind a screen.  There isn’t a whole lot of anything you can tweet, blog, or post to Facebook anyway.  Emails can wait until the next break, or the evening.  If the email is incredibly urgent quietly take your device into the hallway to respond.  As a speaker, and an attendee, the last thing I want to hear is you pecking away at your keyboard.

      4.  Break the Ice – Say hello, introduce yourself, work the room.  Participate in the conversations, share your thoughts, provide your feedback, learn.  Opportunities like this are rare so make the most of them!

  5.  NDA – Most importantly, respect the NDA.  "What happens at Summit stays at Summit" may be a cliché, but it's the best approach.  It allows Microsoft to continue hosting the MVPs and sharing information, and it ensures you won't be booted from the program :)

     

Most importantly, have fun!  Take in the sights of Bellevue and Seattle; there are some great restaurants in the area, trails for running or hiking, museums, galleries and more.

     

    See you there,

    @Thelazyadm



    By Mike Hickman, Director of Global Community Engagement at Microsoft

     

    We’re poised to welcome well over a thousand technical community leaders to Microsoft corporate headquarters in Redmond for the MVP Global Summit, one of the largest and most important community events in the world. This is a special Summit—the second one this year—which we expect will foster great conversations between MVPs and Microsoft product teams during this important product release cycle.

    At the February 2013 MVP Global Summit, we piloted the MVP Showcase, where MVPs from around the world presented their knowledge of the Microsoft platform and the solutions and capabilities they produce to Microsoft teams and the MVP community. It was enormously successful, so it’s back at this month’s Summit—only 30 percent bigger.

Although many MVPs were here just nine months ago, we're expecting a great turnout for the November 2013 MVP Global Summit—around 1,400 MVPs are scheduled to attend.

To welcome them personally, and to provide some more information about the event, Annabelle Autin-Perrault, senior director of Community and Online Support at Microsoft, has created a video, which you can view on the original post.

     



    Editor’s note: The following post was written by Microsoft Integration MVP Damir Dobric

In the era of devices and services, Windows Azure Service Bus will definitely play a more important role. However, the huge number of devices that can connect to the bus, in both practical and hypothetical scenarios, can become an issue. Handling such a large number of connections is not an easy task and can push any technology to its limits.

Windows Azure Service Bus is designed to handle a huge number of connections. Still, in a typical scenario you cannot simply connect millions of devices to it and expect everything to just work. Fortunately, for some specific scenarios there are solutions that work out of the box; a good example is Service Bus Notification Hubs.

In the majority of scenarios, however, you have to design the system yourself to support a huge number of connections. In this context we usually talk simply about devices, because they will represent the majority of connections in the near future. Keep in mind, though, that a connection can be opened by any kind of computer, application or similar client.

This article gives you an overview of many documented and undocumented details related to Service Bus entities and very large numbers of connections (connected devices).

Service Bus protocols and connection behavior

Since SB v2, clients talking to Service Bus have three ways (transports/protocols) to connect, independent of the technology they use:

    1. Service Bus Messaging Protocol
    2. AMQP
    3. HTTP/REST

SBMP (Service Bus Messaging Protocol) is the TCP-based protocol used inside the .NET client library, and it is enabled by default when you use the Service Bus .NET SDK. The code snippet below shows how to create a queue client from a connection string, send 10 messages to the queue and finally start receiving messages by using the message pump. For more information about the message pump and other v2 features, take a look at this article. (Note: this sample requires SB 2.0.)

string connStr =
    "Endpoint=sb://….servicebus.windows.net/;SharedSecretIssuer=…;SharedSecretValue=..=";

var client = QueueClient.CreateFromConnectionString(connStr, qName,
    ReceiveMode.PeekLock);

// Send 10 messages with a random payload.
for (int i = 0; i < 10; i++)
{
    client.Send(new BrokeredMessage("i = " + i.ToString() + ", payload = " +
        new Random().Next().ToString()));
}

// Start the message pump to receive messages.
client.OnMessage((msg) =>
{
    . . .
});

     

     

If you want to change the underlying protocol to AMQP, all you have to do is slightly change the connection string, as shown in the following example:

    string connStr = "Endpoint=sb://….servicebus.windows.net/;SharedSecretIssuer=…;SharedSecretValue=..=;TransportType=Amqp";

    var client = QueueClient.CreateFromConnectionString(connStr, qName, ReceiveMode.PeekLock);

AMQP is a standardized, high-performance protocol supported by many platforms, such as Java. That means that by using the SB client with AMQP you could communicate with other messaging systems, such as IBM MQ. As of SB v2, SBMP is highly optimized and, in this context, close to AMQP. Both protocols behave similarly: they establish a permanent TCP connection to Service Bus.
In contrast to SBMP and AMQP, the HTTP/REST protocol does not establish a permanent connection to Service Bus. HTTP/REST is request/response based, which makes it a non-permanent (decoupled) connection.


    Service Bus Messaging Factory

When an instance of QueueClient is created, a new instance of the MessagingFactory class is created.

     

MessagingFactory often seems to be a hidden and unimportant artifact, but it is highly important, because it creates the permanent connection to the Service Bus endpoint. That means that if you want to send a lot of messages to Service Bus, you should probably open multiple connections to increase performance.
For example, the following code uses a single connection to Service Bus to send and receive a message. This is because a single MessagingFactory is in play.

// Implicitly creates the connection.
MessagingFactory factory = MessagingFactory.CreateFromConnectionString(m_ConnStr);
var sender = factory.CreateMessageSender("hello/q2");

// Shows how to create the message.
BrokeredMessage msg = new BrokeredMessage("hello");
msg.MessageId = Guid.NewGuid().ToString();

// Sends the message.
sender.Send(msg);

// Shows how to receive the message.
var receiver = factory.CreateMessageReceiver("hello/q2");

     

     

You can create as many clients and threads as you want from one MessagingFactory, but all of them will use the same connection to Service Bus. That means that if you write an application with many client instances (QueueClient, TopicClient, etc.) running on one underlying TCP connection (created from the same MessagingFactory), your throughput will be limited by that connection. If you want to increase throughput, create multiple MessagingFactory instances and distribute your clients across them.

The following example will open two TCP connections to SB.

// First connection.
MessagingFactory factory1 = MessagingFactory.CreateFromConnectionString(m_ConnStr);
var sender1 = factory1.CreateMessageSender("hello/q2");

BrokeredMessage msg = new BrokeredMessage("hello");
msg.MessageId = Guid.NewGuid().ToString();

// Sends the message via the first connection.
sender1.Send(msg);

// Second connection.
MessagingFactory factory2 = MessagingFactory.CreateFromConnectionString(m_ConnStr);
var sender2 = factory2.CreateMessageSender("hello/q2");

msg = new BrokeredMessage("hello");
msg.MessageId = Guid.NewGuid().ToString();

// Sends the message via the second connection.
sender2.Send(msg);

     

     

     

Sometimes you will not even deal directly with MessagingFactory. For example, you can create a client instance from a connection string. Now you might ask yourself how many connections (factories) the following piece of code will open.

    var client1 = QueueClient.CreateFromConnectionString(m_ConnStr, qName, ReceiveMode.PeekLock);

    var client2 = QueueClient.CreateFromConnectionString(m_ConnStr, qName, ReceiveMode.PeekLock);

     

     

     


This is an undocumented detail. Right now (SB v2) it will open a single MessagingFactory, which will be shared by the two clients. Because this behavior is undocumented, it might change in the future. If you need explicit control over the MessagingFactory, create it explicitly as shown in the examples above.

    What is a connection link?

In this context there is also one important and undocumented artifact, which was never officially introduced, because you never deal with it directly. I call it a "connection link". Here is a definition: one TCP connection can host multiple "connection links". Each time you create a client such as a message sender, a receiver or a queue/topic/subscription client, one "connection link" is created, but it runs over the single physical connection described above.

The following picture illustrates this: N senders created on top of a single MessagingFactory will establish N "connection links" through a single TCP connection.

     

Last but not least, TCP connections are sometimes dropped. If that happens, the MessagingFactory will automatically re-establish the physical connection. Don't worry: all "connection links" of clients based on that connection will automatically be reconnected too, so you don't have to take care of this yourself.
The "connection link" is a virtual artifact, but it is very important when dealing with Service Bus entity quotas, which are described in the next section.

    Service Bus Entity Quotas

If you want to learn more about Service Bus limits (quotas), take a look at the official MSDN article. The following picture, showing the part that matters in this context, is taken from that article.

In fact, this table contains all you need to know about quotas. But if you don't know how the TCP connection to Service Bus works, and what a "connection link" is, the table will not help much.

The number of "subsequent requests for additional connections" is in fact the number of "connection links" that can be established to a single messaging entity (queue or topic).
In other words, if you have one queue/topic, you can create a maximum of 100 MessageSenders/QueueClients/TopicClients that send messages to it. This Service Bus quota is independent of the number of MessagingFactory instances behind the clients. You may now be asking yourself why the MessagingFactory is important at all if the quota is limited by the number of "connection links" (clients), and you would be right: there is no correlation between the MessagingFactory and the quota of 100 connections.
Remember, the quota is counted in "connection links". The MessagingFactory helps you increase throughput, but not the number of concurrent connections.
The following picture illustrates this:

     
The picture above shows the maximum of 100 clients (senders, receivers, queue clients, topic clients) created on top of two MessagingFactory instances: two clients use one factory and 98 clients use the other.
Altogether there are two TCP connections and 100 "connection links" shared across those two connections.
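The counting rule behind that picture can be sketched in a few lines. This is a language-neutral model written in Python for illustration, not Service Bus code; the 2 + 98 split is the one shown in the picture:

```python
# Model of the rule above: each MessagingFactory owns one TCP connection,
# and every client created from a factory adds one "connection link"
# multiplexed over that factory's connection.
clients_per_factory = [2, 98]                # the split shown in the picture

tcp_connections = len(clients_per_factory)   # one physical connection per factory
connection_links = sum(clients_per_factory)  # one link per client

print(tcp_connections, connection_links)     # 2 connections, 100 links
```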

How to handle a huge number of devices?

Probably the most important question in this context is how to handle a huge number of devices (clients) when the messaging entity limit is that low (remember: 100). To support, for example, 100,000 devices, assuming each device creates one "connection link" over one physical connection, you will need 1,000 messaging entities.
That means that if you want to send messages from 100,000 devices, you need 1,000 queues or topics to receive and aggregate those messages.
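That arithmetic can be written down as a quick capacity check. The helper below is a back-of-the-envelope sketch in Python (the function name entities_needed is my own); the quota of 100 links per entity is the figure from the quota table above:

```python
import math

ENTITY_LINK_QUOTA = 100  # max "connection links" per queue/topic (quota table)

def entities_needed(devices: int, links_per_device: int = 1) -> int:
    """Minimum number of queues/topics needed so that every device can
    hold its own connection link without exceeding the per-entity quota."""
    return math.ceil(devices * links_per_device / ENTITY_LINK_QUOTA)

print(entities_needed(100_000))  # 1000 queues or topics for 100,000 devices
```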

The quota table shown above also defines the limit on "concurrent receive requests", which is currently 5,000. This means you can create a maximum of 100 receivers (or QueueClients, SubscriptionClients) and have at most 5,000 receive requests in flight, shared across those receivers. For example, you could create 100 receivers and have each of them call Receive() concurrently from 50 threads, or you could create one receiver and call Receive() 5,000 times concurrently.
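The two limits combine as follows; here is a small Python sketch (the helper name receive_layout_fits is hypothetical, and the 100 and 5,000 figures are the quotas quoted above):

```python
ENTITY_LINK_QUOTA = 100          # max receivers (connection links) per entity
CONCURRENT_RECEIVE_QUOTA = 5000  # max in-flight receive requests per entity

def receive_layout_fits(receivers: int, pending_per_receiver: int) -> bool:
    """Check a receiver layout against both per-entity quotas."""
    return (receivers <= ENTITY_LINK_QUOTA and
            receivers * pending_per_receiver <= CONCURRENT_RECEIVE_QUOTA)

print(receive_layout_fits(100, 50))   # True: 100 receivers, 50 pending each
print(receive_layout_fits(1, 5000))   # True: one receiver, 5,000 pending calls
print(receive_layout_fits(100, 51))   # False: 5,100 exceeds the 5,000 quota
```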

But again, if devices have to receive messages from a queue, only 100 devices can be concurrently connected to that queue.

If each device has its own subscription, you will probably not hit a quota issue on the subscription itself, because one subscription usually serves one device. But if all devices are receiving messages concurrently, the limit of 5,000 applies at the topic level (across all subscriptions). Another quota can also matter here: the number of subscriptions per topic is limited to 2,000.

If your goal is, for example, to use fewer queues, HTTP/REST might be a better solution than SBMP or AMQP. If send operations are not executed frequently (not very chatty), you can use HTTP/REST. In this case the number of concurrent "connection links" statistically decreases, because HTTP does not rely on a permanent connection.

How about Windows RT and Windows Phone?

Please also note that Windows 8 messaging is implemented in the WindowsAzure.Messaging assembly, which uses HTTP/REST as its protocol. This is because RT devices are mobile devices, which are typically limited to HTTP on port 80. In this case Windows 8 will not establish a permanent connection to SB as described above. Instead, it activates HTTP polling if you use the message pump pattern (OnMessage instead of on-demand invocation of ReceiveAsync). That means no permanent connections to Service Bus are created, but network pressure remains due to the polling process, whose interval is currently set to two minutes: Windows RT sends a receive request to Service Bus and waits up to two minutes for a message. If no message is received within the timeout period, the request times out and a new one is sent. With this pattern, a Windows RT device is permanently in receive mode.

In an enterprise it can happen that many devices are polling for messages. If this becomes a problem because of a huge number of devices on a specific network segment, you can use a dedicated ReceiveAsync() call instead of OnMessage. The ReceiveAsync() operation connects on demand and, after receiving the response, simply closes the connection. In this way you can dramatically reduce the number of connections.

     

    About the author

Damir Dobric is co-founder, managing director and lead architect of DAENET Corporation, a long-term Microsoft Gold Certified Partner and leading technology integrator specializing in Microsoft technologies, with a strong focus on Windows, .NET and the web. With DAENET he won the worldwide Technology Innovation Award for 2005, the German innovation award for SOA in 2008 and Microsoft Partner of the Year 2010.

Working with Microsoft on helping customers adopt new technologies, he has well over 25 years of experience as a developer, architect, speaker and author. His focus is on the practical implementation of custom solutions targeting the integration of devices and services. Damir is a Windows Azure VTSP, a lecturer in software engineering at the University of Applied Sciences in Frankfurt am Main with a focus on distributed and cloud computing, and a member of the Connected Technology Advisory and Windows Azure Insiders groups. Visit his blog or follow him on Twitter.

     

    About MVP Monday

     

    The MVP Monday Series is created by Melissa Travers. In this series we work to provide readers with a guest post from an MVP every Monday. Melissa is a Community Program Manager, formerly known as MVP Lead, for Messaging and Collaboration (Exchange, Lync, Office 365 and SharePoint) and Microsoft Dynamics in the US. She began her career at Microsoft as an Exchange Support Engineer and has been working with the technical community in some capacity for almost a decade. In her spare time she enjoys going to the gym, shopping for handbags, watching period and fantasy dramas, and spending time with her children and miniature Dachshund. Melissa lives in North Carolina and works out of the Microsoft Charlotte office.

     

     



    Editor’s note: The following post was written by SharePoint MVP Nicki Borell

The Property Bag is a "store" within SharePoint that can be used to keep information and metadata. The Property Bag is a hierarchical structure that starts at farm level and goes all the way down to list level.  Microsoft itself uses the Property Bag to store configuration settings and information. For details, see the MSDN article: Managing SharePoint Configuration

For general information about the Property Bag, please refer to the MSDN documentation.

The question is what benefit the Property Bag offers us, also in the context of search. In general, we can use it to store variables or other metadata that belong to a site or list. Creating searchable properties in the property bag of a web, a site or a list allows us to search for them by the value of the property. This makes it easy to build search-driven sites that aggregate specific subsites or lists based on the value of such a searchable property. If you work with custom website templates, you can place a property in the Property Bag indicating the type of your custom template. On the other side, you can then use search to say: "Show me all sites where type is %whatever%".

We have several options for working with the Property Bag. In fact, we can use SharePoint Designer, PowerShell and of course custom code. In addition there is a CodePlex project named SharePoint Property Bag Settings, which provides a farm solution for working with the Property Bag through the SharePoint UI. Unfortunately, not all options give us the same possibilities.

Creating your own value in a Property Bag can be done with any of these options. But to create a property that is covered by search, we have to take care of some special details. For example, creating a property in a Property Bag using PowerShell works like this:

$WebUrl = "http://%Server%/sites/"

$web = Get-SPWeb $WebUrl

$web.Properties["MyNewProperty"] = "MyValue1"

     

    But to make the property covered by search we need to do the following steps:

$WebUrl = "http://%Server%/sites/"

$web = Get-SPWeb $WebUrl

$web.AllProperties["MyNewProperty"] = "MyValue1"

$web.IndexedPropertyKeys.Add("MyNewProperty")

$web.Update()

     

Doing this in custom code would look like this:

SPWeb web = SPContext.Current.Web;

web.AllowUnsafeUpdates = true;

web.AllProperties["MyNewProperty"] = "MyValue1";

web.IndexedPropertyKeys.Add("MyNewProperty");

web.Update();

web.AllowUnsafeUpdates = false;

     

Unfortunately, this cannot be done using CSOM. IndexedPropertyKeys are not part of the client object model and are also not available via REST. Thanks to my MVP colleague and company associate Thorsten Hans for helping me with this question.

There is also no way to set this up using SharePoint Designer or the CodePlex SharePoint Property Bag Settings solution. SharePoint Designer and the CodePlex solution are helpful for seeing which properties already exist, what values they have, and for changing those values.

This screenshot shows the Property Bag in SharePoint Designer. Clicking the "Site Options" button in the ribbon opens the dialog shown. In the dialog you can see a custom property I created, called "PowerShellProp":

     

     

    This screenshot shows the same information using the codeplex solution:

     

To get a list of the properties in a Property Bag, we can also use PowerShell:

$WebUrl = "http://%Server%/sites/"

$web = Get-SPWeb $WebUrl

$web.Properties | Format-List

To get a list of all properties covered by search, we need this call:

$WebUrl = "http://%Server%/sites/"

$web = Get-SPWeb $WebUrl

$web.IndexedPropertyKeys | Format-List

Before we can use the new property within search, we need to run a full crawl. After the full crawl, our search schema contains a new crawled property named after the property we created in the Property Bag. To verify that the new crawled property exists, browse the search schema as described in the MSDN article: View crawled properties and managed properties

To use the property in search queries, search-driven solutions or as a search refiner, we need to map it to a managed property. We can use an existing managed property or create a new one. The mapping is described in the MSDN article: Map a crawled property to a managed property

    After another full crawl the property can be used within search.

A hands-on lab demo is shown in a video, which you can view on the original post.

     

    About the author


    Nicki Borell (http://www.sharepointtalk.net/) is an evangelist & consultant in the Experts Inside team (www.expertsinside.com). He has worked for more than 13 years in Microsoft enterprise environments and also as a trainer and consultant for the SharePoint and SQL Server products. His expertise extends from technical consulting all the way to project management, with his core competencies covering KMU, enterprise environments and government data management. His special focus is on SharePoint Search Technologies. Nicki is Microsoft MVP for SharePoint, Microsoft Certified System Engineer (MCSE), Database Administrator (MCBA), IT Professional (MCITP) and Trainer (MCT). Follow him on Twitter.



Developer MVPs of the Year (11/26/13)

Microsoft’s Developer Division has created what has become a tradition over the last several years: recognizing exceptional contributors in the already exceptional MVP community. Technical Fellows Anders Hejlsberg and Brian Harry hosted this year's event, the Developer MVP Recognition Dinner, which celebrated 29 community leaders.
     

     
"It was an honour to be invited and have a chat with everyone there. Who wouldn't want to have dinner with Anders Hejlsberg?" remarks Windows Azure MVP Maarten Balliauw.

MVPs and Microsoft development team members had a lot to discuss with the recent release of Visual Studio 2013 and Windows 8.1.  "Visual Studio 2013 and especially Blend brought a lot of improvements to Windows Store app development," says Silverlight MVP Laurent Bugnion.

    "The biggest thing that stands out for me is the friendliness and openness of the Microsoft people that take time out of their busy schedules to come," said Client Development MVP Dan Wahlin.  
     
Recipients of this award are selected by MVP Award and product group team members, including Jason Zander, Scott Hunter and Scott Hanselman, and are also voted on by their peers. Their outstanding contributions to technical communities range from providing feedback to help improve Microsoft products to delivering solutions and answers in forums. They also provide content to the community at events, online and in print.
     
    When asked what he thinks of the dedication and passion of MVPs, Brian Harry shared the following story from the MVP Global Summit:
     
    "Visual Studio ALM MVP Marcel de Vries from the Netherlands had already been asked to speak at VSLive on ALM topics when he found out the MVP summit dates.  Rather than canceling, he asked the VS Live organizers to schedule all FOUR of his presentations on one day.  He took the red eye to Florida landing at 7am, presented his 5 (he picked up another) sessions and was back at the MVP Summit the following day."


S. Somasegar, Corporate Vice President of Microsoft’s Developer Division, recently discussed the importance of MVPs to Microsoft during an interview at the November 2013 MVP Summit: "The face-to-face communication [with MVPs] enables Microsoft to share what we are currently working on, share our plans, share our ideas and get feedback."  He went on to say, “I see immense value in our MVP community. These are our best and most passionate customers, those who take it upon themselves to learn about everything we’re doing, to provide invaluable feedback, and to then help the rest of the world discover and make the most of our technologies. I view our MVPs as ambassadors to the technologies and work that we do at Microsoft. MVPs end up being a trusted connection for the engineering teams, helping both to guide the direction of our products and to help other customers successfully utilize Microsoft platforms and tools. I personally take any opportunity I get to spend time with MVPs around the world and to hear what they have to say: I find their perceptions of what we’re doing, how we’re doing it, and how we stack against the competition to be quite illuminating and insightful. In my experience, MVPs are both vocal and honest in their feedback to us – both where we are doing well, and more importantly where we should be doing better. That is super valuable.”

    Congratulations to the following MVPs!

    Jeff Levinson

    ALM

    James Szubryt

    ALM

    Neno Loje

    ALM

    Nik Molnar

    ASP.NET

    Anthony van der Hoorn

    ASP.NET

    Unai Zorrilla Castro

    ASP.NET

    James Newton-King

    ASP.NET

    Maarten Balliauw

    Windows Azure

    Alan Smith

    Windows Azure

    Matthieu Mezil

    C#

    Bill Wagner

    C#

    Mitchel Sellers

    C#

    Justin Rusbatch

    C#

    Jason Follas

    C#

    Alon Fliess

    C++

    Brian Lagunas

    Client Dev

    Morten Nielsen

    Client Dev

    Laurent Bugnion

    Client Dev

    Dan Wahlin

    Client Dev

    Steve Syfuhs

    Dev Security

    Mathias Brandewinder

    F#

    Sam Vanhoutte

    MS Integration

Saravana Kumar

    MS Integration

    Dave Campbell

    Silverlight

    Klaus Löffelmann

    VB

    Deborah Kurata

    VB

    Jake Ginnivan

    VSTO

    Rudy Huyn

    Windows Phone Dev

    Ginny Caughey

    Windows Phone Dev

     
     
     
     


    Editor’s note: The following post was written by SharePoint MVP Alan Richards

Microsoft Visio has been around for a number of years and has grown as a product with each new version. However, sharing the diagrams produced with Visio has always been a problem. With the 2013 versions of Visio, SharePoint and Office 365 all that has changed: sharing is now as easy as uploading the diagram to a document library and telling users where it is.

Unfortunately we haven’t got the blog space to go through the process step by step, so the screenshots and explanations below show you what can be done with Visio Services. All the examples use a simple list in Office 365 (Figure 1).

     

    Figure 1 - Showing SharePoint list

    Connecting Data

The first step in the process is to create a Visio drawing with the data connected to it. Connecting data is fairly straightforward and is launched using the Data | Link Data To Shapes option on the ribbon; Figure 2 shows the first screen of the wizard. As you can see from Figure 2, Visio can connect to quite a few sources of data.

     

    Figure 2 - Showing first screen in data linking wizard

     

     

     

Once the data is connected it will appear at the bottom of the Visio screen, as shown in Figure 3.

     

    Figure 3- Showing data in Visio client

    Data Graphics

Visio uses shapes and data graphics added to the page to visualise the data. Figure 4 shows some of the data graphics that can be used in Visio; these include text callouts, data bars and colour by value, to name a few.

     

    Figure 4 - Showing some of the data graphics available

To add a data graphic to the page, simply drag a row of data onto the Visio page and you should see the standard text callout appear, as shown in Figure 5.

     

    Figure 5 - Showing standard text callout data graphic

I could just present all the data on the same page, which for this small amount of data wouldn’t be an issue. Figure 6 is an example of a simple data dashboard with all the data visualised at once.

     

    Figure 6 - Showing simple data dashboard

A bit boring though, and also of no use if you have lots of data to present. So let’s change it so that only the class is shown on the page; we can then create new pages with more data on them and use hyperlinks to jump between pages.

To edit the data graphics, right-click and select Data | Edit Data Graphic. Figure 7 shows the relevant screen, now with only the class field present.

     

    Figure 7 - Showing edit data graphic dialogue box

    Using Hyperlinks

One of the great features of Visio is hyperlinks. These can be used to jump between pages, making it appear to the user that they are simply drilling down into the data.

First, create a new page and add the data you require by dragging the row onto the page, then right-click on the data graphic and remove it. The reason for this is that if you were to edit it, you would also change the graphic on your original page.

Once you have removed the data graphic, right-click on the data box, select Data | Edit Data Graphic and add the required data. Figure 8 shows the data selected from the list for this page.

     

    Figure 8 - Showing data graphic properties

We now need to flip back to Page 1 and create the hyperlink. To do this, right-click on the data graphic and select Hyperlink.

Figure 9 shows the dialogue box you then use to create the hyperlink, in this case to Page-2. It could of course be an internet link, or a link to any other page in what could be a very large multi-page Visio drawing.

     

    Figure 9 - Showing hyperlink settings

    Using Visio Services

All of that was great, but up until now we have only been using Visio 2013. Visio Services comes into play when we upload the Visio drawing to SharePoint, in this case Office 365. Figure 10 shows the file saved to a document library in Office 365.

     

    Figure 10 - Showing uploaded Visio drawing

    Clicking on the Class Size drawing will open it up within Office 365 and allow you to zoom in and out and scroll around the drawing; and best of all, the user doesn’t need the Visio client to interact with the drawing. Figure 11 shows the drawing opened in Office 365.

     

    Figure 11 - Showing Visio drawing open in Office 365

     

     

    So what about the hyperlink, I hear you ask? Well, the hyperlink works as any hyperlink would in a browser: clicking on the graphic for 10AR will take you to the page shown in Figure 12.

     

    Figure 12 - Showing additional data page

    Comments

    As you can see from Figure 12 there is a Comments tab. Clicking this allows the user to add comments to the page, and as more users add comments the drawing becomes more of a collaboration.

    Users obviously can’t change the data in the drawing but if they have access to the data source, they can change the data there and that will immediately be reflected in the Visio drawing.

    In Conclusion

    So what I have shown you today is a very simple Visio drawing with a simple hyperlink, giving the user the impression of drilling down into the data.

    This of course is only the tip of the iceberg when you start to use the Visio client and Visio Services. The data graphics shown are just simple boxes; as well as being able to use the shapes we are all familiar with in Visio, you can also, with a little more work, use images taken from the Internet or your own images. Another great feature is the ShapeSheet. Using this you can change the shape, colour or size of shapes based on the source data; the ShapeSheet is just like an Excel spreadsheet, except that it controls the features of a Visio shape.
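    As a small sketch of the kind of thing the ShapeSheet makes possible (the `Prop.ClassSize` field name here is a hypothetical example, assuming the class-size value has been linked to the shape as a shape-data row), a formula in a shape’s FillForegnd cell could turn the shape red whenever a class goes over thirty pupils:

    ```
    FillForegnd = IF(Prop.ClassSize > 30, RGB(255,0,0), RGB(0,200,0))
    ```

    Because the formula references the data field rather than a fixed value, the colour updates automatically whenever the linked data changes.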

    Images are only a small element; there is also the interactive nature of the product. Now that users can add comments to drawings, it becomes more collaborative than was ever possible with older versions of Visio, and the best thing about all this is that it can all be shared with users without them ever needing access to the Visio client.

    Unfortunately there just isn’t enough blog space to detail all of the great new features of the Visio client and Visio Services, but I am sure more blog posts will appear on both my blog and in the wider community.

    About the author


    Alan Richards has been working in the IT industry for over 17 years and during that time has been at the forefront of using IT. He has led teams that have been among the first to roll out Windows, Exchange and SharePoint; many of these successes have been showcased in Microsoft case studies. Recently Alan's work has concentrated on SharePoint & Office 365 and the implementation of these technologies in organisations to enhance business processes and efficiency. Alan is currently working as Senior Consultant at Foundation SP, a SharePoint & Office 365 strategy and delivery company that was recently awarded Microsoft Partner of the Year for Portals & Collaboration. Alan is also a regular speaker and blogger and has been a SharePoint MVP since July 2011. Follow him on Twitter.


    About MVP Monday

    The MVP Monday Series is created by Melissa Travers. In this series we work to provide readers with a guest post from an MVP every Monday. Melissa is a Community Program Manager, formerly known as MVP Lead, for Messaging and Collaboration (Exchange, Lync, Office 365 and SharePoint) and Microsoft Dynamics in the US. She began her career at Microsoft as an Exchange Support Engineer and has been working with the technical community in some capacity for almost a decade. In her spare time she enjoys going to the gym, shopping for handbags, watching period and fantasy dramas, and spending time with her children and miniature Dachshund. Melissa lives in North Carolina and works out of the Microsoft Charlotte office.

     



    What better way to find out what happens each year at the MVP Global Summit than from MVPs themselves?

    The following is a list, in no particular order, of blog posts written by MVPs about the November 2013 MVP Summit. 

    MVP Summit Nov 2013 Recap - Windows Expert-Consumer  MVP Vasu Jain

    MVP Global Summit Wrap-Up - Exchange Server MVP Michel de Rooij

    MVP Summit 2013 – November - System Center Cloud and Datacenter Management MVP Daniel Neumann

    My First MVP Summit - ASP.NET/IIS MVP Terri Donahue

    MVP Summit November 2013 - Client Development MVP Teresa Dee Burger

    MVP summit – A great week has come to an end - Dynamics AX MVP Andre Arnaud de Calavon 

     

    If you have a blog post recap of your experience at the November 2013 MVP Summit and would like to include it on this list, please email the link here.

     



    Editor’s note: In partnership with Microsoft Press, now celebrating their 30th year, MVPs have been contributing to an ongoing guest series on their official team blog. Today’s article, the 34th in the series, is from Visual C++ MVP Alon Fliess.

    Introduction to C++ 11 in Visual Studio 2013

    I just had a conversation with one of my colleagues. He told me “I have started looking at C++”. “I didn’t realize that it is such a productive language,” he added. You see, my colleague is a gifted C# developer, and he knew that C++ is an "old" language that one uses to program operating systems, drivers and high-performance algorithms, communicate with hardware devices, and make your life interesting but also complicated. My friend was born the year that Bjarne Stroustrup invented C with Classes, the first name that he gave to C++, at AT&T Bell Laboratories.

    For a C# developer, C++, in many cases, is the legacy code that you need to interop with. For me and many other veteran developers, C++ is one of the sharpest tools in our toolbox. As a Windows developer, I tend to choose the right tool for the job, be it C++ native code, C# with .NET or even JavaScript.

    Modern C++ is a very strong programming language. You can use it to develop almost everything, from low level OS code to mobile applications. You can develop cross-platform code and libraries without sacrificing the quality of your software and the performance of your application.

    If you used to develop with C++ and moved to other programming languages, it is time to look again at one of the most rapidly evolving languages today: C++.  Click here to continue reading the full article…
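    To give a flavour of what Alon means by "productive", here is a minimal sketch of a few C++11 features available in Visual Studio 2013 (the `sortedTotal` function and its sample data are invented for illustration): uniform initialization, lambdas, range-based for loops and `std::unique_ptr` for automatic cleanup.

    ```cpp
    #include <algorithm>
    #include <iostream>
    #include <memory>
    #include <vector>

    // Sort a copy of the input in descending order using a C++11 lambda,
    // then total it with a range-based for loop.
    int sortedTotal(std::vector<int> values) {
        std::sort(values.begin(), values.end(),
                  [](int a, int b) { return a > b; });  // lambda comparator
        int sum = 0;
        for (auto v : values)  // range-based for, type deduced with auto
            sum += v;
        return sum;
    }

    int main() {
        std::vector<int> values{3, 1, 4, 1, 5};  // uniform initialization
        // unique_ptr frees the vector automatically when it goes out of scope
        std::unique_ptr<std::vector<int>> copy(new std::vector<int>(values));
        std::cout << sortedTotal(*copy) << "\n";  // prints 14
    }
    ```

    None of this needs garbage collection or manual `delete` calls, which is a large part of why modern C++ feels so much more productive than the C++ of the 1990s.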

    About the author

    Alon Fliess is the Chief Architect & Founder of CodeValue. CodeValue is the home of software experts. We build software tools, foundations and products for the software industry. We offer mentoring, consulting and project development services. Alon got his BSc degree in electrical & computer engineering from the Technion, the Israel Institute of Technology. Alon is also recognized as a Microsoft Regional Director (MRD) and a C++ MVP. Alon is the co-author of the "Introducing Windows 7 for Developers" book and technical reviewer of other books. Alon is an active member of several Patterns & Practices councils, among them Project Hilo – A Windows Store Application in C++/CX and XAML and Prism for Windows 8. Read his blog, download his debugging extension for Visual Studio or follow him on Twitter.




    1. Dynamics CRM MVP Roohi Sheikh - Mobile Client for Dynamics CRM 2013
     
     
    3. Visual C# MVP Thomas Levesque - Uploading data with HttpClient using a "push" model
     
     
     
