The Microsoft MVP Award Program Blog

My Journey with the TEALS Program as a Microsoft MVP


Editor’s note: The following post was written by SharePoint MVP Chris Givens


Earlier this year, our Community Program Manager, Melissa Travers, sent out an email recruiting MVP volunteers for a Microsoft program called TEALS. As a long-time Microsoft Certified Trainer (MCT) whose son Igor is in high school, I decided it might be something worth exploring. The fact that it counts as a community contribution toward the MVP Award made it even more appealing, as the Microsoft MVP Award program is one of the best in the industry!

What is TEALS?

TEALS stands for Technology Education and Literacy in Schools.  It is a program managed by Microsoft through its Corporate Citizenship initiatives to generate more skilled computer science graduates and fill the gap in the computer science job market. (S. "Soma" Somasegar, Microsoft's Corporate Vice President of the Developer Division, is a big fan!) The program is targeted at high schools that do not have a current computer science program due to a lack of funds or of skilled staff to run one.  Through TEALS, Microsoft provides the instructors (individuals like me who volunteer our time to teach), curriculum structure, and training to support the course. A goal of the initiative is to train one of the school's teachers to be able to teach computer science down the road.

How does TEALS work and what do they teach?

TEALS works by sending volunteers who work in the computer science field into high schools to teach students the basics of programming.  As of today, there are two curricula available: an intro course based on UC Berkeley's The Beauty and Joy of Computing, and one based on "Building Java Programs: A Back to Basics Approach," 3rd Edition, by Stuart Reges and Marty Stepp.  I know what you are thinking: why Java rather than .NET, since this is a Microsoft-funded program?  The unfortunate thing is that most colleges use Java in their computer science programs (as was the case when I attended the University of Tulsa back in the day).  Java has been running on multiple devices and architectures for quite some time now, so it makes sense to use it.  Additionally, and this is the more important part, the AP Computer Science exams are all in Java.

However, I'm confident this will change moving forward, given all the great announcements the Microsoft .NET team made recently about changes to the C# language, making .NET Core open source, and further collaboration with the awesome folks from the Mono and Xamarin projects!  As I mentioned in a tweet a few weeks ago, I really do think Microsoft put Java on life support, but it will be a few years before the books and tests are rewritten using .NET.

Aside from the politics, both Java and .NET are object-oriented languages, so the concepts around variables, methods, recursion, algorithms, abstraction, and data structures all stay the same no matter which language is taught.

 

Sounds interesting, how do I get started?

The first step, before you even contact the TEALS group, is to determine whether you have the time to dedicate to the program.  In my case, our class is on a varying schedule.  Back in my high school days we had the same seven classes every day for a semester.  In some cases it still works like that, but in many schools I am finding that Monday-Wednesday-Friday and Tuesday-Thursday schedules have won out, so as to prepare students for college schedules.  In addition, some schools switch the days every week: one week you will be Mon-Wed-Fri, the next Tue-Thu.  For my class, we switch the days every week, but I got lucky and the course runs from 7:40am to 9:40am every morning.  As you can see, if you have a job that allows some flexibility, then TEALS is something to consider.

Ok, I have time, how do I get started?

I’m sure many of you reading this are going to have schedules that will allow you to participate in this amazing program, you just have to mentally make the decision!  If you are still reading, you must be really interested and that’s awesome!  The second step is to visit the TEALS website* and determine if there are any TEALS Partner Schools near you.  If not, you can start a program in your area and there are some opportunities for remote instruction.  If you would like to submit an application for your local school, then do it!  More details and an information recording can be found on the Schools page.

The first person I would suggest you talk to is Kevin Wang, or one of his helpful team members such as Vichi Jagannathan or Brigham Hall, via info@tealsk12.org.  Mr. Wang started the TEALS program, and although I have not yet had the opportunity to meet him in person, I know he's a very cool individual.  I don't think that, when he founded the program, he knew just how big the initiative would grow or how successful and impactful it would become, but I give him a ton of kudos for creating this amazing program!

Once you get hold of one of the TEALS team members, they can walk you through the process of signing up new schools.  This includes the requirements for the program and how you should approach presenting it.  After getting all the details, the next step is to meet with the principal or vice-principal of the proposed school and see if they would like to come on board.

Ok, I found/created a program, how do I sign up for TEALS?

Once you have found a program in your area, you will need to sign up with the TEALS group.  In most cases, you'll be placed on a teaching team that includes several other volunteers.  Worry not: with my busy schedule, I have found having more than two people to be a good thing in case something unexpected comes up.  And because life happens, you may find that you and your fellow TEALS volunteers are all out on a particular day.  That's OK too, as every TEALS course is paired with a local teacher who helps out in case none of the volunteers can make it to class.  This pairing is also helpful (and legally required, as most of us do not hold the respective state teaching certificates) because each school differs, and having someone who can guide you through its policies and procedures is a good thing.  In terms of the legal aspects, a high school classroom has many more rules than an adult higher-education or training center, as I'll point out below.

Do I need any type of experience or training?

In my case, I already had several years of experience in a classroom with adults as an MCT.  That said, TEALS would prefer that you have done some type of teaching in the past and have the skills necessary to teach the materials.  There are also resources designed to train you on how to approach teaching in a high school environment, with tips and tricks for delivering the material.  Although I did not attend any of these, I hear they are really good, especially for preparing you for the surprises you will run into in a high school classroom.

Legally speaking…

Now comes the legal part.  High schools in the United States take the safety and security of their students very seriously. Schools generally require a criminal background check, and some schools may even insist on a drug screening.  The exact procedure varies by school district, but you will probably be required to work with the high school’s approved contractor for those services.  For those of you who know me, luckily my doppelganger and I have never done anything really bad, so they have let me in the classroom!

Great, I made it through the interview and legal, what can I expect?

You can expect the unexpected!  For many of us, it has been many years since high school.  You will slowly remember how it felt to be back in school: from driving onto campus, to walking into the administration building to get your "Staff" parking pass, to the meeting with the principal (no, you're not in trouble this time), to walking around the buildings to find your classroom.  Oh, and not to mention getting your picture taken for your staff badge and the yearbook.  Yes, there are some unexpected perks, like getting to use the staff bathroom and not the student bathrooms!

The next set of memories includes walking into the classroom: the rows of chairs and computers on the desks.  Only, this classroom doesn't have old Apple green-screen terminals.  The warning bell rings and the students slowly file in and pick seats next to their friends.  Yep, the cliques and stereotypes are still there.  In my case, I had about 20 students to start off with.  The bell rings again and everyone quiets down.  My particular school system uses Blackboard, so we immediately start taking roll call and recording seating positions in the system.

Bueller?  Bueller?  Bueller?

It is refreshing to say that we have five female students in the classroom, and at this point in the school year, my top student in the class is one of them!  This really wasn't the case many years ago, but I'll be the first to say that the best managers and developers I have ever worked with in my life (pretty much the entire five years I spent at IBM) were all women.  Hopefully, you will see the same evolution in your classrooms.

How is a day broken up?

We break our days up into parts.  First, we start with a quiz on a reading assignment; we do the quiz to ensure the students understood the material.  For each student who answers a question, we throw out some candy.  I didn't realize how important the candy was until I forgot it one day.  After the quiz, we grade their homework from the last section's materials.  We do this by having students trade papers with their peers so they get to see other people's work.  After grading the homework, we go into technical news: what happened in the last couple of days that had to do with technology?  After the news, we move into the lecture.  The course book we use is fairly well defined in terms of sections, so we typically cover one section each day.  As you can imagine, just like adults, it is hard to keep their attention for any stretch longer than five minutes.  As an instructor, you learn techniques for keeping people engaged.  For high school students this is a bit tricky, but in the end it's the same as with anyone else: relate it to something they do every day.  "Healthy" arguments with parents is a good one.  Oh, and having a second or third person in the room to walk around and check that they are not playing games on the computer helps too!

How are the students?

One of the first things you will realize is that kids taking Advanced Placement (AP) classes tend to be very smart; way smarter than we were in high school.  They are taking Physics, Calculus, and other advanced topics such as Economics (micro and macro).  In most cases they are there because they want to be; however, some students are there because they have to be, having not taken enough electives.  The students who don't want to be there tend to be the ones who need the most help.  I made it a point to analyze all my students' grades and submitted homework to deduce why they were not doing well, and for those in the bottom half I have made it a goal to ensure they know I am there to help them.

What have I learned so far?

Lots of small things, but one major one.  The TEALS program sends out a helpful teaching resource guide that gets you up to speed on high school teaching techniques.  Most of it I had already learned from my many years as an MCT, but some of it was directly applicable to situations you might encounter in a high school classroom.  One such bit of guidance is an unspoken rule of teaching high school: students are not friends.  This was one of the most valuable things I learned and absorbed before walking into the classroom.  The moment students become your friends, they will take advantage of that friendship, and once other students see that you show some kind of favoritism, the classroom dynamics quickly fall apart.  I have had a few students try to play this card, but I gently let them know that everyone has the same deadlines, and if I do something for one person, I have to do it for everyone.

Why do TEALS?

I absolutely know, without a doubt, that some of the kids in my class are going to go on to do some very amazing things.  As a successful business owner, a college grad and once a high school student myself, I know how important it was for me growing up to have someone who inspired me and gave me the tools to be successful today.  I could never have made it here (writing this blog post) without a similar program in my high school called Technology Students Association or my high school TechEd teacher Mr. Kent Barton.  Every day I walk into that classroom, I know that I am helping to drive these brilliant minds to succeed and reach their full potential.

High schools in the US need MVPs, veteran consultants, and business owners like us to show students what they can achieve with the right support.  We can use our experience and knowledge to propel the next generation to places we have never been before.

Join TEALS and make a difference in the future of humanity!

*Note:  The TEALS website will be updated in mid-December with information pertaining to the 2015/2016 academic year

About the author

 

Chris Givens is the co-founder and CEO of Architecting Connected Systems (ACS).  ACS is a courseware development company that builds SharePoint courseware for Microsoft that is then distributed to training centers around the world.  ACS titles include top courses in SharePoint Development and Business Intelligence but also materials that span every aspect of SharePoint.  In addition to courseware development, Chris has led, as Sr. Architect, many large upgrades and installations of SharePoint for large companies such as eBay and General Atomics.  He is co-founder and President of the San Diego SharePoint User Group (SanSpug.org); originally from Oklahoma; is a Computer Science major from the University of Tulsa; and currently resides in San Diego with his lovely wife Lidiya.  Follow him on Twitter

About MVP Mondays

The MVP Monday Series is created by Melissa Travers. In this series we work to provide readers with a guest post from an MVP every Monday. Melissa is a Community Program Manager, formerly known as MVP Lead, for Messaging and Collaboration (Exchange, Lync, Office 365 and SharePoint) and Microsoft Dynamics in the US. She began her career at Microsoft as an Exchange Support Engineer and has been working with the technical community in some capacity for almost a decade. In her spare time she enjoys going to the gym, shopping for handbags, watching period and fantasy dramas, and spending time with her children and miniature Dachshund. Melissa lives in North Carolina and works out of the Microsoft Charlotte office.


First Ever U.S. Open Day Event - March 2015


 

We're pleased to announce the MVP Award Program will host a first-ever "US MVP Open Day" at the Microsoft office in Malvern, Pennsylvania. The event will be a great opportunity for MVPs to connect and engage with fellow MVPs and with local Microsoft stakeholders.

General Information and Location:

It will be a two-day event at the Microsoft Office in Malvern, Pennsylvania on Friday, March 27 and Saturday, March 28.  The agenda and overall schedule are currently under development; however, here is the general schedule: 

  • Day One (Friday, March 27), 9 a.m. to 10 p.m.: Breakfast, Activities and Sessions, Lunch, Activities and Sessions, Social Event

  • Day Two (Saturday, March 28), 9 a.m. to 5 p.m.: Breakfast, Activities and Sessions, Lunch, Activities and Sessions, Farewell

Preliminary Agenda:

MVPs and Microsoft stakeholders will deliver technical and non-technical sessions which are of interest to all MVPs across the business segments (Developer, Consumer and IT Pro).   We will also kick off the event with a Microsoft strategy presentation delivered by Microsoft stakeholders.  The final agenda will be published as soon as it is available and our goal is to finalize sessions and speakers by February 15, 2015.     

We will continue to update this post with additional information as it becomes available.  See you all in March, MVPs! 

 

Cord-Cutting with the Xbox One


 Editor’s note: The following post was written by SharePoint MVP Corey Roth 

In the last year or so I have proudly referred to myself as a "cord-cutter": I've dropped my cable/satellite subscription in favor of streaming services and an over-the-air antenna.  When it comes to streaming, the Xbox One of course excels, with services such as Xbox Video, Netflix, Hulu, Vudu, NBA TV, and Amazon.  How is that any different from the many streaming devices out there, such as Roku, Amazon Fire TV, Chromecast, and more?  From the standpoint of streaming alone, there really isn't much difference.  One thing that really sets the Xbox One apart, though, is its TV functionality via HDMI pass-through and OneGuide.  This lets you watch TV while still using other Xbox One apps at the same time with Snap mode, a great feature when you want to play a game but still keep an eye on the score.  I wanted to make use of it with over-the-air TV, but I found that there isn't a lot of information out there on it to date.

 

Setting up over-the-air TV

When I first looked at this feature, it was 100% clear that it was designed for those with a cable or satellite box.  That's not me.  When setting up the Xbox One, it is relatively easy to configure the device to control your cable or satellite box.  To do this, though, you will need either an Xbox with a Kinect sensor or an inexpensive IR transmitter cable.  I personally went with the IR transmitter cable because using a Kinect in my media room would be quite difficult, as I keep all of my electronics in a closet.  Shop around for these cables because they range in price quite a bit.

Using this IR transmitter, you simply run a cable from your Xbox One to the location of the IR receiver port on your various electronics.  Transmitters are made with multiple emitters so you can control your TV, tuner, and even your receiver all from one cable.

Now you might be wondering how to use the Xbox One with an over-the-air antenna when you aren't using a cable box.  The answer is to acquire a simple over-the-air tuner box.  These run between $30 and $50 USD and can usually be found at your local electronics retailer or on Amazon.  I personally went with the HomeWorx HW-150PVR and recommend it because it actually works with the Xbox One.  The box has an HDMI output which you can plug into the HDMI input port of your Xbox One.

My first over-the-air tuner box was not supported.  The Xbox One supports many brands, but be warned that some tuners come from brands you have never heard of, and the Xbox One simply doesn't support them all.  I've looked for a list of supported devices on the Internet, but I haven't found one to date.

When you find a tuner you like, plug an over-the-air antenna into its RF IN port.  I have a few indoor antennas, and I've had pretty good success with this RCA Amplified TV Antenna.  Where you live and various other factors will affect the number of channels you receive.  I happen to live within line of sight of a number of broadcast TV towers, so I can receive more than 60 channels, most of which you likely won't care about.  The important thing for me was to get the major networks so I could watch live events like football and basketball.

When you first launch OneGuide, it walks you through setting up the device with your electronics.  Part of the process is to find your TV, tuner, and audio receiver and test whether the Xbox One can control them.  If your Kinect can broadcast the IR signals to your devices, or if you have positioned your IR transmitter in the right place, the console should be able to successfully power them off, change the volume, and change the channel.  If it doesn't work, you likely don't have the transmitter or Kinect positioned correctly.

Watching over-the-air TV

Once everything is configured, you can tell the Xbox One that you are using over-the-air TV.  It does a pretty good job of pulling down all of the channels in your area.  However, you may find that some channels are missing or that it shows channels you can't receive.  The listing probably won't match up perfectly, but it will likely cover the core network channels, which are probably the ones you are looking for anyway.  When you select a channel through OneGuide, it sends the remote-control commands to choose that channel.  You can flip quickly through channels using the up and down buttons on the remote.  If you don't have a remote yet, don't worry: you can use your Xbox One controller.  Just press the X button to launch OneGuide and press the A button to select a channel.

 

Many of the over-the-air tuners also support plugging in USB flash drives and hard drives.  These can be used to add DVR-like functionality, allowing you to record shows and pause live TV.  If you select a show in OneGuide, it will even set up a recording for you, but your mileage may vary; the experience will not be as streamlined as with a cable or satellite DVR.

For those of you in Europe, you may have already heard about the Xbox One Digital TV Tuner.  This simple USB device takes the signal from an over-the-air antenna and allows you to use the storage on your Xbox One as a DVR.  I’d love to see this make its way to the United States.

I've cut the cord and I haven't looked back for a second.  It used to be hard being a cord-cutter, but it's getting easier every year.  Just recently, CBS announced an online service for watching live TV.  They don't have an Xbox One app yet, but I hope we see one soon, along with apps from other broadcasters.  Occasionally I wish I had a way to watch ESPN, but I have been able to get by without it so far.  As a recap, if you are a cord-cutter considering the Xbox One, remember:

  • You need a Kinect or an IR transmitter cable
  • You need a supported over-the-air HD tuner with HDMI output
  • You need an over-the-air antenna
  • Apps aren’t available for some streaming services yet

If you are thinking about cutting the cord and using your Xbox One, I hope this guide was useful.  Be sure to refer to Set up live TV with your Xbox One for more details on getting started.

Although Corey Roth has mentioned products in this article, they are not an endorsement by him or Microsoft.  They are simply products he has experience with and they have worked for him.  Your experience may vary.

 

About the author

 

Corey Roth is a SharePoint consultant specializing in solutions in the Oil & Gas Industry.  He is a four-time recipient of the Microsoft MVP award in SharePoint Server.  Corey has always focused on rapid adoption of new Microsoft technologies including SharePoint 2013, Office 365, and  Visual Studio 2013.  When it comes to SharePoint, he specializes in Office 365 deployments, Enterprise Search, and Apps.  As an active member of the SharePoint community, he often speaks at conferences, events, and user groups.   Corey has a blog at (www.dotnetmafia.com) where he posts about the latest technology and SharePoint and he develops Office Apps for his company SP2 (www.sp2apps.com).  Corey is a huge fan of the Surface line of products.  Follow him @coreyroth on Twitter.


Working as a Network - Yammer as a Distribution Group


Editor’s note: In partnership with Microsoft Press, MVPs have been contributing to an ongoing guest series on their official team blog. Today’s article is from Office 365 MVP Darrell Webster which is the 46th in the series.  

We have a number of communication tools available to us now via the different Microsoft platforms and services. Many of us belong to multiple email distribution groups and receive messages from them daily. Distribution groups are used for group communications, notifications, reporting and asking for help "to find those lost keys".

In this blog post, I take a look at Yammer, the Enterprise Social communication platform. I briefly discuss how communication in Yammer differs from email, and then introduce a few steps to help us get the best of both worlds, using Yammer via email as well as through conversation posts on the Yammer website. We can make Yammer behave like a distribution group.

Yammer can be most simply described as a private collection of places for conversation for your organisation. Conversations are in a forum format. The different places for conversation are called Yammer groups, and the collection of groups is called a Yammer network.

Conversations are most commonly started by visiting a Yammer group from your web browser and posting a conversation in the group’s conversation feed. It is very much like posting the start of a conversation or a question on a forum. People reply to the posts and to each other's replies.

Groups in a Yammer network can be Public or Private.

  • If a group is public:
    • Anyone in the Yammer network can join it.
    • Conversations and posts to the group are open to the rest of that Yammer network.
  • If a group is private:
    • Members need to be invited to join the group. Others can request to join, but won't be given access until a group admin grants membership to the group.
    • Conversations and posts are kept within the group. They are not visible to the rest of that Yammer network.

 

Email distribution groups that use Microsoft Exchange can also be configured as public or private; these are open and closed distribution groups. Most distribution groups are created as closed groups, where members can only be added by an admin. The lesser-used open distribution group allows people to discover the group in the Global Address List and join it themselves.

The main difference between a distribution group and a Yammer group is the discoverability of conversations.

  • Email distribution groups - You will only receive a conversation if you are a member of the group at the time the message is sent. Only members of the group receive messages. Depending on how long they have been a member and how they organise their email, members will have a history of conversations.
  • Yammer groups - Conversations are kept in one place, discoverable by all who have access to the group. Conversations can be discovered more easily because of the open nature of a forum. The Yammer newsfeed on the home page of the network will display new conversations or replies as they are posted. It's very easy to read something new and join the conversation. Conversations can be tagged with a topic, making them even easier to discover in a search.

Conversations in Yammer can be more easily discovered and joined than conversations sent via a Distribution List.

We can access Yammer via a web browser and the Yammer app for mobiles and tablets. But one very useful way to keep up with your group's conversations is through email notifications. If you have posted a reply to a conversation, you will receive an email when others post their responses. When you reply to the Yammer email notification, your reply is added to the conversation thread on the website.
The added benefit of Yammer email notifications is that you can access Yammer from any device you can email from.

Click here to continue reading full article

 

About the author

Darrell is a Training Specialist for ShareThePoint.com. His key interest is helping businesses and schools to understand how to effectively use Office 365 online services, to work from anywhere using mobility, device and cloud technologies. He focuses on getting the knowledge to the end user, so they can make the most of the tools they have available. Darrell says "with a name like Webster, I was born to work with online services."  Find more great Office 365 and Yammer articles on his blog or follow him on Twitter 


 

 

On the Road with my Surface Pro 3


Editor’s note: The following post was written by Windows Entertainment and Connected Home MVP Barb Bowman

I think I've been on a quest for the perfect travel laptop ever since I first owned a luggable Tandy FD 1400, a dual-floppy-disk, 12-pound "portable". Prior to the release of the Surface Pro 3, the lightest and best solution for me was a MacBook Air running Windows inside Parallels.  With the launch of the Surface Pro 3, everything changed, especially for a geek addicted to music, photography, and movies. For the first time ever, I'm no longer looking for a better computer; I've got one that does everything I need.

There really are a lot of things to love about Microsoft's Surface Pro 3. It starts with the impossibly thin and light construction and gorgeous screen, adds real computing power for GPU- and processor-intensive apps like Photoshop and Lightroom, and continues through an awesome multimedia experience that is easily extended to big-screen TVs using a Miracast dongle like the Wireless Display Adapter from Microsoft. Obviously, as a powerful x86 computer, the SP3 runs both Windows Store apps and installable software such as Office 365, Corel Painter, and Adobe Premiere Elements. I've got tablet-style entertainment and information apps (including quite a few that I wrote) for casual use, too.

Over the Thanksgiving 2014 holiday, a bad winter storm knocked out power here, and the local electric company estimated three days for restoration. So, with bitter cold and no heat, I packed up what I needed, including my "computing to go" kit: my SP3, my digital camera, my Microsoft Wireless Display Adapter, an external Western Digital USB 3.0 1TB drive, an extension cord, USB-to-AC power adapters for various peripherals, and my Parrot Zik wireless headphones. I then took a short trip to a nearby Marriott Courtyard that had not lost power.

My Surface Pro 3 proved once again to be the key to a great “you can take it with you” experience.

Great Entertainment Experience with a Surface Pro 3 and Microsoft Wireless Display Adapter

I hate the entertainment choices on most hotel TVs; many hotels have newer HD TVs yet only offer standard-definition TV with the occasional high-def HBO thrown in. The Marriott Courtyards have 32-inch flat-screen HD TVs, and my first challenge was to connect my Microsoft Wireless Display Adapter to the TV's HDMI port and configure the TV to use that input. Why was this a challenge? The Courtyard TVs use Lodgenet, and the standard TV remotes don't have an easy function to switch inputs to anything other than the Lodgenet ANT IN connection.

Physically connecting to an HDMI port was easy. I plugged my Microsoft WDA right in. I needed external power for the WDA, but since I travel with USB to AC adapters and an extension cord, I was all set. You just never know when an extension cord will come in handy.

 

The hotel television had HDMI connections, but no USB for power.

Switching inputs was not so easy. There were no physical buttons on the TV itself, and the TV remote was a special hotel-style remote with limited functionality.  But for fellow travelers scratching their heads over how to switch to an HDMI input, here's the process that worked for me.

Press the Function button on the remote.

 

Switch Inputs with the “Function” button if there is no source/input button

If you're lucky, you'll get to the menu that lets you switch to HDMI 1 by selecting Source and then selecting the proper input.

 

Buried/hidden menu to select a different TV input

Once HDMI 1 was selected and active, the WDA (already paired with my SP3, and plugged in for several minutes by then) was ready to go, and I saw the familiar "Ready To Connect" screen.

 

The Microsoft Wireless Display Adapter is Ready to Connect

 

On my Surface Pro 3, I selected Charms, Devices, Project. Since my Microsoft WDA was already paired, the device was listed immediately; I selected it and connected right away. My Start Screen was displayed.

 

Success! My Surface is mirrored using Miracast!

I've stored a few of my favorite movies on an SDXC card that I leave in the SP3's slot. I use CyberLink Power Media Player to play my movies because it supports the MKV file format. (Note that native MKV support is coming in Windows 10, along with FLAC audio support, so down the road you won't need third-party apps to enjoy these formats.)  Any full-HD (1080p) movie looks great on the Surface Pro 3's screen, albeit with black bars top and bottom to compensate for the SP3's native 3:2 aspect ratio. Streaming to a 1080p/16:9 HD TV removes the letterboxing, and it's a visually super experience.

 

One of my favorite movies is Avatar. And it looked fantastic in full HD on the Marriott Courtyard’s 32 inch HD screen using Miracast for streaming.

 

Watching Avatar on the hotel television with Miracast streaming

Once the movie started playing, I again accessed Charms, Project, and selected "Second screen only", which keeps my SP3 nice and cool. On this particular night, I watched two movies in a row, streaming from my SP3 to the Microsoft WDA with no glitching or disconnects.

 

I watch movies using the Second screen only setting

Surface Pro 3 and Bluetooth Headphones Trump Tinny Sounding TV Speakers

When a hotel television's built-in audio is less than optimal (and when listening to music on my SP3 as well), I connect my Bluetooth Parrot Zik noise-cancelling headphones. By default, once a Miracast connection is made, both video and audio are sent to the Microsoft WDA. I wanted to use the TV for the video display and my Bluetooth headphones for the audio. This is an easy task: from the SP3, I open the classic Control Panel, then Sound, and configure the headphones as the default playback device. I'm sure the guests in the next room appreciated this later at night when they were trying to sleep.

 

It’s easy to change the playback device and customize your entertainment experience

My Surface Pro 3 as a Portable Photo Studio

I've been taking pictures for over 40 years (see some of my work at http://barbbowman.net and http://photos.digitalmediaphile.com). The Surface Pro 3 makes a great on-the-go image processing computer. All I need is the SP3, Photoshop 5, Lightroom 5, the SP3 pen, my external WD USB 3.0 1TB drive, and an SD card adapter to import images (I use an Anker USB 3.0 card reader).

Lightroom 5 "just works" on my Surface Pro 3, and I can use the SP3 pen for all functions. The menus and icons scale properly (see the composite screen capture below), and it's a great experience to use my SP3 in tablet mode to make adjustments in Lightroom. Before I left for the Marriott Courtyard after the power went out, I grabbed a few shots of the heavily snow- and ice-laden trees on our property, which I processed once I was settled in at the hotel.

 

Lightroom 5 performs beautifully on my Surface Pro 3

I attach a WD USB 3.0 1 TB external hard drive when on the road when I have a large number of images to import. When I return home after a photo editing session, I copy the exported photos to a NAS drive on my home network and also copy the “keepers” to OneDrive.

I use Photoshop 5 (and not the newest Creative Cloud version) because the older version meets my needs as I do most of my adjustments in Lightroom.

One of the biggest complaints about using Photoshop 5 on the Surface Pro 3 is the size of the icons and menu items. I’ve solved that problem as you may be able to see in the screen capture below.

 

Photoshop 5 on my Surface Pro 3

You can read about how I solved the icon and menu size problem on my blog.

Using Photoshop, I usually prepare a smaller, resized image for my Photo Blog and occasionally make other adjustments. When I’m finished, if the resized image is for my Photo Blog, I’ll prepare a post in Windows Live Writer, drag the image into the post and publish it.

My Surface Pro 3 Connects my World

I’ve got a slew of Insteon devices and security cameras, and when I’m away from home, barring power outages, I can use the Windows Store Insteon App to set schedules, turn lights on and off and look at the real time output from my cameras any time I’m connected to the Internet. And of course I receive email alerts from the Insteon devices that I’ve configured for that functionality.

Whether I’m on an overnight or longer trip or just out for the day, my Surface Pro 3 has become an extension of my digital and entertainment lifestyle. With all of the ways to wirelessly connect devices, it’s a streaming media control center and so much more. It’s the perfect travelling computer for a Windows Entertainment and Connected Home MVP like me.

 

About the author

Barb Bowman has been a teacher of Philosophy 101, worked in ski area design in Aspen, Colorado, and served as a production director for U.S. and U.K. wallpaper sample photography. In 1990, she partnered with a French DOS shareware writer and entered the brave new world of online services. Over the next several years, as a consultant for Prodigy Services, Barb wrote user documentation and conducted testing for Prodigy's first Windows applications. She was part of the Internet product development team for MediaOne and later AT&T Broadband and Comcast High Speed Internet, and was a Microsoft Expert Zone and Vista Community Columnist. She is a Windows Entertainment and Connected Home MVP whose interests cover the connected wireless home, wireless streaming and Miracast, and just about anything that can be connected to an 802.11/a/g/n/ac wireless network. Barb relaxes by shooting naturescapes with her digital camera and playing guitar and piano. She lives in New Hampshire with her Sony robot Aibo dog named Toto. She's written over 15 apps for the Modern UI Windows 8.1 experience, and you can find her apps in the Windows Store by searching for Digitalmediaphile. Barb's technical blog can be found at http://www.digitalmediaphile.com/ and her photo blog resides at http://photos.digitalmediaphile.com. Follow Barb on Twitter.


Dynamics CRM 2015 Calculated Fields


Editor’s note: In partnership with Microsoft Press, MVPs have been contributing to an ongoing guest series on their official team blog. Today’s article is from Dynamics CRM MVP Leon Tribe which is the 47th in the series.  


CRM 2015 is now out and available. If you are interested in seeing it, get a 30 day trial as these are now provisioned as CRM 2015 by default.

Obviously there are a lot of new features in the new version, but one that particularly excites me is calculated fields. This is a feature common to many CRM systems but, until CRM 2015, the only way we could emulate the behaviour in Dynamics CRM was through script on the form (or possibly plugins and workflows).

CRM 2015 changes the game in this regard. Fields can now be added to CRM and set to calculate automatically using Excel-like formulae. Given the potential of this, I thought I would walk through setting up the fields and the functions available.

How to Set Up Calculated Fields

To set up a new calculated field, you create the field as normal, and you will see a Field Type drop-down. Changing this from Simple to Calculated makes the field a calculated field.

 

To define how the field is populated, you click the Edit button and set the field up, in a similar way to how Business Rules are set up in CRM 2013/2015. 

 

 

In this example, my new field gets populated with a concatenated version of the address. For the Action formula, the only fields you can access are those on the same entity; you cannot use fields on related entities.  Click here to continue reading full article

 

About the author

 

 

Leon helps others make the world a better place by using technology to make them more efficient and effective. The technology he employs is Dynamics CRM. He has worked with CRM systems since the late 1990s and now works as a Principal Consultant for Oakton in Sydney, Australia. An engaging public speaker, highlights include presenting at Microsoft’s Headquarters at Redmond and co-presenting at Microsoft’s offices in Sydney to hundreds of not-for-profit representatives. In recognition of his passion and focus on Dynamics CRM, Leon was awarded Microsoft’s Most Valuable Professional (MVP award) in 2009.  Follow Leon on Twitter

 


 

Share and Collaborate with Your Team Using Office 365


Editor’s note: The following post was written by Windows Expert Consumer MVP Rayda D 

 

Introduction:

A great product plays a big role in building strong team involvement in a business.  Increase your team's collaboration by sharing Office applications and tools (e.g. OneDrive, Lync, Exchange, and SharePoint), and review and/or edit files at the same time.  Work with your team with fewer complications while staying productive. Take it with you, multi-task anywhere, even on the road, and stay current with your team's progress toward the goal at hand.

 

What does it take to be a good team player?  A good team player uses a great tool to share and collaborate with others.

Share:

Whether you are on the road or sitting in front of your desk, you can share, sync, and stay current with your work files.  It is important in a team to keep everybody informed and to keep track of changes to important documents. Your teammates can work at the same time as you, online or offline, using any mobile device, whether Windows, Apple, or Android.

 

Brainstorm with your team and use Office 365 applications to edit a Word document, Excel spreadsheet, or PowerPoint presentation. Exchange your thoughts and converse through email or online meetings.

 

 

I work on my files remotely through OneDrive any time I wish to access them.  It's a convenience for me, particularly when I have to share these files with my team or a co-worker and give them the right permission to edit or read/view the document.

 

 

Collaborate:

 

Get organized while managing your daily tasks, and work together with secure access for group engagement.

 

 

For me, working, planning, and projecting together with a great tool is important. I need it to connect with my colleagues via web conferencing, instant messaging, Exchange Online, and Microsoft Lync, and to collaborate further with SharePoint through workspaces, group work sites, project tasks, and a lot more. Urgent meetings come up unexpectedly, and I definitely don't want to miss important appointments with my team.

 

Summary:

Office 365 is a must-have product for improving your team's effectiveness. As I have mentioned, an effective workflow for planning and projecting is the key to successful collaboration with your team. Office 365 is an advanced product solution that will make your life easier, without worrying about missing a deadline or about whether your files are secure.

Work with fewer complications with your team and ensure that everyone can be productive. Office 365 has it all for your business needs and is compatible with devices across platforms.

 

As an IT professional, manager, and consultant for small to medium-sized businesses, I use and administer Office 365 (and love the integration with any device) for workgroup and domain network infrastructures. It's truly an honor to share my knowledge through this article. Thank you!

 

About the author

Rayda (or Ray) is a hardworking person and very energetic. She has a full-time job as an IT/Telecommunications Manager in L.A. County, where she has worked for an environmental remediation company for 16 years. Whenever she finds time, she visits, contributes, and devotes her time to a few forums: Tech Support Forums (her home forum), as well as Sysnative, TechNet, and Seven/Eight Forums, and she writes blogs/articles/tutorials on SMBs and Microsoft products - http://2xgjeuelsdome.wordpress.com/ and http://jeuelsdomeblog.wordpress.com/. These community contributions are very rewarding. Ray is a Windows Expert Consumer MVP; although she's in the Windows Expert Consumer category, she is also an IT professional who deals with a variety of Microsoft consumer products, both software and hardware, for personal and business use. She is keen on learning and discovering new technologies to help advance her knowledge and share it with others.

When she's not busy doing online community activities, she plays with her Pekingese dog Poki, eats good food, and travels a lot. If you need to find out more about Ray, please read her full interview here: http://www.windowstalk.org/2014/06/rayda-mvp-interview/#comments by James M. Fischer. Follow Ray on Twitter: @Ray2xg

 


 

 

SQL Server 2014 DML Triggers: Tips & Tricks from the Field


 Editor’s note: The following post was written by SQL Server MVP Sergio Govoni


SQL Server 2014 DML Triggers are often a point of contention between Developers and DBAs, between those who customize a database application and those who provide it. They are often the first database objects investigated when performance degrades. They seem easy to write, but writing an efficient Trigger is not; even so, Triggers have a very important characteristic: they allow you to solve problems that cannot be managed in any other application layer. If you cannot work without them, in this article you will learn tricks and best practices for writing and managing them efficiently.

All examples in this article are based on the AdventureWorks2014 database, which you can download from the CodePlex website.

Introduction

A Trigger is a special type of stored procedure: it is not called directly, but is activated on a certain event, with special rights that allow you to access the incoming and outgoing data stored in special virtual tables called Inserted and Deleted. Triggers have existed in SQL Server since version 1.0, even before CHECK constraints. They always work in the same unit of work as the T-SQL statement that called them. There are different types of Triggers: Logon Triggers, DDL Triggers, and DML Triggers; the best-known and most used type is the Data Manipulation Language Trigger, also known as the DML Trigger. This article treats only aspects related to DML Triggers.

There are several options that modify Triggers' run-time behavior:

  1. Nested Triggers
  2. Disallowed results from Triggers
  3. Server Trigger recursion
  4. Recursive Triggers

Each of these options has, of course, a default value that reflects best practices for Trigger development. The first three are server-level options whose values you can change using the sp_configure system stored procedure, whereas the last one can be set at the database level.
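For reference, here is a minimal sketch of how these options can be changed, using the documented option names (the values shown are the defaults):

USE [master];
GO

-- Server-level options (sp_configure); the values shown are the defaults
EXEC sp_configure 'show advanced options', 1;           -- required to surface 'disallow results from triggers'
RECONFIGURE;
EXEC sp_configure 'nested triggers', 1;                 -- 1 = Triggers may fire other Triggers
EXEC sp_configure 'disallow results from triggers', 0;  -- 0 = Triggers may return result sets
EXEC sp_configure 'server trigger recursion', 1;        -- 1 = server-scoped Trigger recursion allowed
RECONFIGURE;
GO

-- Database-level option (Recursive Triggers); OFF is the default
ALTER DATABASE [AdventureWorks2014] SET RECURSIVE_TRIGGERS OFF;
GO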

Are Triggers useful or damaging?

What do you think about Triggers? In your opinion, based on your experience, are they useful or damaging?

You will meet people who say "Triggers are absolutely useful" and other people who say the opposite. Who is right? Reading the two bulleted lists below, you will find the main reasons behind the two different theories about Triggers.

People say that Triggers are useful because with them:

  • You can develop customized business logic without changing the user front-end or the application code
  • You can develop an Auditing or Logging mechanism that could not be managed so efficiently in any other application layer

People say that Triggers are damaging because:

  • They can execute very complex pieces of code silently
  • They can degrade performance very much
  • Issues in Triggers are difficult to diagnose

As usual, the truth is in the middle. I think that Triggers are a very useful tool that you should use when there is no other way to implement a database solution as efficiently as a Trigger can, but you have to test them very well before deployment to a production environment.

Triggers activation order

SQL Server has no limitation on the number of Triggers that you can define on a table, but you cannot create more than 2,147,483,647 objects per database; the total number of Tables, Views, Stored Procedures, User-Defined Functions, Triggers, Rules, Defaults, and Constraints must be lower than or equal to this number (the maximum value representable by the integer data type). A quick way to check the current count is sketched below.
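As a small illustration (a sketch; sys.objects lists the schema-scoped objects counted against that limit), you can see how many such objects a database already contains:

USE [AdventureWorks2014];
GO

-- Count schema-scoped objects (tables, views, procedures, triggers, constraints, ...)
SELECT COUNT(*) AS ObjectCount
FROM sys.objects;
GO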

Now, suppose we have a table with multiple Triggers, all of them ready to fire on the same statement type, for example on the INSERT statement. Have you ever asked yourself what the exact activation order for those Triggers is? In other words, is it possible to guarantee a particular activation order?

The Production.Product table in the AdventureWorks2014 database has no Triggers by design. Let's now create three DML Triggers on this table, all of them active for the same statement type: the INSERT statement. The goal of these Triggers is to print an output message that allows us to observe the exact activation order. The following piece of T-SQL code creates three sample DML AFTER INSERT Triggers on the Production.Product table.

USE [AdventureWorks2014];
GO

-- Create Triggers on Production.Product
CREATE TRIGGER Production.TR_Product_INS_1 ON Production.Product AFTER INSERT
AS
  PRINT 'Message from TR_Product_INS_1';
GO

CREATE TRIGGER Production.TR_Product_INS_2 ON Production.Product AFTER INSERT
AS
  PRINT 'Message from TR_Product_INS_2';
GO

CREATE TRIGGER Production.TR_Product_INS_3 ON Production.Product AFTER INSERT
AS
  PRINT 'Message from TR_Product_INS_3';
GO

 

Let's see all Triggers defined on the Production.Product table; to do so, we will use the sp_helptrigger system stored procedure, as shown in the following piece of T-SQL code.

USE [AdventureWorks2014];
GO

EXEC sp_helptrigger 'Production.Product';
GO

 

The output is shown in the following picture.

 

Picture 1 – All Triggers defined on Production.Product table

 

Now the question is: what will the activation order for these three Triggers be? We can answer this question by executing the following INSERT statement on the Production.Product table; when we execute it, all the DML INSERT Triggers fire.

USE [AdventureWorks2014];
GO

INSERT INTO Production.Product
(
  Name, ProductNumber, MakeFlag, FinishedGoodsFlag, SafetyStockLevel,
  ReorderPoint, StandardCost, ListPrice, DaysToManufacture, SellStartDate,
  RowGUID, ModifiedDate
)
VALUES
(
  N'CityBike', N'CB-5381', 0, 0, 1000, 750, 0.0000, 0.0000, 0, GETDATE(),
  NEWID(), GETDATE()
);
GO

 

The output returned shows the default Triggers activation order.

Message from TR_Product_INS_1
Message from TR_Product_INS_2
Message from TR_Product_INS_3

 

As you can see in this example, the Triggers' activation order coincides with their creation order, but by design the activation order is undefined.

If you want to guarantee a particular activation order, you have to use the sp_settriggerorder system stored procedure, which allows you to set which Trigger fires first and which fires last. This configuration can be applied to the Triggers of each statement type (INSERT/UPDATE/DELETE). The following piece of code uses the sp_settriggerorder system stored procedure to set the Production.TR_Product_INS_3 Trigger as the first one to fire when an INSERT statement is executed on the Production.Product table.

USE [AdventureWorks2014];
GO

EXEC sp_settriggerorder
  @triggername = 'Production.TR_Product_INS_3'
  ,@order = 'First'
  ,@stmttype = 'INSERT';
GO

 

In the same way, you can set the last Trigger to fire.

USE [AdventureWorks2014];
GO

EXEC sp_settriggerorder
  @triggername = 'Production.TR_Product_INS_2'
  ,@order = 'Last'
  ,@stmttype = 'INSERT';
GO

 

Let's see the new Triggers' activation order by executing another INSERT statement on the Production.Product table.

USE [AdventureWorks2014];
GO

INSERT INTO Production.Product
(
  Name, ProductNumber, MakeFlag, FinishedGoodsFlag, SafetyStockLevel,
  ReorderPoint, StandardCost, ListPrice, DaysToManufacture, SellStartDate,
  RowGUID, ModifiedDate
)
VALUES
(
  N'CityBike Pro', N'CB-5382', 0, 0, 1000, 750, 0.0000, 0.0000, 0, GETDATE(),
  NEWID(), GETDATE()
);
GO

 

The returned output shows our customized Triggers activation order.

Message from TR_Product_INS_3
Message from TR_Product_INS_1
Message from TR_Product_INS_2

 

In this section you have learned how to set the first and the last Trigger to fire in a scenario with multiple DML AFTER INSERT Triggers. Probably one question has come to mind: "May I set only the first and the last Trigger?" The answer is: "Yes, currently you can set only the first Trigger and the last Trigger for each statement on a single table." As a DBA friend of mine says: "You can set the activation of only the first and the last Trigger because you should have at most three Triggers per statement on a single table! The sp_settriggerorder system stored procedure allows you to set which Trigger fires first and which fires last, so the third one will be in the middle, between the first and the last."

Triggers must be designed to work on multiple rows

One of the most frequent mistakes I have seen in my experience debugging and tuning Triggers is this: the author of the Trigger doesn’t consider that, sooner or later, the Trigger will have to work on multiple rows! I have seen many Triggers, especially ones that implement domain integrity constraints, that were not designed to work on multiple rows. In certain cases this mistake results in incorrect data being stored (an example follows).

Suppose that you have to develop a DML AFTER INSERT Trigger to prevent values lower than 10 from being stored in the SafetyStockLevel column of the Production.Product table in the AdventureWorks2014 database. This customized business logic may be required to guarantee that your company suffers no production downtime when a supplier delivers late.

The following piece of T-SQL code shows the CREATE statement for the Production.TR_Product_StockLevel Trigger.

USE [AdventureWorks2014];

GO

 

CREATE TRIGGER Production.TR_Product_StockLevel ON Production.Product

AFTER INSERT AS

BEGIN

  /*

    Avoid inserting products with a safety stock level lower than 10

  */

  BEGIN TRY

    DECLARE

      @SafetyStockLevel SMALLINT;

 

    SELECT

      @SafetyStockLevel = SafetyStockLevel

    FROM

      inserted;

 

    IF (@SafetyStockLevel < 10)

      THROW 50000, N'Safety Stock Level cannot be lower than 10!', 1;

  END TRY

  BEGIN CATCH

    IF (@@TRANCOUNT > 0)

      ROLLBACK;

    THROW; -- Re-Throw

  END CATCH;

END;

GO

 

A very good habit, before applying Triggers and changes in general to the production environment, is to spend time testing the Trigger code, especially against borderline cases and values. In this example you have to test whether the Trigger is able to reject every INSERT statement that tries to store values lower than 10 in the SafetyStockLevel column of the Production.Product table. The first test you can do, for example, is to try to insert one wrong value and observe the error caught by the Trigger. The following statement tries to insert a product with a SafetyStockLevel lower than 10.

USE [AdventureWorks2014];

GO

 

-- Test one: Try to insert one wrong product

INSERT INTO Production.Product

(Name, ProductNumber, MakeFlag, FinishedGoodsFlag, SafetyStockLevel,

 ReorderPoint, StandardCost, ListPrice, DaysToManufacture,

 SellStartDate, rowguid, ModifiedDate)

VALUES

(N'Carbon Bar 1', N'CB-0001', 0, 0, 3 /* SafetyStockLevel */,

 750, 0.0000, 78.0000, 0, GETDATE(), NEWID(), GETDATE());

 

As you would expect, SQL Server rejected the INSERT statement because the value assigned to SafetyStockLevel is lower than 10, and the Production.TR_Product_StockLevel Trigger blocked the statement. The output shows that the Trigger worked well.

Msg 50000, Level 16, State 1, Procedure TR_Product_StockLevel, Line 17

Safety Stock Level cannot be lower than 10!

 

Now you have to test the Trigger against statements that insert multiple rows. The following statement tries to insert two products: the first has a wrong value for the SafetyStockLevel column, whereas the second one has a valid value. Let’s see what happens.

USE [AdventureWorks2014];

GO

 

-- Test two: Try to insert two products

INSERT INTO Production.Product

(Name, ProductNumber, MakeFlag, FinishedGoodsFlag, SafetyStockLevel,

 ReorderPoint, StandardCost, ListPrice, DaysToManufacture,

 SellStartDate, rowguid, ModifiedDate)

VALUES

(N'Carbon Bar 2', N'CB-0002', 0, 0, 4  /* SafetyStockLevel */,

 750, 0.0000, 78.0000, 0, GETDATE(), NEWID(), GETDATE()),

(N'Carbon Bar 3', N'CB-0003', 0, 0, 15 /* SafetyStockLevel */,

 750, 0.0000, 78.0000, 0, GETDATE(), NEWID(), GETDATE());

GO

 

The output shows that the Trigger worked well again: SQL Server rejected the INSERT statement because in the first row the value 4 for the SafetyStockLevel column is lower than 10 and cannot be accepted.

Msg 50000, Level 16, State 1, Procedure TR_Product_StockLevel, Line 17

Safety Stock Level cannot be lower than 10!

 

If you have to deploy your Trigger as soon as possible, you could convince yourself that it works properly; after all, you have already run two tests and all the wrong rows were rejected. So you apply the Trigger to the production environment. But what happens if someone, or an application, tries to insert two products with the one wrong value in a different position from the one you used in the previous test? Consider the following INSERT statement, in which the first row is right and the second one is wrong.

USE [AdventureWorks2014];

GO

 

-- Test three: Try to insert two rows

-- The first row one is right, but the second one is wrong

INSERT INTO Production.Product

(Name, ProductNumber, MakeFlag, FinishedGoodsFlag, SafetyStockLevel,

 ReorderPoint, StandardCost, ListPrice, DaysToManufacture,

 SellStartDate, rowguid, ModifiedDate)

VALUES

(N'Carbon Bar 4', N'CB-0004', 0, 0, 18 /* SafetyStockLevel */,

 750, 0.0000, 78.0000, 0, GETDATE(), NEWID(), GETDATE()),

(N'Carbon Bar 5', N'CB-0005', 0, 0, 6 /* SafetyStockLevel */,

 750, 0.0000, 78.0000, 0, GETDATE(), NEWID(), GETDATE());

GO

 

The last INSERT statement completed successfully, but the inserted data does not respect the domain constraint implemented by the Trigger, as you can see in the following picture.

 

Picture 2 – Safety stock level domain integrity violated for product named “Carbon Bar 5”

 

The safety stock level value for the product named “Carbon Bar 5” doesn’t respect the business constraint implemented by the Production.TR_Product_StockLevel Trigger; this Trigger was not designed to work on multiple rows. The mistake is in the following assignment:

SELECT

  @SafetyStockLevel = SafetyStockLevel

FROM

  Inserted;

 

The local variable @SafetyStockLevel can hold only one value from the SELECT on the Inserted virtual table, and that value comes from whichever row the statement happens to return first. If that first row has a suitable value in the SafetyStockLevel column, the Trigger considers all the others right as well. In that case, disallowed values (lower than 10) in the remaining rows are stored anyway!

How can the Trigger’s author fix this issue? By checking the SafetyStockLevel value of all rows in the Inserted virtual table: if the Trigger finds even one disallowed value, it returns an error. Below is version 2.0 of the Production.TR_Product_StockLevel Trigger, which fixes the issue by replacing the previous SELECT statement with an IF EXISTS (SELECT ...) test.

USE [AdventureWorks2014];

GO

 

ALTER TRIGGER Production.TR_Product_StockLevel ON Production.Product

AFTER INSERT AS

BEGIN

  /*

    Avoid inserting products with a safety stock level lower than 10

  */

  BEGIN TRY

    -- Testing all rows in the Inserted virtual table

    IF EXISTS (

               SELECT ProductID

               FROM inserted

               WHERE (SafetyStockLevel < 10)

              )

      THROW 50000, N'Safety Stock Level cannot be lower than 10!', 1;

  END TRY

  BEGIN CATCH

    IF (@@TRANCOUNT > 0)

      ROLLBACK;

    THROW; -- Re-Throw

  END CATCH;

END;

GO

 

This new version is designed to work on multiple rows and always works properly. However, the best implementation of this business logic is a CHECK constraint, which is the best way to implement custom domain integrity. The main reason to prefer CHECK constraints over Triggers for custom domain integrity is that all constraints (CHECK, UNIQUE and so on) are checked before the statement that fires them executes, whereas AFTER DML Triggers fire after the statement has executed. As you can imagine, for performance reasons the CHECK constraint solution is better than the Trigger solution in this scenario.
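As a minimal sketch of the CHECK constraint alternative (the constraint name here is illustrative, chosen so it doesn’t clash with the CHECK constraints that already ship with AdventureWorks), the same rule could be expressed declaratively like this:

USE [AdventureWorks2014];
GO

-- Declarative alternative to the Trigger: the rule is verified
-- before the INSERT/UPDATE statement completes.
-- NOTE: existing rows must already satisfy the rule, otherwise
-- the ALTER TABLE fails (or use WITH NOCHECK to skip old rows).
ALTER TABLE Production.Product
  ADD CONSTRAINT CK_Product_SafetyStockLevel_Min10
  CHECK (SafetyStockLevel >= 10);
GO

With the constraint in place, any INSERT or UPDATE that violates the rule fails immediately, so no Trigger is needed for this check.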

Debugging Triggers

The most important programming languages have debugging tools integrated into their development environments. A debugger usually has a graphical interface that allows you to inspect variable values at run time, analyze the source code and program flow row by row, and manage breakpoints.

Every developer loves debugging tools, because they are very useful when a program miscalculates or runs into an error. Now think about a Trigger that performs a very complex operation silently. Suppose that this Trigger runs into a problem; two questions probably come to mind: “Can I debug a Trigger?” and, if so, “How can I do it?”

Debugging a Trigger is possible with the Microsoft Visual Studio development tool (except the Express edition).

Consider the first version of the Production.TR_Product_StockLevel Trigger created in the section “Triggers must be designed to work on multiple rows” earlier in this article. As you have already seen, that first version doesn’t work well with multiple rows because it wasn’t designed for them. The customer you deployed that Trigger to complains that some products have a safety threshold lower than 10 saved in the SafetyStockLevel column. You have to debug that DML AFTER INSERT Trigger; below you will learn how to do it.

The first step in debugging a Trigger is to create a stored procedure that encapsulates a statement able to fire the Trigger you want to debug. So we have to create a stored procedure that performs an INSERT statement on the Production.Product table of the AdventureWorks2014 database. The following piece of T-SQL code creates the Production.USP_INS_PRODUCTS stored procedure in the AdventureWorks2014 database.

USE [AdventureWorks2014];

GO

 

CREATE PROCEDURE Production.USP_INS_PRODUCTS

AS BEGIN

  /*

    INSERT statement to fire Trigger TR_Product_StockLevel

  */

  INSERT INTO Production.Product

  (Name, ProductNumber, MakeFlag, FinishedGoodsFlag, SafetyStockLevel,

   ReorderPoint, StandardCost, ListPrice, DaysToManufacture,

   SellStartDate, rowguid, ModifiedDate)

  VALUES

  (N'BigBike8', N'BB-5388', 0, 0, 10 /* SafetyStockLevel */,

   750, 0.0000, 78.0000, 0, GETDATE(), NEWID(), GETDATE()),

  (N'BigBike9', N'BB-5389', 0, 0, 1  /* SafetyStockLevel */,

   750, 0.0000, 62.0000, 0, GETDATE(), NEWID(), GETDATE());

END;

 

The second step consists of executing the stored procedure created in the previous step through Microsoft Visual Studio.

Open Microsoft Visual Studio and navigate to SQL Server Object Explorer, open the AdventureWorks2014 database tree, expand the Programmability folder and locate the Production.USP_INS_PRODUCTS stored procedure in the Stored Procedures folder. Next, right-click the Production.USP_INS_PRODUCTS stored procedure; a context menu will appear, and when you select the item “Debug Procedure…” a new SQL Query page opens, ready to debug the stored procedure, as you can see in the following picture.

 

Picture 3 – Debugging USP_INS_PRODUCTS stored procedure through Microsoft Visual Studio

 

The execution pointer is set to the first executable instruction of the T-SQL script automatically generated by the Visual Studio debugger. Using the step-into function (F11) you can execute the Production.USP_INS_PRODUCTS stored procedure step by step up to the INSERT statement that fires the Trigger you want to debug. If you press step into (F11) while the execution pointer is on the INSERT statement, the pointer jumps into the Trigger, onto its first executable statement, as shown in the following picture.

 

Picture 4 – Breakpoint within a Trigger

 

The debugger’s execution pointer is now on the first executable statement of the Trigger; you can execute the Trigger’s code and observe the variables’ contents step by step. In addition, you can see the exact execution flow and the number of rows affected by each statement. If multiple Triggers fire on the same statement, the Call Stack panel shows the execution chain, and you will be able to discover how the Trigger’s code works.

Statements that each Trigger should have

A Trigger is optimized when its duration is brief. It always works within a transaction, and its locks remain active until the transaction is committed or rolled back. As you can imagine, the longer a Trigger takes to execute, the higher the probability that it will block other processes in the system.

The first thing you have to do to keep Trigger execution short is to establish whether the Trigger has anything to do at all. If the statement that fired the Trigger affected no rows, there is nothing for the Trigger to do. So the first thing a Trigger should do is check the number of rows affected by the previous statement. The @@ROWCOUNT system variable tells you how many rows were changed by the previous DML statement. If that statement changed no rows, @@ROWCOUNT will be zero, and there is nothing for the Trigger to do except give control back to the caller with the RETURN (T-SQL) command.

The following piece of code should be placed at the beginning of all Triggers.

IF (@@ROWCOUNT = 0)

  RETURN;

 

Checking the @@ROWCOUNT system variable allows you to verify whether the number of rows affected is the number you expect; if not, the Trigger can give control back to the caller. In a Trigger defined on multiple statements, you can query the Inserted and Deleted virtual tables to know the exact number of inserted, updated or deleted rows.

After that, consider that for each statement executed, SQL Server sends the number of rows affected back to the client. If you aren’t interested in the row counts of the statements inside a Trigger, you can set the NOCOUNT option to ON at the beginning of the Trigger and flip it back to OFF at the end. This dramatically reduces network traffic.

In addition, you should check whether the columns you care about were actually updated. The UPDATE (T-SQL) function tells you whether the specified column was updated (within an UPDATE Trigger) or involved in an INSERT statement (within an INSERT Trigger). If the column was not updated, the Trigger has another chance to give control back to the caller; otherwise it carries on. In general, an UPDATE Trigger only has work to do when a column is updated and its values actually changed; if no values changed, the Trigger can again return control to the caller. You can check whether values changed by querying the Inserted and Deleted virtual tables. A skeleton combining these checks is sketched below.
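The following is a minimal sketch of how these statements fit together at the top of a Trigger; the Trigger, table and column names are hypothetical, not taken from the article’s examples:

-- Illustrative skeleton of an optimized DML AFTER UPDATE Trigger
CREATE TRIGGER dbo.TR_MyTable_UPD ON dbo.MyTable
AFTER UPDATE AS
BEGIN
  -- Nothing to do if the firing statement affected no rows;
  -- check @@ROWCOUNT first, because SET statements reset it to zero
  IF (@@ROWCOUNT = 0)
    RETURN;

  -- Suppress "rows affected" messages to reduce network traffic
  SET NOCOUNT ON;

  -- Nothing to do if the column we care about was not touched
  IF NOT UPDATE(MyColumn)
    RETURN;

  -- ...Trigger body: compare inserted and deleted to detect real changes...

  SET NOCOUNT OFF;
END;
GO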

Summary

Triggers seem easy to write, but writing efficient Triggers, as demonstrated, is not a simple task. A best practice is to test them thoroughly before deployment to your production environment. Another good habit is putting plenty of comments inside them, especially before complex statements that may confuse even the Trigger’s author.

 

About the author

Sergio Govoni has been a software developer since 1999; in 2000 he received a degree in Computer Science from an Italian state university. He has worked for over 11 years at a software house that produces a multi-company ERP on the Win32 platform. Today, at the same company, he is a program manager and software architect, constantly involved in several team projects, where he takes care of the architecture and the mission-critical technical details.

He has been working with SQL Server since version 7.0, and has deep knowledge of relational database implementation and maintenance, performance tuning, and problem solving. He also trains people on SQL Server and its related technologies, writes articles, and participates actively as a speaker at conferences and workshops for UGISS (www.ugiss.org), the first and most important Italian SQL Server user group. He holds the following certifications: MCP, MCTS SQL Server.

Sergio lives in Italy and loves to travel around the world. When he is not at work deploying new software and deepening his knowledge of technologies and SQL Server, Sergio enjoys spending time with his friends and family. You can meet him at conferences or Microsoft events. Follow him on Twitter or read his blogs in Italian and English.

 

About MVP Mondays

The MVP Monday Series is created by Melissa Travers. In this series we work to provide readers with a guest post from an MVP every Monday. Melissa is a Community Program Manager, formerly known as MVP Lead, for Messaging and Collaboration (Exchange, Lync, Office 365 and SharePoint) and Microsoft Dynamics in the US. She began her career at Microsoft as an Exchange Support Engineer and has been working with the technical community in some capacity for almost a decade. In her spare time she enjoys going to the gym, shopping for handbags, watching period and fantasy dramas, and spending time with her children and miniature Dachshund. Melissa lives in North Carolina and works out of the Microsoft Charlotte office.


New and Renewed MVPs Announced - Happy New Year!



Today, 1015 exemplary community leaders around the world were notified that they have received the MVP Award! These individuals were chosen because they have demonstrated their deep commitment to helping others make the most of their technology, voluntarily sharing their passion and real-world knowledge of Microsoft products with the community.

While there are more than 100 million social and technical community members, only a small portion are selected to be recognized as MVPs. Each year, around 4,000 MVPs are honored. They are nominated by Microsoft, other community individuals, or in some cases themselves. Candidates are rigorously evaluated for their technical expertise, community leadership, and voluntary community contributions for the previous year. They come from more than 90 countries, speak over 40 different languages, and are awarded in more than 90 Microsoft technologies. Together, they answer more than 10 million questions a year!

MVPs are recognized each quarter for this annual award, which continues to grow and evolve to reflect the development of Microsoft technologies.

Congratulations to the new MVPs, and welcome back to renewed MVPs. We are very excited to recognize your amazing accomplishments!

Top 5 MVP Monday Blogs of 2014


 

1. Calendar Federation with an Exchange Hybrid


Office 365 MVP Loryan Strant and Exchange Server MVP Michael Van Horenbeeck

In this blog post Exchange Server MVP Michael Van Horenbeeck and Office 365 MVP Loryan Strant will attempt to bridge the divide between the on-premises world and the cloud – and why sometimes it doesn’t always work as we expect.

A great feature of Exchange Server 2010 and 2013 is the ability to share calendar Free/Busy information with users outside of your organisation. This is also the case with Office 365 – and in fact is even easier for users to do.

However, in a hybrid between Exchange Server and Exchange Online there is a very small caveat that can cause confusion. To continue reading, click here



2. PowerPoint and Excel: Perfect Partners for Dynamic Tables and Dashboards


PowerPoint MVP Glenna Shaw 

PowerPoint 2010 and PowerPoint 2013 introduced improvements to a lot of features, but sadly PowerPoint tables were not one of them (and I can say the same for Word tables). In both apps, pretty much the only thing you can do with tables is add rows and columns, type in the information and either apply a pre-existing style or laboriously create your own style, which you can’t even save as a template.

So what’s the alternative? In PowerPoint there is a little-talked-about feature under the Insert, Tables section of the toolbar called Insert Spreadsheet. To continue reading, click here



3. Using OneNote with the Surface Pro3


SharePoint MVP Corey Roth 

The Microsoft Surface Pro 3 is an amazing device and OneNote works quite well on it.  When it comes to using OneNote with the Surface Pro 3, you have the option of using OneNote 2013 (the desktop application) or the Windows 8.1 Store application.  By default, it will use the Windows Store app when you activate it with your stylus.  This can be changed though (read on).  The feature sets of these two applications vary. This article is going to cover note taking on the Surface Pro 3 as well as what the handwriting experience is like using both applications to help you decide which works best for your needs.  To continue reading, click here 



4. Using ObjectDataSource with ASP.NET ListView for Entity Framework 6


 

Visual C# MVP Ming Man Chan
 

This article consists of three subsections:

  • Create an ADO.NET Entity Data Model
  • Add a model class for Object Data Source binding
  • Add and configure Object Data Source and ListView

When you try to create an EntityDataSource in VS 2013 you will get this error:

The provider did not return a ProviderManifest instance in VS 2013 and EntityFramework 6

To resolve this, one option is to use ObjectDataSource. To continue reading, click here

 

5. Better Best Bets with Lists in SharePoint 2013


SharePoint MVP Mikael Svenson 

Best Bets, or Promoted Results, is a great way to give end users the answer to their search query without the additional click to open a page or document to locate the precise information they were looking for. Bing and other search engines already have many examples of this if you, for instance, look for weather information or currency conversion. To continue reading, click here

 

New Microsoft Dynamics CRM 2015 Multi-Entity Search


Editor’s note: In partnership with Microsoft Press, MVPs have been contributing to an ongoing guest series on their official team blog. Today’s article is from Dynamics CRM MVP Gus Gonzalez which is the 48th in the series.  

Find Customer Records Faster and Easier with the New Microsoft Dynamics CRM 2015 Multi-Entity Search.

When the Mobile Companion Application (MoCA) was introduced for Microsoft Dynamics CRM 2013, the Global Search capabilities instantly became my favorite feature on the application. So how do you think I felt when it was announced that Global Search - a.k.a. Multi-Entity Search, Global Quick Find, and a few other names - was going to be available for Microsoft Dynamics CRM 2015 web and Outlook clients?

If you said “I bet Gus felt very excited” you would be partially right – I was also concerned. I was concerned about the performance we would get when searching a keyword on a large Microsoft Dynamics CRM 2015 organization.

Those concerns were eliminated when I got my hands on a Microsoft Dynamics CRM 2015 organization. Performance was amazing, and I’ll tell you why.

The reason Global Search works so well is the very smart people on the Microsoft Dynamics CRM product team. They based Global Search on a well-known keyword-search platform: Quick Find. Instead of creating a brand new way of finding data in CRM, they decided to use the conditions set on the Quick Find views of the entities selected for Global Search. With the improved performance features of Microsoft Dynamics CRM 2015, running a quick find is faster than ever, so conducting multiple quick finds feels like running a single quick find in CRM 2013. Very impressive work.

Performing multiple quick finds for a keyword at the same time could cause problems if you had, for example, 120 custom entities on top of the out-of-the-box entities and tried to search them all at once, right? Well, the product team added a safeguard to CRM limiting Global Search to a maximum of 10 entities, which takes care of that problem. A System Administrator can configure which entities Global Search uses in the System Settings. I’ll show you how later.

So this is how it works: if you need to find information regarding a customer, a case, a phone call, etc., you can type a keyword in the top search box and let Global Search compile the results for you:

 

Because the Global Search is based on Quick Find views configuration and capabilities, the user can also use an asterisk (*) as a wildcard character to search for partial text!

Let’s use *Ski as an example. I enter *Ski as a Search Keyword, and then press Enter:

 

Here are the results that I get:

 

Global Search found one account that contains the word “Ski” and two contacts that also match it. In this case those two contacts’ parent account is Alpine Ski House. Notice the (+) sign next to each entity name; it allows you to quick-create one of those records directly from the Global Search results.

To do more sophisticated searches, the user can click the new Advanced Find icon, which the product team has placed permanently on the navigation bar!  Continue reading full article here 

 

About the author

Gus Gonzalez is a 3-time Microsoft Dynamics CRM MVP. He has over 10 years of consulting experience in the IT industry, designing and implementing Microsoft solutions for environments ranging from enterprise to small. He worked as a full-time Microsoft Certified Trainer from 2005 to 2009, teaching Microsoft Official Curriculum classes on Microsoft Windows Server, Exchange Server, ISA Server, and SMS Server, along with Citrix, Cisco, EC-Council, and CompTIA official curricula. A CRMUG All Star and Microsoft Dynamics CRM Community guest columnist passionate about user adoption, he believes in simplicity rather than complexity when implementing Microsoft Dynamics CRM at organizations of any size to drive user adoption. Follow Gus on twitter @GusGonzalez2 


 

Insider’s Guide to Managing Service Level Agreements with Dynamics CRM 2015


Editor’s note: The following post was written by Dynamics CRM MVP Steven Foster 

Insider’s Guide to Managing Service Level Agreements with Dynamics CRM 2015 

How do you as a business manage Service Level Agreements (SLAs)? How can you utilise Dynamics CRM to manage multiple SLAs against a Case and monitor the performance against them? This post will discuss the ins and outs of setting up and working with SLAs from a business perspective. 

Dynamics CRM 2015 provides the ability to manage Sales activity, Marketing activity and Service Activity. This post will focus on Service activity with a customer and specifically related to creating and managing SLAs.

An SLA, or Service Level Agreement, determines the level of service you will provide to your customers. The level of service can depend on a number of factors, including but not limited to:

- Whether the customer is entitled to service via an entitlement. An entitlement could be something you sell or something included in a product offering, i.e. you get 9-5 service with a response time of 8 hours
- An attribute of the customer, i.e. high-value customers get x service level and low-value customers get y service level
- An attribute of the Case, i.e. if the Priority or Severity is high then provide x level of service
- The channel of communication, i.e. a post on Twitter could require a response in under 30 minutes, whereas a response via email is required within 4 hours
- Any other attribute related to the Case or associated records, i.e. customer details, product details, etc.

 

As an organisation, you want to measure your levels of service to ensure that you are performing well against them and, ideally, meeting them. If you are not meeting them, you want to be able to monitor why: is a particular team or user, or a customer type or product type, the issue?

Dynamics CRM 2015 enables you to both create multiple SLAs aligned to any attribute, channel or entitlement and monitor through views and charts on how you are performing against them.

I will now walk through the components that make up SLAs, how you create them, how they are executed and how you report on them.

Types of SLAs

Dynamics CRM 2015 provides two SLA types, as below:

- Standard – standard SLAs were introduced as part of the CRM 2013 Q2 2014 Spring release (Leo). They required a number of manual steps to get working, for example including actions in the warnings that update the SLA status field appropriately so that the timer field behaves as expected.
- Enhanced – enhanced SLAs were introduced with the CRM 2015 release; they removed some of the manual processes of the standard type and introduced a number of new features.

 

Note: For consistency and ongoing management, it is recommended to use only one type in an implementation and not mix and match. If you have not used SLAs to date, it is recommended to use enhanced SLAs over standard SLAs. If you have used standard SLAs and wish to move to enhanced, you have to recreate the SLAs. Also consider how your data and views are currently set up to report on the data captured.

The feature matrix below details the differences between the two:

| Feature | Standard SLA | Enhanced SLA |
| --- | --- | --- |
| First Response By and Resolve By dates and statuses maintained on Case entity | Yes | No |
| Use of new SLA KPI related records | No | Yes |
| Use of Quick view cards to display timers | No | Yes |
| Timer display added to Case form | Yes | No |
| Ability to pause and resume SLA | No | Yes |
| Start date, warning, failure and succeeded dates stored | No (partial; only failure date stored) | Yes |
| Last on hold time and hold minutes supported | No | Yes |
| Business Hours support | Yes | Yes |
| Entitlement support | Yes | Yes |
| Ability to create custom SLA KPI measures | No | Yes |
| Ability to add success actions | No | Yes |
| Ability to add warning actions | Yes | Yes |
| Ability to add failure actions | Yes | Yes |
| Support for default SLA | Yes | Yes |
| Update the SLA status via the SLA Item rules | Yes | No |
| SLA statuses updated automatically | No | Yes |

 

Components that make up an SLA

The following components make up an SLA:

- Customer Service Calendar – defines the business hours to be used in the SLA calculation. A business can have multiple calendars to support different SLAs, i.e. 24x7, 9-5 business days, customer-specific calendars
- Entitlements – define the agreed number of cases or amount of time contracted with the customer for support. An entitlement can only be associated with one SLA; an SLA can be associated with many entitlements
- Service Configuration – defines which statuses place an SLA on hold (a shortcut to System Settings). Note that the pause and resume statuses apply to all SLAs and thus are not SLA-specific
- Holiday Schedule – defines the dates that are considered holidays in a particular region. If you work in multiple regions you may have multiple holiday schedules. Holiday schedules can be applied to the Customer Service Calendar to be observed
- Service Agreements – define the rules that apply to the agreement: when an SLA is triggered, and when and what should occur if the SLA is successful, non-compliant or nearing non-compliance

 

Creating an Enhanced SLA

To create an enhanced SLA you first need to have created your customer calendar schedules, if applicable, so that they can be selected as part of the SLA definition.

You should consider the following before creating SLAs:

- If a customer does not have an entitlement, what default SLA should be applied? Only one default SLA can exist in the system, so all general SLA business rules will need to be included in the default SLA
- Should the SLA dates be calculated from the created date or the modified date?
- Do we allow SLAs to be paused, and if so, what statuses are permitted?
- How will my SLAs be structured? I.e. by product, customer segment, customer value, type of case, entitlement type, or priority/severity. It is crucial to have consistent rules so that you and your customers understand how they are applied
- What should happen when an SLA nears its warning or failure time? What processes should be initiated on failure, for example, and does this depend on the type of SLA?

 

Once you have determined your business rules you are ready to configure them in Dynamics CRM.

To create an SLA record, follow the steps below:

  1. Navigate to System Settings -> Service Management
  2. Select Service Level Agreements
  3. Create New
  4. Complete the form, entering your specific information:

 

 5. Save the new SLA to enable the SLA Details area

 6. Select the “+” to add a new SLA detail record, complete with your data as described below:

 
7. Note that you can overtype the Failure after and Warn after values with your own, as long as you keep the same syntax, i.e. x hours or y days
8. To enable the Success, Warning and Failure actions, you need to first save the record.

 
9. Repeat step 6 for each business rule. An SLA can include multiple business rules. If this will be the default SLA, you will need to include all the default SLA rules in this one SLA, as only one SLA can be defined as the default, i.e. not associated with an entitlement.
10. In my example I have defined multiple First response by and Resolve by KPIs by Customer category.

 

11. Once you have created the rules, activate the SLA, by clicking Activate

12. If it is to be the default SLA i.e. the SLA to be used when no entitlement is selected, click “Set as Default”

Note: Only one SLA can be defined as the Default.

 

SLAs in Action

Now that the SLA has been activated, any new Cases that meet the criteria will have the SLA applied.

The screenshot below is of a Case related to a premium customer. You can see the two SLA KPIs created for the First Response By and Resolve By SLAs, the applicable countdown timers, and the details of whether the case has been on hold. The timers automatically change colour between black, orange, red and green based on the warning, failure and success criteria.

Note: The Last hold date and minutes are not displayed by default on the Case form. You will need to add these to the case form to view them.

 

Each SLA that is triggered against a case is reflected as a workflow on the case. The clauses you defined for the success, warning and failure triggers are converted into Wait and If conditions. So if we take a look at the workflows associated with this case, we will see two workflows; open one up and you will see the conditions you defined when setting up the SLA, depicted as workflow logic. It’s a great place to look if, for some reason, an SLA KPI is not working as expected.

 

 

Advanced Topic – Adding a new KPI measure

So we have discussed creating an enhanced KPI for the standard First Response By and Resolve By KPIs, but what if you want to measure another KPI, for example Decision Date or Second Response By?

With Enhanced SLAs for Dynamics CRM 2015 you can now create your own KPIs. So what do you need to know?

- The SLA KPIs are defined as lookups to the SLA KPI Instance entity, so by default there are two lookups: one for Resolve By and another for First Response By
- The timer is displayed using a quick view for each SLA KPI. When a quick view is added to the Case form, you define the lookup field for which the quick view will display
- Any new lookup to the SLA KPI Instance entity is dynamically displayed on the SLA item form when defining the SLA details. So to add a custom SLA item, you first need to create and publish the new relationship

 

So to create a new SLA KPI for Decision Date you will need to:

  1. Open up the customisations area – Settings -> Customisations -> Customise the solution
  2. Open the SLA KPI Instance entity and create a new 1:N to Case for the new SLA KPI, providing the name of the new KPI i.e. in this example Decision Date By KPI
  3. Open up SLA KPI Instance Forms and copy one of the existing Quick Views by using Save As
  4. Open the new quick view form and change the labels to reflect the new SLA KPI
  5. Save and Publish
  6. Open up the Case form and add the new quick view form to the Enhanced SLA tab
  7. Select the new lookup field to SLA KPI Instance that you created in step 2

 
8. Save and Publish
9. You can now create a new SLA item, against an existing SLA definition or a new one, using the new Decision Date SLA KPI:

 

 

10. Once you have created the SLA item and activated it, you will be able to view the new Decision Date KPI on the Case form just like the out-of-the-box KPIs. You can add as many new KPIs as required, triggered on any condition.

 

 

Reporting on SLAs

 

You can report on SLAs by using the standard views, charts and Excel. One key point to remember when reporting on enhanced SLAs is that the key fields to report on are stored in the separate, related SLA KPI Instance entity.

To create a chart on first response status, you will need to add the First Response SLA KPI Instance fields to the Case form. You can then include the fields in a chart. For example, the chart below depicts the first response status of all active cases:

 

 

You can report on any of the SLA date fields and review which cases are nearing non-compliance. You can also create SLA-specific dashboards to monitor SLAs and performance against them.

IMPORTANT NOTE FOR ON-PREMISES AND HOSTED ENVIRONMENTS

If you have CRM installed in an on-premises or hosted environment (not CRM Online), you need to be aware of a current issue with date calculations for non-US-based users. If you do not apply this fix, you may notice that the SLA date calculations appear incorrect because of the date format being used. Unfortunately, until a fix is applied to CRM, you will need to ensure that the CRM servers running the web application are set to the US region, as per this blog: http://blogs.msdn.com/b/emeadcrmsupport/archive/2014/06/25/dynamics-crm-2013-and-date-time-not-rendered-as-expected-for-notes-related-to-case-records.aspx.

 

Once you have applied this change, the dates will be calculated as expected. A detailed community support thread, covering some of the issues other users have experienced and referencing this fix, can be found here: https://community.dynamics.com/crm/f/117/t/129027.aspx?pi49829=2#responses.

About the author

Steven is the Product and Marketing Manager for the leading Microsoft partner Intergen in New Zealand. Steven is an MVP, Microsoft Partner Seller, has an active blog and is on the committee of the Dynamics CRM user group in Auckland.

Steven has been a speaker at many events across the world on the topic of CRM during his 15 years focusing on delivering and selling CRM solutions. Steven has multi industry expertise and has delivered both small and global CRM implementations.  Read his blog, connect with him on LinkedIn or follow him on Twitter.


Hierarchy Visualizations in Dynamics CRM 2015


Editor’s note: The following post was written by Dynamics CRM MVP Mayank Singla

CRM 2015 has introduced a new way to visually explore the relationships between records. This new feature is called Hierarchy Visualizations and is available for the Account, Campaign, Case, Contact, Opportunity, Order, Product, Quote, Team, and User entities, but is only enabled out of the box for the Account, Product and User entities. However, it can be enabled for any other system or custom entity.

Before a Hierarchy Setting can be used, the “Hierarchical” option must be set to Yes on at least one 1:N relationship. To make this change, expand the entity tree view on the Solution Customization screen and open a 1:N relationship.

Please note, this feature can be enabled only for 1:N self-referential relationships.

 

The next step is to set up the Hierarchy Settings. Expand the entity tree view, go to Hierarchy Settings and click New.

 

Provide a Schema Name and a Quick View Form. If a Quick View Form does not exist, click Create New and set up a new Quick View form.

Please note, only the first four fields from the Quick View Form will be displayed on the hierarchy tile. Also, only one Hierarchy Setting per entity is allowed.

At this stage, any other existing self-referential relationship can also be marked as the hierarchical relationship. This can be achieved by clicking “Mark a relationship as enabled for hierarchies”. This opens the following screen –

 

To mark any other self-referential relationship as Hierarchical, select the record and click on “Mark hierarchical”.

 

Once the Hierarchy Settings are configured properly, go to the entity view.

 

If the relationship field configured above is populated, you will notice a hierarchy icon in front of the record. This signifies that a hierarchical relationship view is available for that record. Click the icon to view the hierarchical information –

 

Some key features on this screen are –

  • The list on the left shows the hierarchy based on the tile selected.
  • The hierarchical tiles show information based on the Quick View.
  • An “Open this record” option on the list as well as the tiles allows the user to open the record directly from this view.

Please note, the View Hierarchy option is available in the top-right corner of a record if the hierarchy field is populated.

 

Another enhancement is in Advanced Find, allowing users to search for records “Under” and “Not Under” a given record.

 

 

 

 

 

About the author

 

Mayank is a Dynamics CRM MVP and has been working as a senior consultant since 2007. He has extensive experience with small, medium and complex end-to-end Dynamics CRM implementations. He is currently working with a consulting company in Auckland, New Zealand, where he is responsible for pre-sales, analysis and design, requirements gathering, infrastructure planning, and the development and deployment of Dynamics CRM for various customers. He previously worked at Microsoft India on various products and technologies such as Dynamics CRM 2013/2011, Scribe, ClickDimensions, Dynamics AX 4.0/2009, Windows Azure, WPF, WCF, Silverlight and SQL Server.

His MVP profile page - http://mvp.microsoft.com/en-us/mvp/Mayank%20Singla-5000599. You can add him on LinkedIn - https://nz.linkedin.com/in/mayanksingla or follow his blog - https://demystifyingcrm.wordpress.com

 


 

MVP Community Camps - 244 MVPs, 2 Continents, Thousands of Participants


This weekend marks the start of something special in technical communities across Asia and Australia: the MVP Community Camps!  An estimated 5,000 IT, consumer and developer-minded technophiles will meet and discuss everything from Windows 10 to SQL Server.  The Microsoft MVP Community Camp (ComCamp) is scheduled in 28 cities in 7 countries throughout Asia and boasts 244 MVPs as presenters and speakers. 

"The MVP ComCamp is a unique opportunity to learn from industry experts, who work for leading organizations across the globe, on a variety of Microsoft and third-party solutions," said Hyper-V MVP Alessandro Cardoso.  "The community will also have the opportunity to learn firsthand about upcoming updates to Microsoft products and, more importantly, how they would benefit their business."

The goal for MVPs is to share knowledge and expertise in their own cities and countries across Asia and Australia. The goal for attendees is to expand their knowledge of Microsoft technologies and services. One of the unique attributes of the ComCamps is that MVPs will spend time answering questions and sharing valuable feedback with attendees in an intimate setting.


Each country will deliver various sessions based on speaker and attendee needs.  Some sessions will be presented in person exclusively, while others will provide streaming and video download options.  Register now for an MVP ComCamp near you!

 



 

 

 


 

MVPs Get in the Center at SCU

Next week, nearly a dozen MVPs will present during the 2015 System Center Universe (SCU), covering topics ranging from optimizing Azure Virtual Machines for performance and availability to enterprise security with Configuration Manager. SCU is a one-day, global event featuring Microsoft product team members and community experts.  This year's event will include two full breakout tracks to allow for additional content and speakers. 

"SCU is an event built by the community for the community," said event organizer and MVP Cameron Fuller.  "SCU is the only worldwide event which brings together more than 30 user group communities to provide a day of technical content for the benefit of both those communities and System Center / Azure focused people around the world."  MVPs play a crucial role in the development and execution of events like SCU and look forward to discussing tips and tricks as well as high-level Azure and System Center functionality. 

The event's keynote address will be delivered by Microsoft Enterprise Client & Mobility Corporate Vice President Brad Anderson.  Check out the presenters page for a full list of sessions and MVP presenters, and register to participate in person or via the online stream.

 


CRM Outlook Profile & CRM Online Trials


Editor’s note: The following post was written by Dynamics CRM MVP Donna Edwards 

I frequently set up Microsoft Dynamics CRM Online trials for various reasons: training, demos, development, etc.  I often want to use the CRM Outlook client with the trials, but do not want to use my Outlook work profile with the trial accounts. 

There are several strategies available for using an alternate Outlook profile with a Dynamics CRM Online trial.  One of the easiest I’ve found is to create an Outlook profile and configure Outlook to prompt for the profile you would like to use when starting Outlook.  Here are the steps to create an alternative Outlook profile in Outlook 2013 and install the CRM Outlook client to that profile. 

This post assumes you have familiarity with and access to the Windows Control Panel and Mail features, and that you have created an Office 365 and CRM Online trial account.  You need an Office 365 trial account because you need Exchange Online to configure the email account in Outlook. 

Open Mail from the control panel 

 

Select the Show Profiles button from the Profiles option 

 

Select the Add button 

 

Type a Profile Name and select OK 

Select OK and the Add Account dialog window will open 

 

After completing the fields and adding your email address to your Online account, select Next and Outlook will begin configuring the email account. 

 

Select the Finish button to complete the process 

 

Select OK after the mail account is added 

 

Important: after adding the email account, install the CRM Outlook client.  If you already have the Outlook client installed, I recommend uninstalling it before opening Outlook to ensure the CRM Outlook add-in gets associated with the new email account. 

You can now open Outlook.  You will be prompted to select a profile.  Select the profile you just added for your CRM Online account. 

 

You will be prompted to add a Microsoft Dynamics organization when opening Outlook.  I select the CRM Online option. 

 

You will be prompted again to sign in to Azure Services.  Enter your online credentials. 

 

Outlook will open and complete the CRM Outlook configuration for this profile. 

 

You can now use this CRM Outlook profile for prospect demos, training, helping users with issues, etc. 

Cheers

 

About the author

Donna has been working with the Dynamics CRM application since version 1.2.  She partners with all levels of an organization to develop and deliver flexible, scalable solutions that simultaneously address short-term business requirements and long-term strategic growth objectives. Her skill set includes: pre-sales support, solution design/architecture, functional consulting, requirements definition and analysis, business process engineering, process improvement and automation, end-user adoption, system administrator and end-user training, support, and ISV solutions. 



Creating a Lab on Windows Azure


Editor’s note: The following post was written by SharePoint Server MVP Inderjeet Singh Jaggi  

Creating a Lab on Windows Azure

Recently, when I got access to Azure, I wanted to create my lab on Azure, and it took me days to figure out the process. It is similar to what we do on our Hyper-V machines, but a bit more complex in nature because we are working in the cloud.

We follow the same process on our local machines to create a lab environment, but normally we are short on resources like disk space, RAM and CPU, which Azure takes care of. The biggest disadvantage of a local environment is that when you buy these resources, you don’t use them all the time, even though you have already paid for them. In Azure, you pay only for what you use.

Today we will learn how to create our own lab environment on Windows Azure. The whole process is lengthy to explain, so I have divided it into the 8 parts listed below. I hope you enjoy learning it.

  • Connect to your Azure Subscription using PowerShell.
  • Create a Virtual Network in Azure
  • Create a server on Azure in the Virtual Network
  • Create a new domain
  • Update Domain setting on Virtual Network and bind it to Private IP on DC
  • When creating a new server it should be part of this domain
  • Install SQL, SharePoint. etc. labs as part of this domain
  • Troubleshoot common issue during this process

 

  • Connect to your Azure Subscription using PowerShell

You cannot create your lab with just the Azure user interface. It’s said that “well planned is half done”, so let’s plan and prepare ourselves with Azure PowerShell.  Go to the link below and download the Microsoft Web Platform Installer.

  1. You can use it to download and install Azure PowerShell

http://go.microsoft.com/fwlink/p/?linkid=320376&clcid=0x409 

 

 

 2. Once PowerShell is downloaded and installed, we will get the message below

 

 3. Now we will download the Azure settings file to connect to our Azure subscription. In Azure PowerShell, type the command below.

Get-AzurePublishSettingsFile

 

 4. Once you log in to the Azure website, you will get a file named "Visual Studio Premium with MSDN-date-credentials.publishsettings".

 

Note: The file name may change based on your subscription. Save it to a secure location. This file contains secure credentials and additional information about the subscriptions you can use in your Azure environment.

 5. Now import the Azure settings file into PowerShell. Run "Import-AzurePublishSettingsFile" followed by the path of the file we just downloaded, as shown in the image below.

 

 6. To check the subscription details, run the command below

Get-AzureSubscription
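As a recap of this whole step, the connection flow can be scripted end to end. The following is a minimal sketch using the classic Azure Service Management cmdlets shown above; the file path and subscription name are placeholders you must replace with your own values:

# Download the publish-settings file (opens a browser to sign in)
Get-AzurePublishSettingsFile

# Import the downloaded file; the path below is a placeholder
Import-AzurePublishSettingsFile "C:\Secure\MSDN-credentials.publishsettings"

# Verify the subscription is available, then select it by name
Get-AzureSubscription
Select-AzureSubscription -SubscriptionName "Visual Studio Premium with MSDN"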

 

  • Create a Virtual Network in Azure.

Now let’s create a virtual network in Azure, like we do on our Hyper-V servers. We need this virtual network so that all the new and old machines are in the same IP range and can communicate with each other. You don’t need to be a networking expert to do this; just follow the steps below.

 

  1. Log in to your Azure subscription portal site https://manage.windowsazure.com
  2. Select “Network” from the left-hand menu > Select “New” in the bottom-left corner
  3. In the menu, select “Custom Create”
  4. Now type the network name and select the data center for that network (note: all computers should be in the same data center)

 

 5. For now we will leave the DNS server empty. (We should have our Domain Controller’s DNS IP here, but our DC is not yet created, so we keep it blank.) Click Next.

 

 6. On “Virtual Network Address Spaces”, provide the name of your network and the subnet count (I won’t have more than 256 machines, so I selected accordingly). Click “Add Subnet”, then click Finish.

 

 

Here we have our virtual network created. Note that this network will be used when creating our virtual machines. The very first machine we will create is our Domain Controller. Once the Domain Controller is created, we will add it to the DNS server records mentioned above.

 

  • Create a server on Azure in the Virtual Network

If you have worked with Hyper-V or any other virtual server environment, this process will be pretty simple for you. Here we will create our virtual machines. The first machine will be our Domain Controller, also known as the DC. This server manages DNS, AD users, etc., and is normally the first and most important server in a company.

  1. Log in to your Azure subscription portal site https://manage.windowsazure.com
  2. Select “Virtual Machines” from the left menu and select “Create a Virtual Machine”

 

 

3. Under “Virtual Machine” select “From Gallery”

 

 

4. On the Choose an Image screen select “Windows Server 2012 R2” and select Next

5. Type a unique name for the virtual machine. Select a username and password that you will never forget. (I selected a low configuration because I won’t be using it much.)

 

 

6. Now we have the most important step in the process. Select the DEMONETWORK you created under Region/Affinity Group/Virtual Network. Make sure our virtual network’s subnet is selected under Virtual Network Subnet. Click Next.

 

 

7. Now select Finish.

8. Once the machine is created, you will get a Connect button at the bottom of the screen.
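If you prefer scripting this step, the classic Azure PowerShell module can create the VM directly inside the virtual network. This is a rough sketch only; the service name, VM name, image name and password below are placeholder values (list real image names with Get-AzureVMImage):

# Build the VM configuration and create it inside DEMONETWORK;
# use -AffinityGroup instead of -Location if your VNet is tied to an affinity group
New-AzureVMConfig -Name "DemoDC01" -InstanceSize Small -ImageName "<image-name>" |
  Add-AzureProvisioningConfig -Windows -AdminUsername "demoadmin" -Password "<strong-password>" |
  Set-AzureSubnet -SubnetNames "Subnet-1" |
  New-AzureVM -ServiceName "DemoLabSvc" -Location "West US" -VNetName "DEMONETWORK"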

 

  • Create a new domain

Now let’s configure this server to be a Domain Controller. As discussed earlier, the Domain Controller manages DNS, AD users, etc., and is normally the first and most important server in a company.

 

  1. Log in to your Azure subscription portal site https://manage.windowsazure.com
  2. Select Virtual Machines from the left menu
  3. Select the virtual machine we just created and click Connect at the bottom
  4. Now RDP to the virtual machine and open Server Manager

 

 

5. Select Manage > Add Roles and Features

 

 

6. Select “Next” on Installation Type screen

 

 

7. Select “Next” on Destination Server screen

 

 

8. Check the box “Active Directory Domain Services”

 

 9. Click “Add Features” on the pop-up screen

 

 10. Select the check box “Restart the destination server automatically if required” > Click OK

 

 11. Once it is installed, click Close

 

 12. Open Server Manager again, click the flag at the top and select “Promote this server to a domain controller”

 

 13. Select "Add a New forest", type domain name

14. Type the DSRM password and click “Next” (Note: this is required when you try to recover your active directory)

 

 15. Now you will have to select your domain name. For example. Demo.com

16. The NetBIOS Name screen will remove .com and you will see only demo.

17. Select the Default location of installation and click “Next”

18. Click on “Install”

 

 19. Once the computer has restarted, your Domain Controller is ready.

Note: From now on, when you log on to the server you need to use your domain name along with the username we just created to authenticate. Example: demo\user1
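
Steps 5 to 18 can also be carried out in a few lines of PowerShell on the new VM. The sketch below shows the equivalent commands; the domain name and DSRM password are examples only.

# Install the AD DS role together with its management tools
Install-WindowsFeature AD-Domain-Services -IncludeManagementTools

# Promote this server to the domain controller of a brand-new forest;
# the server restarts automatically once the promotion completes
Install-ADDSForest -DomainName "demo.com" `
    -SafeModeAdministratorPassword (ConvertTo-SecureString "P@ssw0rd123" -AsPlainText -Force)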

 

 

  • Update the domain settings on the Virtual Network and bind them to the private IP of the DC

So far we have created a Virtual Network and a Domain Controller in that Virtual Network. I hope this process was simple enough to follow. It is very important that the Domain Controller and DNS have a static IP address so that all other machines know whom to contact to authenticate and to resolve computer names. Now we need to make sure that our Domain Controller has a static IP address and then assign it as the DNS on the Virtual Network. To do so, follow the steps below:

 

  1. Log in to your Azure subscription portal site https://manage.windowsazure.com
  2. Select Virtual Machines from the left menu
  3. Select the Virtual Machine we just created and click Connect at the bottom
  4. RDP to the Virtual Machine we created as a Domain Controller
  5. Open Windows PowerShell and run the command “ipconfig”. Make a note of the IP address
  6. Click Start and open DNS Manager. Expand the computer name > Forward Lookup Zones > demo.com > Make sure ipconfig and DNS show the same IP address.

 

 7. Make sure you are connected to your Azure PowerShell account by following the steps we discussed in the section “Connect to your Azure Subscription using PowerShell”

8. Run the command “Get-AzureVM”. You should see your Azure Virtual Machine’s service name and name; make a note of both

 

 9. Run the command below in Azure PowerShell:

Get-AzureVM -ServiceName "<service name from Get-AzureVM>" -Name "<name from Get-AzureVM>" | Set-AzureStaticVNetIP -IPAddress "10.0.0.4" | Update-AzureVM

 10. The system will now restart and the static IP will be assigned to it. We then need to enter the same IP in the Virtual Network we created in Azure.

11. Log in to your Azure subscription portal site https://manage.windowsazure.com

12. Select “Network” from the left pane > Select the “Demo Network” we created > Click “Configure” at the top.

13. Enter the domain name and the IP address of the Domain Controller, e.g. demo 10.0.0.4. Click Save at the bottom to save these changes.
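
Before settling on an address, you can also ask Azure whether it is free in the virtual network. A quick check, assuming the DEMONETWORK name used earlier:

# Returns IsAvailable = True if 10.0.0.4 is unused in DEMONETWORK,
# along with suggested free addresses if it is already taken
Test-AzureStaticVNetIP -VNetName "DEMONETWORK" -IPAddress "10.0.0.4"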

 

 

  • When creating a new server, make it part of this domain

We are nearly done with our domain-related activities. Now we need to make sure that all the Virtual Machines we create are part of the same Active Directory. This will help us manage the servers and users in the domain. Many demos need a domain (AD), so with this we have met the most important prerequisite.

 

  1. Copy the script below into Notepad and save it as join.ps1

# ----- SCRIPT STARTS HERE -----

$domainName = "type the domain name here, e.g. demo.com (keep the quotes)"

$username = "type domain\username, e.g. demo\user1 (keep the quotes)"

$password = "type the password of the above user, e.g. 213@jjj (keep the quotes)"

# Convert the plain-text password to a secure string and build a credential object

$securePass = ConvertTo-SecureString $password -AsPlainText -Force

$cred = New-Object System.Management.Automation.PSCredential($username, $securePass)

# Join the computer to the domain and restart it

Add-Computer -DomainName $domainName -Credential $cred -Restart -Force

# ----- SCRIPT ENDS HERE -----

 

2. Log in to your Azure subscription portal site https://manage.windowsazure.com

3. Select “Virtual Machines” from the left menu and select “Create a Virtual Machine”

 

 4. Under “Virtual Machine” select “From Gallery”

 

 5. On the Choose an Image screen, select “Windows Server 2012 R2” and select Next

6. Type the machine’s necessary details, like hardware configuration, machine name, username and password

 7. Now we have an important step in the process. Select the DEMONETWORK you created under Region/Affinity Group/Virtual Network. Make sure our virtual network’s subnet is selected under Virtual Network Subnet. Click Next

 

 8. Now comes the most important step: select Custom Script on this page and select the join.ps1 script we just created. Click Finish

 

 9. You will observe that your machine reboots during the initialization process.

10. When the machine is created, open Server Manager and you will observe that it is already part of the demo domain. On the Domain Controller, in “Active Directory Users and Computers”, you should be able to see the newly added computer.

 

 

Note: In some rare scenarios, the new computer may not get added to the domain. In that case, run the above PowerShell script line by line and the computer will be added to the domain.
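
If you script the VM creation instead of using the portal, the classic provisioning cmdlets can even join the machine to the domain at deployment time, with no custom script at all. A sketch under the same demo.com assumptions ($image is the gallery image found earlier; the other names and passwords are examples):

# Create a VM that is joined to demo.com as part of provisioning
New-AzureVMConfig -Name "DEMOSRV01" -InstanceSize Small -ImageName $image.ImageName |
    Add-AzureProvisioningConfig -WindowsDomain -AdminUsername "labadmin" -Password "P@ssw0rd123" `
        -JoinDomain "demo.com" -Domain "demo" -DomainUserName "user1" -DomainPassword "213@jjj" |
    Set-AzureSubnet -SubnetNames "Subnet-1" |
    New-AzureVM -ServiceName "demolabsvc" -VNetName "DEMONETWORK"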

 

  • Install SQL, SharePoint, etc. labs as part of this domain

Once we have confirmed that the script can add a new Virtual Machine to the domain, we can use the ready-to-use SharePoint, SQL, etc. VMs as part of our lab environment. Let’s see an example with a SharePoint server.

  1. Log in to your Azure subscription portal site https://manage.windowsazure.com
  2. Select “Virtual Machines” from the left menu and select “Create a Virtual Machine”

 

 3. Under “Virtual Machine” select “From Gallery”

 

 4. On the Choose an Image screen, select SharePoint Server 2013 Trial. This trial works for 180 days.

 

 5. Type the machine’s necessary details, like hardware configuration, machine name, username and password

6. Now we have an important step in the process. Select the DEMONETWORK you created under Region/Affinity Group/Virtual Network. Make sure our virtual network’s subnet is selected under Virtual Network Subnet. Click Next

 

 7. Now comes the most important step: select Custom Script on this page and select the join.ps1 script we just created. Click Finish

 

 8. You can see that even a SharePoint server is ready within a few minutes

9. The advantage here is that you have a single credential stored in Active Directory with which you manage all the Virtual Machines and work in your lab.

10. Below you can see that I created a SQL server and a SharePoint server and started using them in just a couple of minutes.

 

 

 

  • Troubleshooting common issues during this process

Issue 1

Error message: "No current subscription has been designated. Use Select-AzureSubscription -Current <subscriptionName> to set the current subscription."

 

Full output:

No current subscription has been designated. Use Select-AzureSubscription -Current <subscriptionName> to set the current subscription.
At line:1 char:1
+ Test-AzureStaticVNetIP -VNetName test -IPAddress 11.0.0.1
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (:) [Test-AzureStaticVNetIP], ApplicationException
+ FullyQualifiedErrorId : System.ApplicationException,Microsoft.WindowsAzure.Commands.ServiceManagement.IaaS.TestAzureStaticVNetIPCommand

 a. After some research, I found that we need to connect to the Azure subscription before running these commands. To do so, download and install Azure PowerShell: paste the URL below into your browser and install the Web Platform Installer, which gives you the option to download Azure PowerShell.

http://go.microsoft.com/fwlink/p/?linkid=320376&clcid=0x409

 

 b. Once Azure PowerShell is downloaded and installed, we will connect to our Azure subscription.

 

 c. Now we will download the Azure settings file to connect to the Azure subscription. In Azure PowerShell, type the command below.

Get-AzurePublishSettingsFile

 

 d. Once you log in to your Azure website, you will get a file named "Visual Studio Premium with MSDN-date-credentials.publishsettings". Save it to a secure location. This file contains secure credentials and additional information about subscriptions that you can use in your development environment.

 e. Now import the Azure settings file into PowerShell. Run "Import-AzurePublishSettingsFile" followed by the path of the file we just downloaded, as shown in the image below

 

 

 f. To check the subscription details, run the command below

Get-AzureSubscription

 

 g. Now the command completes successfully.

 

 

Issue 2: When you log on to some servers using a domain account, you cannot access pre-installed software like SQL Server.

 

Solution:

When the VM is created, it is created with the account you specify on the Azure VM creation screen and is then added to the domain. So by default, other accounts don’t have access to the applications installed during VM creation. You have to first log on to the server using the Azure VM account, give the appropriate permissions to the domain account, and then use it.

 About the author

Inderjeet Singh has over 10 years of professional experience in SharePoint administration. He has good knowledge of migration, implementation and design architecture of SharePoint servers. He was amongst the first few contributors to host a custom app on the SharePoint Store, which was later removed. He has worked on all versions of SharePoint (from 2001 to 2013) and writes SharePoint and Azure articles on his website.

 

About MVP Monday

The MVP Monday Series is created by Melissa Travers. In this series we work to provide readers with a guest post from an MVP every Monday. Melissa is a Community Program Manager, formerly known as MVP Lead, for Messaging and Collaboration (Exchange, Lync, Office 365 and SharePoint) and Microsoft Dynamics in the US. She began her career at Microsoft as an Exchange Support Engineer and has been working with the technical community in some capacity for almost a decade. In her spare time she enjoys going to the gym, shopping for handbags, watching period and fantasy dramas, and spending time with her children and miniature Dachshund. Melissa lives in North Carolina and works out of the Microsoft Charlotte office.

 

SharePoint Power Searching Using ContentClass


Editor’s note: The following post was written by SharePoint Server MVP Mike Smith

SharePoint Power Searching Using ContentClass

 

"Power Searching" might be a bit of a stretch, but 99% of end users doing SharePoint searches don't know about this search option. Most searches performed by SharePoint users tend to look like general web searches; they enter a keyword or two, click the search button and get back 12,000 results. They then look at the first page or two of the results, and then give up. “Power searchers” can take advantage of search Managed Properties to refine their search on hundreds of properties like FileSize, LastModifiedTime and Author. Many of these properties are intuitive to SharePoint users, but a few like ContentClass are not “guessable” and a complete list of values is not to be found.

 

Searching with Content Class

If you are searching for broad categories of SharePoint content like all tasks or all announcements you can use either a search for a Content Type or a search for a ContentClass.

 

The choice depends on whether you are looking for all "announcements" or “all things in announcement lists". As lists can have more than one content type, you will likely find more items using ContentType. (I often add the Announcement content type to my Task lists.)

Content Types are easy to work with as there is a list of them right in SharePoint: Settings, Site Settings, Content Types. The only trick needed when using ContentType is to put quotes around the content type name when there are spaces in the name. ContentClass is not as easy as there is no list of the codes in SharePoint. There's not even a complete list in the SharePoint API, MSDN or TechNet.

 

Using ContentClass

Let's say you want to find the announcement list named "Sales News". You try a general “Everything” search for "Sales News", and get back thousands of items. You put quotes around that text to search for an exact phrase, and get back a few hundred items. You are finding blog articles, Word documents and all kinds of content when all you wanted is just the list itself.

Now for the power of ContentClass… Try this search "contentclass=STS_List_Announcements". Now you are just getting a list of only announcement lists. Close, but there are hundreds of these in your SharePoint sites. Now try "contentclass=STS_List_Announcements Sales News" or "Sales News contentclass=STS_List_Announcements". There's your announcements list!

 

Tips:

  • You can write your query using wildcards or the contains operator (“:”) to broaden your searches. “contentclass:STS_List_” produces the same result as “contentclass=STS_List_*” and both will return a list of all your lists and libraries. Using just “contentclass:STS_List” will return all lists and all list items. (No “_” after “List”.)
  • When used with contentclass, “:” means “starts with” rather than “contains”. I.e., “contentclass:List” will return nothing.
  • As a default, when not searching “People” SharePoint will append an exclude to your query. I.e. a search for “Sales Teams” generates this query:
    “Sales Teams -ContentClass=urn:content-class:SPSPeople”.

 

Just what is ContentClass?

  • It’s a code that represents a type of crawled content and is stored as a search Managed Property.
  • It’s a documented property for SharePoint 2001 and 2003. In most of MSDN and TechNet it is just listed as “a string containing the content class value”.  (It’s also the name of an unrelated property used in Exchange, SharePoint QuickLinks class and elsewhere.)
  • It’s a list of codes found here https://msdn.microsoft.com/en-gb/library/ms975759.aspx. But that is an article from 2003 or thereabouts. There are hundreds of blog articles that have a list with 30 some entries. So far I have not been able to find a current and complete official list of ContentClass codes.
  • It is a search criterion usable just about anywhere you perform searches: the search box the end user sees, the Query Transform and Query editor of the search web parts, the REST API for search, and CAML searches (see the sketch below).
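
As an example of that last point, here is a sketch of running a ContentClass query through the search REST API from PowerShell. The site URL is a placeholder, and the call assumes you can authenticate with your current Windows credentials:

# Ask the search REST endpoint for announcement lists matching "Sales News"
$url = "https://yourserver/sites/yoursite/_api/search/query" +
       "?querytext='Sales News contentclass=STS_List_Announcements'"
Invoke-RestMethod -Uri $url -UseDefaultCredentials `
    -Headers @{ Accept = "application/json;odata=verbose" }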

 

Had to Build My Own List!

Where do they come from? Somewhere in the indexing process, obviously, but I can't find documentation on that either. I started by digging through SharePoint: the API, SQL tables, and the files in the install directories, but I could not find a single complete and authoritative source.

Most lists of constants can be found in the MSDN or TechNet documentation, or from an enumeration object in the SharePoint API. But not ContentClass. I ran the SQL Profiler looking for "STS_" during searches… nothing. I did text searches through many of the SharePoint SQL tables (never do this on a production farm!)… nothing useful. In the end: I wrote PowerShell scripts to find all of the feature files and to discover the type numbers in the list template XML, searched all of my sites for lists and their properties, searched the web, and just did a lot of trial and error searching.
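
The feature-file digging from that last step is easy to reproduce. Below is a sketch, assuming it runs on a SharePoint 2013 server with the default install path (the "15" hive), that pulls the Name and Type number of every list template defined in the feature element files:

# Scan all feature element files for ListTemplate definitions and print
# each template's Name and Type number (the number behind STS_List_nnn codes)
$features = "C:\Program Files\Common Files\microsoft shared\Web Server Extensions\15\TEMPLATE\FEATURES"
Get-ChildItem $features -Recurse -Filter *.xml | ForEach-Object {
    $xml = [xml](Get-Content $_.FullName)
    $xml.Elements.ListTemplate | Where-Object { $_ } | Select-Object Name, Type
}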

 

Multiple Naming Conventions

While I could not find a single source of the codes in SharePoint, I did find some hints, starting with how they are named. The strings used as ContentClass codes follow several different patterns:

  • STS_LIST_listname
  • STS_LIST_listIDnumber (from Type="000" in the feature file for the list templates)
  • STS_LISTITEM_listname
  • STS_LISTITEM_listIDnumber (from Type="000" in the feature file for the list templates)
  • urn:content-class:typeOfContent
  • urn:content-classes:typeOfContent

URN?

The Uniform Resource Names (URN) used in ContentClass may have roots in SharePoint 2001 or maybe Exchange 2000 or older technologies. They do seem to be static, as I have not seen any updates to this list in the last few versions of SharePoint. The URN codes primarily define content not found in lists and libraries. Except for “urn:content-class:SPSPeople” I have not even found any examples of their use. While the URNs are listed below, this article is going to focus on the “STS” items.

News Listing | urn:content-class:SPSListing:News
People | urn:content-class:SPSPeople
Search Query | urn:content-class:SPSSearchQuery
Category | urn:content-classes:SPSCategory
Listing | urn:content-classes:SPSListing
Person Listing | urn:content-classes:SPSPersonListing
Site Listing | urn:content-classes:SPSSiteListing
Site Registry Listing | urn:content-classes:SPSSiteRegistry
Text Listing | urn:content-classes:SPSTextListing

URN related notes:

  • Only one of these is used by the out of the box Result Sources, urn:content-class:SPSPeople.
  • When you create a Result Source you must select a “Type” option. This option appends either “urn:content-class:SPSPeople” or “-urn:content-class:SPSPeople” to the query transform.   

                                      

STS_LIST

While reviewing the ContentClass values that start with “STS_LIST” I noticed a pattern… they all seem to match the names found in the Microsoft.SharePoint.SPListTemplateType enumeration. Just take the name found there and prefix it with “STS_LIST_” and you have many of the STS_LIST contentclass codes! The only problem is that the enumeration’s list is not complete. There are many missing list types including “Assets” and “Pages”. Be aware that there are list types included in the enumeration that are not indexed by search or are hidden from the user and won’t be found in search. As a test, I have changed the status of a few hidden lists to Hidden=false and re-crawled the content and found that the predicted ContentClass codes will work. Example: STS_List_Timecard.

What about “Assets”? There is a documented “STS_LIST” for Asset libraries, but it’s not “STS_LIST_ASSETS”, it’s “STS_LIST_851”. There’s the hint for all of the missing list types. To find all of the Pages libraries all you need is to know the list’s number “850” and that the ContentClass code is then “STS_LIST_850”. These numbers can be found in the SPListTemplateType enumeration and in the Feature elements files that create the list templates.
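
Generating that mapping is straightforward. A sketch, assuming it runs in the SharePoint 2013 Management Shell (where the server object model is already loaded), that prints the predicted code for every entry in the enumeration:

# Predict ContentClass codes from the SPListTemplateType enumeration,
# e.g. "Announcements (104) -> STS_List_Announcements"
[Enum]::GetValues([Microsoft.SharePoint.SPListTemplateType]) | ForEach-Object {
    "{0} ({1}) -> STS_List_{0}" -f $_, [int]$_
}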

 

My List

With lots of coffee, trial and error and detective work I have come up with the list below of ContentClass codes. As this list is related to, and in many ways a subset of, a bigger list of lists, I have created a merged list of Registration IDs, Features IDs and ContentClass codes for lists on my blog site.

Notes:

  • For each item in this list there is a matching “Items” value. For example, there’s both an STS_List_Announcement and an STS_ListItem_Announcement.
  • I have not included hidden lists as they are not visible to users from search. These are in the larger list on my blog.
  • Not all lists are available in all sites. The available lists depend on the template and enabled features.
  • Not all of the lists can be created in SharePoint 2013, at least without using code or PowerShell. Many of these lists are “deprecated” and are only still supported for upgrades from older SharePoint versions.
  • All searches are security trimmed.
  • Be cautious with contains (“:”) and wild card searches. Using “contentclass:STS_List” will return all lists, but will also return all list items. In most cases you will want to use equals (“=”) with ContentClass. A handy exception is when searching for tasks. As there are two types of task lists you may want to search using “contentclass:STS_List_Task” or “contentclass=STS_List_Task*”
  • “STS” probably stands for SharePoint Team Services from 2001.

 

Non-list ContentClass codes

Common name | Enum Name | ContentClass | Notes
Site Collections | None | STS_Site | Search returns the home page of the top-level sites. This will include all MySites.
Webs | None | STS_Web |
All lists | None | STS_List | Not a real code, but often found in ContentClass lists on the web. It is only useful with a “contains” or wildcard search (ContentClass:STS_List or ContentClass=STS_List*), and it will return all lists and list items. A better search for all lists would be “STS_List_”.
All list items | None | STS_ListItem | Not a real code, but often found in ContentClass lists on the web. It is only useful with a “contains” or wildcard search (ContentClass:STS_ListItem or ContentClass=STS_ListItem*).
People | None | urn:content-class:SPSPeople | Most SharePoint searches modify your query and add “-urn:content-class:SPSPeople” to exclude user profile content.
All root-level ASPX pages, i.e. not in a library (example: http://yourServer/sites/yoursite/somepage.aspx) | None | STS_Document | Does not return all documents!

(None of these entries has a List ID.)

 

List ContentClass codes

Common name | Enum Name | List ID | ContentClass
Custom List | GenericList | 100 | STS_List
Document Library | DocumentLibrary | 101 | STS_List_DocumentLibrary
Survey List | Survey | 102 | STS_List_Survey
Links List | Links | 103 | STS_List_Links
Announcements List | Announcements | 104 | STS_List_Announcements
Contacts List | Contacts | 105 | STS_List_Contacts
Events List | Events | 106 | STS_List_Events
Tasks List | Tasks | 107 | STS_List_Tasks
Discussion List | DiscussionBoard | 108 | STS_List_DiscussionBoard
Picture Library | PictureLibrary | 109 | STS_List_PictureLibrary
Form Library (typically InfoPath forms) | XMLForm | 115 | STS_List_XMLForm
Web Page Library | WebPageLibrary | 119 | STS_List_WebPageLibrary
Data Connection Library | DataConnectionLibrary | 130 | STS_List_DataConnectionLibrary
Preservation Hold Library | (none) | 131 | STS_List_131
Project Tasks List | GanttTasks | 150 | STS_List_GanttTasks
Promoted Links | (none) | 170 | STS_List_170
Tasks (2013 version) | TasksWithTimelineAndHierarchy | 171 | STS_List_TasksWithTimelineAndHierarchy
Agenda | Agenda | 201 | STS_List_Agenda
Attendees | MeetingUser | 202 | STS_List_MeetingUser
Decisions | Decision | 204 | STS_List_Decision
Objectives | MeetingObjective | 207 | STS_List_MeetingObjective
Text Box / Directions | TextBox | 210 | STS_List_TextBox
Things To Bring | ThingsToBring | 211 | STS_List_ThingsToBring
Blog Posts / Tabs List | Posts | 301 | STS_List_Posts
Blog Comments | Comments | 302 | STS_List_Comments
Blog Categories | Categories | 303 | STS_List_Categories
App Catalog | (none) | 330 | STS_List_330
Apps for Office | (none) | 332 | STS_List_332
App Requests | (none) | 333 | STS_List_333
USysApplicationLog - Access | (none) | 398 | STS_List_398
MSysASO - Access | (none) | 399 | STS_List_399
Resources - Group Work Site | Facility | 402 | STS_List_Facility
Whereabouts - Group Work Site | Whereabouts | 403 | STS_List_Whereabouts
Phone Call Memo - Group Work Site | CallTrack | 404 | STS_List_CallTrack
Circulations - Group Work Site | Circulation | 405 | STS_List_Circulation
Timecard (hidden) - publishing | Timecard | 420 | STS_List_TimeCard
StatusIndicatorList / KPIs | (none) | 432 | STS_List_432
Report Library | (none) | 433 | STS_List_433
Dashboard content | (none) | 450 | STS_List_450
Data Sources - Performance Point | (none) | 460 | STS_List_460
Dashboards | (none) | 480 | STS_List_480
Categories - community site | (none) | 500 | STS_List_500
Visio Repository Site Process Diagrams | (none) | 506 | STS_List_506
MicroBlogList (MicroFeed) | (none) | 544 | STS_List_544
My Site Documents | MySiteDocumentLibrary | 700 | STS_List_MySiteDocumentLibrary
Product Catalog | (none) | 751 | STS_List_751
Pages Library | Pages | 850 | STS_List_850
Asset Library / Video Channel | (none) | 851 | STS_List_851
Video Channel Settings | (none) | 852 | STS_List_852
Hub Settings | (none) | 853 | STS_List_853
Members - community site | (none) | 880 | STS_List_880
Issue Tracking List | IssueTracking | 1100 | STS_List_IssueTracking
DraftAppsListTemplate | DeveloperSiteDraftApps | 1230 | STS_List_DeveloperSiteDraftApps
EDiscoverySources | (none) | 1305 | STS_List_1305
EDiscoverySourceGroups / Sets | (none) | 1307 | STS_List_1307
EDiscoveryCustodians | (none) | 1308 | STS_List_1308
Custom Lists / Queries | (none) | 1309 | STS_List_1309
EDiscoveryExports | (none) | 1310 | STS_List_1310
Slide Library | (none) | 2100 | STS_List_2100
Acquisition History List | (none) | 10099 | STS_List_10099

 

List Item ContentClass codes

For each of the above STS_List_nameornumber there is also an STS_ListItem_nameornumber that returns only items in that type of list.  Example: STS_ListItem_Announcements.

 

Improving Your Users' Search Experience

Why? Do a web search on “time wasted searching”…

Most users won’t want to learn or type ContentClass queries; you will need to help them. You could add Search web parts to the home page of the team sites to find lists of things they might be interested in. Something like a list of “Sites you have access to…” or maybe a list of all recent announcements from all the sites they have access to. You could add links to the Quick Launch area that are just links to a search results page with an appended query string. The easiest way to create the link URL is to perform a search and then just copy the URL.  (Tip: Quick Launch will truncate any URLs that include a “#” character.  Many of the more advanced searches will include a “#Default” in the URL.)

You could of course create a cheat sheet with tips and tricks and include the most common ContentClass codes useful in your organization.

 

In Summary

In my digging through SharePoint 2013 I have found over 100 out of the box list types. Many of these are obsolete or hidden from the user and search. The above list expands the common list of 30 or so frequently published ContentClass codes to over 120. But it’s not complete. I know that I have missed a few as I have not used every possible current and legacy site template or list template. Also remember SharePoint continues to evolve, especially Office 365 / SharePoint Online. Please share any other ContentClass codes that you discover!

I am building an update to my old List Registration ID blog article that will include the 100+ list types, their version support, feature names and IDs, and ContentClass codes and their visibility to search. So visit my blog and search for “ContentClass”. 

 

About the author

Mike is a SharePoint nut, Senior Technical Instructor at MAX Technical Training, SharePoint MVP and courseware author. He has worn many IT hats over the last thirty years, including business owner, developer, technical writer, trainer, DBA and consultant. Mike is a Microsoft Certified Trainer (MCT), specializes in SharePoint, PowerShell, SQL Server and .NET development, and is a member of the Cincinnati SharePoint User Group leadership team. He is the author of two SharePoint books: SharePoint 2010 Security for the Site Owner, and SharePoint 2007 and 2010 Customization for Site Owners. He has authored over 150 training/courseware titles over his career, with 7 courseware titles currently published in the Microsoft Courseware Library; his latest courseware title is Microsoft SharePoint 2013 Search Administration. His blog, Mike Smith's Tech Training Notes, offers years of tips, tricks, advice and random discoveries about SharePoint, PowerShell and more. 

 

About MVP Monday

The MVP Monday Series is created by Melissa Travers. In this series we work to provide readers with a guest post from an MVP every Monday. Melissa is a Community Program Manager, formerly known as MVP Lead, for Messaging and Collaboration (Exchange, Lync, Office 365 and SharePoint) and Microsoft Dynamics in the US. She began her career at Microsoft as an Exchange Support Engineer and has been working with the technical community in some capacity for almost a decade. In her spare time she enjoys going to the gym, shopping for handbags, watching period and fantasy dramas, and spending time with her children and miniature Dachshund. Melissa lives in North Carolina and works out of the Microsoft Charlotte office.

MVP Monday - SQL Server High Availability in Windows Azure IaaS


Editor’s note: The following post was written by Cluster MVP David Bermingham

SQL Server High Availability in Windows Azure IaaS

When deploying SQL Server in Windows Azure you must consider how to minimize both planned and unplanned downtime. Because you have given up control of the physical infrastructure, you cannot always determine when maintenance periods will occur. Also, just because you have given control of your infrastructure to Microsoft it does not guarantee that you are not susceptible to some of the same types of outages that you might expect in your own data center. To minimize the impact of both planned and unplanned downtime Microsoft provides what are called Fault Domains and Upgrade Domains. By leveraging Upgrade Domains and Fault Domains and deploying either SQL Server AlwaysOn Availability Groups (AOAG) or AlwaysOn Failover Cluster Instances (AOFCI) you can help minimize both planned and unplanned downtime in your SQL Server Windows Azure deployment. Throughout this document when I refer to a SQL Server Cluster, I am referring to both AOAG and AOFCI. When needed, I will refer to AOAG and AOFCI specifically.

Fault Domains are essentially “a rack of servers”, with no common single point of failure between different Fault Domains, including different power supplies and network switches. An Update Domain ensures that when Microsoft is doing planned maintenance, only one Update Domain is worked on at a given time. This eliminates the possibility that Microsoft would accidentally reboot all of your servers at the same time, assuming that each server is in a different Update Domain.

When you provision your Azure VM instances in the same Availability Set, you are ensuring that each VM instance is in a different Update Domain and Fault Domain…to an extent. You probably want to read Manage The Availability of Virtual Machines to completely understand how VMs get provisioned in different Fault Domains and Update Domains. The important part of the availability equation is ensuring that each VM participating in your SQL Server cluster is isolated from each other, ensuring that the failure of a single Fault Domain or maintenance in an Update Domain does not impact all of your Azure instances at the same time.
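
If you provision from PowerShell rather than the portal, the Availability Set is just one more parameter on the VM configuration. A sketch using the classic service management cmdlets, where the image name, size, credentials and service names are placeholders:

# Put both SQL nodes in the same availability set so that they land in
# different fault domains and update domains
"SQL1", "SQL2" | ForEach-Object {
    New-AzureVMConfig -Name $_ -InstanceSize Large -ImageName $imageName `
        -AvailabilitySetName "SQLHA-AS" |
    Add-AzureProvisioningConfig -Windows -AdminUsername "clusteradmin" -Password "P@ssw0rd123" |
    New-AzureVM -ServiceName "sqlhasvc" -VNetName "ClusterVNet"
}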

So that is all you need to know… right? Well, not exactly. Azure IaaS does not behave exactly like your traditional infrastructure when it comes to clustering. In fact, before July of 2013 you could not even create a workable cluster in Azure IaaS; it wasn’t until Microsoft released hotfix KB2854082 that it became possible. Even with that hotfix, there are still a few considerations and limitations when it comes to highly available SQL Server in Windows Azure.

Before we dive into the considerations and limitations, you need to understand a few basic Azure terms. These are not ALL the possible terms you need to know to be an Azure administrator, but these are the terms we will be discussing that are specific to configuring highly available SQL Server in Azure IaaS.

Virtual Network

Before you begin provisioning any virtual machines, you will need to configure your Virtual Network such that all your SQL Server Cluster VMs can reside in the same Virtual Network and Subnet. There is an easy Virtual Network Wizard that will walk you through the process of creating a Virtual Network. Additional information about Virtual Networking can be found here.
http://azure.microsoft.com/en-us/services/virtual-network/

If you are considering a Hybrid Cloud deployment where you stretch your on premise network to the Azure Cloud for disaster recovery purposes, you may want to review my blog post below. http://clusteringformeremortals.com/2014/01/07/extending-your-datacenter-to-the-azure-cloud-azure/

As you will see below, each SQL Server cluster must reside in a dedicated Cloud Service (see the Cloud Service section below), and clients must connect to it from outside of the Cloud Service. When creating subnets, I would create a small subnet for each cluster I plan to create. These subnets will only hold a handful of VMs and will be used exclusively for the Cloud Services that contain your SQL Server clusters.

Availability Set

As previously mentioned, an Availability Set is used to define Fault Domains and Update Domains. When provisioning your SQL Servers and File Share Witness (more on this later), make sure to put all of your virtual machines in the same Availability Set. Availability Sets are described as follows…

“An availability set is a group of virtual machines that are deployed across fault domains and update domains. An availability set makes sure that your application is not affected by single points of failure, like the network switch or the power unit of a rack of servers.”

Cloud Service

Before you go Bing “Azure Cloud Service”, you need to understand that there is the overall marketing term “Cloud Service”, which is all fine and good, but not what we are talking about here. A Cloud Service in Azure IaaS is a very specific feature that is described as follows…

“A cloud service is a container for one or more virtual machines you create. You can create a cloud service for a single virtual machine, or you can load balance multiple virtual machines by placing them in the same cloud service.”

The other thing about a Cloud Service is that it is addressable by a single public IP address. All virtual machines residing in a Cloud Service can be reached by the Public IP associated with the Cloud Service and the endpoint ports defined when you create the virtual machine. Later in this article we will also learn that it is this public IP address that will be used instead of the traditional Cluster IP Resource for connectivity to the cluster.

The thing to remember about highly available SQL Server is that when creating highly available SQL Server instances, you will want to place ALL of your SQL instances and the File Share Witness in the same Cloud Service. You are required to have a separate Cloud Service for each additional SQL Server cluster that you create. I also recommend that you reserve that Cloud Service for only the SQL Server cluster nodes and the File Share Witness. You will see later in this article that all SQL Server cluster clients will need to reside outside of the cluster’s Cloud Service, which is just one of the reasons to keep only the SQL cluster nodes and File Share Witness in a dedicated Cloud Service.

You can create a Cloud Service, join an existing Cloud Service, create an Availability Set or join an Availability Set at the time you provision your Virtual Machines as shown in Figure 1 below.

 

Figure 1 - Cloud Service and Availability Set are defined when creating your virtual machine

Configuration of SQL Cluster

Now that we have a base understanding of some of the Azure terminology, we are ready to begin the cluster installation. Whether you are configuring an AlwaysOn Availability Group cluster or an AlwaysOn Failover Cluster Instance, you will need to start with a basic cluster configuration. If you are using Windows Server 2012 R2, you are good to go. If you are using Windows Server 2012 or Windows Server 2008 R2, you will first need to install hotfix KB2854082 on each cluster node.

Assuming you have minimally completed the pre-requisites below, you are ready to create your cluster.

Pre-requisites

1. Create your Azure Virtual Network

2. Provision three VMs. We’ll call these VMs SQL1, SQL2 and DC1 for simplicity through the rest of this document

3. Place these VMs in the same Cloud Service and Availability Set

4. Apply hotfix KB2854082 if necessary (pre-Windows 2012 R2)

5. Create a Windows domain and join all servers to the domain

Creating a cluster is pretty straightforward; I won’t go into great detail here as it is the same as creating an onsite cluster. The one major difference comes at the end of the process: you will see that the Cluster Name resource fails to come online. The reason is that Azure VMs get their IP address information from DHCP, and the non-RFC-compliant DHCP service in Azure issues the cluster the same IP address as the node, causing the Cluster IP Address resource to fail to come online. In order to fix this, you will need to manually specify another address that is not in use in the subnet. Because we have no control over the DHCP scope, I would choose an IP address that is near the end of the scope. This is another reason why I like to limit the Cloud Service to just the cluster nodes, so I don’t accidentally provision another VM that uses an IP address I have already specified for my cluster.
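
A sketch of that from PowerShell on one of the nodes; the cluster name and the static address near the end of the subnet's scope are examples:

# Create a two-node cluster with no shared storage, using a manually chosen
# address instead of the duplicate one handed out by the Azure DHCP service
New-Cluster -Name "SQLCLUSTER" -Node SQL1, SQL2 -NoStorage -StaticAddress 10.0.0.200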

Because there is no shared storage in Azure, you will notice that the quorum configuration defaults to Node Majority. Node Majority for a two-node cluster is certainly not optimal, so you will need to configure a File Share Witness (FSW). In my example configuration, I configured the FSW on DC1. Wherever you configure the FSW, you should ensure that the FSW is in the same Availability Set as the cluster nodes. This ensures that you don’t have a failure of a cluster node and the FSW at the same time.
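
Switching the quorum over to the witness is a single cmdlet; the share path on DC1 is a placeholder:

# Change the quorum model to Node and File Share Majority,
# pointing at a share hosted on DC1
Set-ClusterQuorum -Cluster "SQLCLUSTER" -NodeAndFileShareMajority "\\DC1\FSW"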

Now that you have configured the basic cluster, you will need to decide whether you want to deploy an AlwaysOn Availability Group (AOAG) or an AlwaysOn Failover Cluster Instance (AOFCI). To deploy an AlwaysOn Failover Cluster Instance you will need to use a third-party, cluster-integrated replicated volume resource, such as SIOS DataKeeper Cluster Edition, as there is currently no shared storage option in Azure suitable for clustering.

AOAG or AOFCI?

This post assumes that you are familiar with SQL Server AlwaysOn options; if not, you should review High Availability Solutions (SQL Server)

While AOAG can meet the needs of many, there are certainly situations where AOAG does not fulfill the requirements. The chart below highlights some of the limitations of AOAG in comparison to AOFCI with SIOS DataKeeper Cluster Edition.

 

Figure 2 - AOAG vs. AOFCI with DataKeeper

In my experience, the two biggest reasons why people are deploying AOFCI rather than AOAG are the support for SQL Server Standard Edition and the fact that it protects the entire SQL Server instance rather than just the user-defined databases. The latter reason becomes even more important after you discover that Windows Azure only supports one client access point, meaning with AOAG all of your databases must reside in a single Availability Group. It is also much easier to create one AOFCI and have every database, including System and MSDB, be replicated and protected, rather than having to manually manage Agent Jobs, SQL user accounts and each database individually as you do with AOAG.

Configuring AOFCI and AOAG

Once again, the basic configuration of AOFCI or AOAG in the Azure Cloud is pretty much identical to how you would configure these services with on premise servers. (For detailed instructions on deploying a #SANLess cluster with DataKeeper visit my article Creating a SQL Server 2014 AlwaysOn Failover Cluster (FCI) Instance in Windows Azure IaaS). The difference comes when you are configuring the client access point. As we saw with the initial cluster creation process, the Cluster Name resource will fail to come online because the DHCP service will hand out a duplicate IP address. However, instead of simply specifying another address in the same subnet, you will need to set the Client Access Point IP address to be the same as the Cloud Service’s public IP address, with a host specific subnet mask of 255.255.255.255. Clients will then access this SQL Cluster via load-balanced VM endpoints with direct server return. The directions outlined in the Configuring the Client Access Point section below will tell you exactly how to put this all together.

Configuring the Client Access Point

Configuring the client access point and the load balanced endpoint is probably the most confusing or misunderstood part of creating SQL Server clusters in Windows Azure, or at least it was for me. If you are configuring AOAG you are in luck; there is a great article that walks you through the process step by step.

http://msdn.microsoft.com/en-us/library/dn425027.aspx

However, if you want to configure AOFCI, you have to take some of the information supplied in that article and apply it to AOFCI rather than an AOAG. You can follow Steps 1 through 3 as described in http://msdn.microsoft.com/en-us/library/dn425027.aspx to create the load balanced endpoints. However, when you get to Step 4 you will have to make adjustments since you will already have configured a client access point as part of your SQL Server Cluster Role configuration. On Step 4, “Create the availability group listener”, you can skip 1-6 and continue with 7 through 10 to change the IP address of the SQL Server Cluster resource. Once the IP address has been changed, you can bring the SQL Server Failover cluster instance online.
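
For an AOFCI, the IP change boils down to re-parameterizing the cluster's IP address resource. Below is a sketch run on a cluster node, in the spirit of that Step-by-Step article; the resource name, public VIP of the Cloud Service and probe port are placeholder examples:

# Bind the SQL cluster IP resource to the Cloud Service's public VIP,
# with a host-specific mask and a probe port for the load balancer
Get-ClusterResource "SQL IP Address 1 (sqlcluster)" | Set-ClusterParameter -Multiple @{
    Address              = "138.91.100.10"    # public VIP of the Cloud Service
    SubnetMask           = "255.255.255.255"
    ProbePort            = 59999
    OverrideAddressMatch = 1
    EnableDhcp           = 0
}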

Accessing the SQL Cluster in Azure

As previously described, the SQL Server cluster must be accessed from outside of the Cloud Service via the load balanced endpoint. Depending upon which server is active, the load balanced endpoint will redirect all client requests to the active server. At the end of the day, your SQL Server cluster should look something like Figure 3 shown below.

 

Figure 3 - Clients accessing the SQL Server Cluster
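
The load-balanced endpoint itself is added per VM with the classic cmdlets. A sketch with placeholder service and endpoint names, matching the probe port configured on the cluster IP resource:

# Add a load-balanced SQL endpoint with direct server return to each node
"SQL1", "SQL2" | ForEach-Object {
    Get-AzureVM -ServiceName "sqlhasvc" -Name $_ |
        Add-AzureEndpoint -Name "SQLEndpoint" -Protocol tcp -LocalPort 1433 -PublicPort 1433 `
            -LBSetName "SQLLB" -ProbePort 59999 -ProbeProtocol tcp -DirectServerReturn $true |
        Update-AzureVM
}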

What about Hybrid Cloud?

While this blog post is focused on High Availability in the Azure Cloud, it is certainly possible to build Disaster Recovery configurations which have some SQL cluster nodes in Azure and some nodes on premise. For more information on Hybrid Cloud configurations, read my article Creating a multi-site cluster in Windows Azure for Disaster Recovery. That article describes Hybrid Cloud solutions such as those pictured in Figure 4 below.

 

Figure 4 - Hybrid Cloud for Disaster Recovery

Summary

Windows Azure IaaS is a powerful platform for deploying business critical applications. All of the tools required to build a highly available infrastructure are in place. Knowing how to leverage those tools, especially in regard to providing high availability for SQL Server, can take a little research and trial and error. I hope that this article has helped point you in the right direction and has reduced the amount of research and trial and error that you will have to do on your own. As with most cloud services, new features become available very rapidly, and the guidance in this article may become outdated or even wrong in some cases rather quickly. For the latest guidance, please refer to my blog Clustering for Mere Mortals, where I will attempt to update guidance as things in Azure evolve.

About the author


David Bermingham is recognized within the technology community as a high availability expert and has been honored by his peers by being elected a Microsoft MVP in Clustering since 2010. David’s work as Director of Technical Evangelism at SIOS has him focused on evangelizing Microsoft high availability and disaster recovery solutions as well as providing hands-on support, training and professional services for cluster implementations. David holds numerous technical certifications and draws from over twenty years of experience in IT, including work in the finance, healthcare and education fields, to help organizations design solutions to meet their high availability and disaster recovery needs. David has recently begun speaking on deploying highly available SQL Servers in the Azure Cloud and deploying Azure Hybrid Cloud for disaster recovery.

About MVP Mondays

The MVP Monday Series is created by Melissa Travers. In this series we work to provide readers with a guest post from an MVP every Monday. Melissa is a Community Program Manager, formerly known as MVP Lead, for Messaging and Collaboration (Exchange, Lync, Office 365 and SharePoint) and Microsoft Dynamics in the US. She began her career at Microsoft as an Exchange Support Engineer and has been working with the technical community in some capacity for almost a decade. In her spare time she enjoys going to the gym, shopping for handbags, watching period and fantasy dramas, and spending time with her children and miniature Dachshund. Melissa lives in North Carolina and works out of the Microsoft Charlotte office.

 

 

Being a UC Superhero with Lync QoE Superpowers


Editor’s note: In partnership with Microsoft Press, MVPs have been contributing to an ongoing guest series on their official team blog. Today’s article is from Lync MVPs Andrew Morpeth and Curtis Johnstone, which is the 49th in the series.

Microsoft Lync, by virtue of being a Unified Communications solution, relies on an ecosystem of infrastructure to ensure a good user experience for rich real-time communication features such as voice and video.  A simple Lync audio call for example is dependent on the network, end-user devices, and many other components that can jeopardize quality.

Trying to pinpoint the underlying cause of poor user experiences in this ecosystem is very challenging; so much so that a new set of superpowers is required.  This article introduces you to the Lync Quality of Experience (QoE) data, and gets you started on learning how to leverage it to determine the causes of poor Lync media calls so you can take action and be a true UC superhero.

How can Lync QoE data help you?

Unlike traditional phone systems where a problem is typically isolated to a single system, the most common Lync dependencies that need to be considered when troubleshooting quality issues are:

  • Network connectivity (including the status of routers and switches)
  • Lync client operating system hardware and software
  • Lync server operating system hardware and software
  • Endpoint devices
  • Voice gateways (to an IP-PBX or PSTN breakout)
  • Internet Telephony Service Providers (ITSPs)

Fortunately, Lync has some in-depth reporting capability built in out of the box (provided you have decided to install the Lync monitoring service). Endpoints involved in a conversation (this includes not only clients, but phones, servers and gateways) report QoE data back to the monitoring server as part of the SIP protocol (specifically a SIP SERVICE request) at the end of the call. This is very powerful because it gives administrators the ability to view quality metrics about a particular Lync conversation from multiple vantage points, without the need for complex end-to-end monitoring of routers, switches, etc.

A sample of the quality data collected by the Lync client and sent back to the Lync server for storage in the Lync QoE database is shown here in Figure A:

 

Figure A - Example of QoE data stored as XML that the client collects and sends back to the server

 

Note: if you did want more in-depth end-to-end monitoring (e.g. more network related metrics and metrics for the portion of a call outside of the Lync environment) the new Software-Defined Networking (SDN) API opens up even greater possibilities to monitoring solution vendors, allowing network monitoring information to be combined with the Lync QoE data.

What do all those Numbers Mean?

In order to truly get the most out of QoE metrics you need to understand what the metrics are, and what constitutes a good and bad Lync experience.

The QoE data can be viewed in several ways: directly in the database or in a Lync reporting solution. Lync ships with a set of default reports which span usage and quality data. These are referred to as the Lync Monitoring reports and are part of the Monitoring service installation. These reports are not integrated with Active Directory and contain only the data that exists in the QoE database at the time (which depends on the QoE retention settings), but they are still valuable with some knowledge and a little digging.  Continue reading the full article here
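
As a taste of the "directly in the database" option, here is a sketch that pulls recent lossy audio streams out of the QoE database from PowerShell. The SQL instance name is a placeholder, the query assumes the documented QoEMetrics schema (Session and AudioStream tables), and Invoke-Sqlcmd requires the SQL Server PowerShell tools, so treat it as a starting point rather than a finished report:

# Find audio streams from the last 7 days with more than 5% packet loss
$query = @"
SELECT TOP 50 s.ConferenceDateTime, a.PacketLossRate, a.JitterInterArrival, a.RoundTrip
FROM [QoEMetrics].[dbo].[AudioStream] a
JOIN [QoEMetrics].[dbo].[Session] s
  ON s.ConferenceDateTime = a.ConferenceDateTime AND s.SessionSeq = a.SessionSeq
WHERE s.ConferenceDateTime > DATEADD(day, -7, GETUTCDATE()) AND a.PacketLossRate > 0.05
ORDER BY a.PacketLossRate DESC
"@
Invoke-Sqlcmd -ServerInstance "LYNCMON\MONITORING" -Query $query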

 About the authors

Curtis Johnstone is a Senior Unified Communication (UC) product architect with Dell Software and a Microsoft Lync MVP. He has more than 15 years of Microsoft enterprise deployment and application development experience with Microsoft Lync, Exchange, and Office 365. He leverages his UC domain expertise in the Windows Server Management business unit at Dell to design, develop, and deliver world-class management applications for Microsoft Lync, Exchange, and Office 365.  Curtis is also the founder and lead contributor of the popular Microsoft Lync blog www.insidelync.com.  Follow him on Twitter

Andrew Morpeth is a Unified Communications (UC) consultant and Lync MVP. Andrew has been working with Microsoft technologies for more than 10 years, and currently works for one of New Zealand's leading Lync integrators, Lexel Systems. Andrew is also a PowerShell lover and UC blogger at http://www.lync.geek.nz/.  Follow him on Twitter 

About MVP Mondays

The MVP Monday Series is created by Melissa Travers. In this series we work to provide readers with a guest post from an MVP every Monday. Melissa is a Community Program Manager, formerly known as MVP Lead, for Messaging and Collaboration (Exchange, Lync, Office 365 and SharePoint) and Microsoft Dynamics in the US. She began her career at Microsoft as an Exchange Support Engineer and has been working with the technical community in some capacity for almost a decade. In her spare time she enjoys going to the gym, shopping for handbags, watching period and fantasy dramas, and spending time with her children and miniature Dachshund. Melissa lives in North Carolina and works out of the Microsoft Charlotte office.

 
