Archive for the 'System Center' Category

Operations Manager 2012 R2 Management Pack Dependencies

I am not proficient at managing Ops Manager; I am at best a competent tinkerer.  I have needed to do some cleanup on our Ops Manager installation and clear out management packs that are either unused or only generate noise that we subsequently ignore.  That is a bit of an annoying task because of all the dependencies that have to be traced down.

To make it easier, I looked for a script to log each management pack's dependencies, version, and ID.  I found this post from 2009, but it wasn't very effective with the current version of Ops Manager: http://www.systemcentercentral.com/list-management-packs-dependencies-powershell-script/

So I decided to write a new one.  Here is what I came up with:

New-SCOMManagementGroupConnection -ComputerName "<Your Ops Mgr Server Name>"

# Create a new Excel workbook using COM
$Excel = New-Object -ComObject Excel.Application
$Excel.Visible = $True
$Workbook = $Excel.Workbooks.Add()
$Sheet = $Workbook.Worksheets.Item(1)

# Counter variable for rows
$intRow = 2

# Header row
$Sheet.Cells.Item(1,1) = "Parent"
$Sheet.Cells.Item(1,2) = "MP Name"
$Sheet.Cells.Item(1,3) = "Version"
$Sheet.Cells.Item(1,4) = "ID"
$Sheet.Cells.Item(1,1).Font.Bold = $True
$Sheet.Cells.Item(1,2).Font.Bold = $True
$Sheet.Cells.Item(1,3).Font.Bold = $True
$Sheet.Cells.Item(1,4).Font.Bold = $True

# Walk every management pack; for each one, list the pack itself and everything it depends on
$MPCollection = Get-SCManagementPack
foreach ($MP in $MPCollection) {
    $intRow++
    $MPParent = $MP.Name
    $Sheet.Cells.Item($intRow,1) = $MPParent
    $Sheet.Cells.Item($intRow,2) = "*************"
    # -Recurse returns the pack plus all of the packs it references
    $MPChecking = Get-SCManagementPack -Name $MPParent -Recurse
    foreach ($Dependency in $MPChecking) {
        $intRow++
        Write-Host $intRow   # simple progress indicator
        $MPName = $Dependency.Name
        $MPVersion = $Dependency.Version
        $MPID = $Dependency.Id
        $Sheet.Cells.Item($intRow,2) = $MPName
        $Sheet.Cells.Item($intRow,3) = $MPVersion
        $Sheet.Cells.Item($intRow,4) = "$MPID"
    }
}
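
Once the report shows that nothing depends on a pack you want to drop, removal is a one-liner.  Here is a minimal sketch using the SCOM-prefixed cmdlets from the OperationsManager module; the pack name is a placeholder, and removal should fail if other installed packs still reference it:

# Remove an unused management pack once the dependency report confirms nothing references it.
# "Contoso.Example.MP" is a placeholder name.
Get-SCOMManagementPack -Name "Contoso.Example.MP" | Remove-SCOMManagementPack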

Error after upgrading Orchestrator to 2012 R2

After upgrading from System Center Orchestrator 2012 SP1 to 2012 R2, my Runbooks weren't running via my scheduled tasks.  After some digging, I figured out that I couldn't open the Orchestration Console because I kept getting this error:

Error executing the current operation.
[HttpWebRequest_WebException_RemoteServer]
Arguments: NotFound
Debugging resource strings are unavailable. Often the key and arguments provide sufficient information to diagnose the problem. See http://go.microsoft.com/fwlink/?linkid=106663&Version=5.1.20913.0&File=System.Windows.dll&Key=HttpWebRequest_WebException_RemoteServer

I did some research, but couldn’t figure it out, and there was a particular Runbook that I needed to have run every night.  So I opened a ticket with Microsoft.  In order to save you the trouble, if you come across this issue, here is the solution:

1. Enable detailed logging for the connection attempt.

Create a folder to store the log file (C:\Logs in this sample); the trace listener configured below writes to C:\logs\SRV_Traces.svclog.

Edit the Web.config file located in the following default location:

C:\Program Files (x86)\Microsoft System Center 2012 R2\Orchestrator\Web Service\Orchestrator2012

======================

Part 1: add the following just below the <configuration> element:

<system.diagnostics>
  <sources>
    <source name="System.ServiceModel"
            switchValue="Information, ActivityTracing"
            propagateActivity="true" >
      <listeners>
        <add name="xml"/>
      </listeners>
    </source>
    <source name="System.ServiceModel.MessageLogging">
      <listeners>
        <add name="xml"/>
      </listeners>
    </source>
  </sources>
  <sharedListeners>
    <add name="xml"
         type="System.Diagnostics.XmlWriterTraceListener"
         initializeData="C:\logs\SRV_Traces.svclog" />
  </sharedListeners>
</system.diagnostics>

==============

Part 2: add the following inside the <system.serviceModel> section:

<diagnostics wmiProviderEnabled="true">
  <messageLogging
       logEntireMessage="true"
       logMalformedMessages="true"
       logMessagesAtServiceLevel="true"
       logMessagesAtTransportLevel="true"
       maxMessagesToLog="3000"
   />
</diagnostics>

2. Perform an iisreset and test connecting to the Orchestration Console to reproduce the error.

3. Stop the IIS Site and view the resulting log file.

4. Opening the log file with SvcTraceViewer.exe makes it much easier to parse.

You can get that tool either by installing (a non-Express version of) Visual Studio or by installing the free Windows SDK (http://www.microsoft.com/downloads/details.aspx?FamilyID=E6E1C3DF-A74F-4207-8586-711EBE331CDC&displaylang=en).

5. Drilling into the XML data for the "Handling an Exception" entry and locating the inner exception, we found the following:

System.Data.SqlClient.SqlException: The EXECUTE permission was denied on the object 'GetSecurityToken', database '<SCO DB Name>', schema 'Microsoft.SystemCenter.Orchestrator'.

It appears that in uninstalling/reinstalling the Web Console the needed permissions were not updated in SQL.

6. To address this issue, we ran a SQL script contained in the following MSI:

%localappdata%\Microsoft System Center 2012\Orchestrator\Microsoft.SystemCenter.Orchestrator.ManagementServer.msi

From the *.SQL files it contains, we located the file Microsoft.SystemCenter.Orchestrator.Roles.SQL.

Using the text from this file, we created a new query to run against the Orchestrator database to reapply the permission grant operations.
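
For reference, the grants in that file boil down to giving the Orchestrator database roles EXECUTE rights on the Microsoft.SystemCenter.Orchestrator schema.  Below is a minimal sketch of reapplying that kind of grant from PowerShell; the server instance and role names are placeholders, and the authoritative statements are the ones in Microsoft.SystemCenter.Orchestrator.Roles.SQL itself:

# Hypothetical sketch - reapply an EXECUTE grant on the Orchestrator schema.
# Server, database, and role names are placeholders; use the statements in
# Microsoft.SystemCenter.Orchestrator.Roles.SQL as the source of truth.
$query = @"
GRANT EXECUTE ON SCHEMA::[Microsoft.SystemCenter.Orchestrator]
    TO [Microsoft.SystemCenter.Orchestrator.Operators];
"@
Invoke-Sqlcmd -ServerInstance 'SQLSERVER\INSTANCE' -Database '<SCO DB Name>' -Query $query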

The key was step 6.  The security evidently didn't get set up correctly during the upgrade and needed to be fixed manually.

Error (415) adding a host to SCVMM 2012 sp1

I kept having errors adding hosts to a VMM server, even though all of the prereqs were met.  

I received the following errors every time I tried to add the hosts:

Error (415)
Agent installation failed copying C:\Program Files\Microsoft System Center 2012\Virtual Machine Manager\agents\I386\3.1.6011.0\msiInstaller.exe to \\<hostname>\ADMIN$\msiInstaller.exe.
The specified network name is no longer available

Recommended Action
1. Ensure <Hostname.FQDN> is online and not blocked by a firewall.
2. Ensure that file and printer sharing is enabled on <Hostname.FQDN> and is not blocked by a firewall.
3. Ensure that there is sufficient free space on the system volume.
4. Verify that the ADMIN$ share on <Hostname.FQDN> exists. If the ADMIN$ share does not exist, reboot <Hostname.FQDN> and then try the operation again.

Warning (10444)
The VMM management server was unable to impersonate the supplied credentials.

Recommended Action
To add a host in a disjointed domain namespace, ensure that the credentials are valid and of a domain account. In addition, the SCVMMService must run as the local system account or a domain account with sufficient privileges to be able to impersonate other users.

This took me much longer than the 5 minutes it should have taken to figure out. 

Basically, we have two links to the remote hosts.  Traffic to that remote site is routed differently depending on which subnet it is on.  Also, we have a VLAN that is specifically set aside for switch management.  Once I moved the VMM server to a VLAN that was NOT restricted, the hosts added just fine.

If that isn't your issue but you still get the Error (415) above, there is a knowledge base article that says you may have to enable the File Server role first on a 2012 host, as sketched below.
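
A quick way to do that on a Server 2012 host from PowerShell, shown here as a sketch with a placeholder host name:

# Enable the File Server role service on the prospective host (Server 2012).
# "HYPERV01" is a placeholder host name.
Install-WindowsFeature -Name FS-FileServer -ComputerName HYPERV01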

Using SCOJobRunner

We have started using System Center Orchestrator (2012 SP1) to do some automation.  Most of what we have done so far could be done outside of Orchestrator pretty easily.  Having it in Orchestrator makes it easier to keep track of all the automated tasks that we have.  (A central repository, in theory.)

I have had a few different issues so far with the way that Orchestrator works.  It seems there is a common issue of Runbooks not showing up in the web console.  This isn't hard to correct, it seems, but it is annoying that they don't show up automatically.

The way to get them to show up seems to be to clear the AuthorizationCache:

Hi, by default the Orchestrator console refreshes every 10 minutes. You could try updating your AuthorizationCache, which is done by default every 10 minutes. If you run

TRUNCATE TABLE [Microsoft.SystemCenter.Orchestrator.Internal].AuthorizationCache

in the Orchestrator database, do they show up directly then? Make sure you have a DB backup before you do anything in the database.

http://social.technet.microsoft.com/Forums/sv/scogeneral/thread/3a4f49f1-b282-465c-84aa-e84335c4a7f9
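
If you end up flushing that cache regularly, it can be run from PowerShell as well.  A minimal sketch, assuming the SQL PowerShell module (for Invoke-Sqlcmd) is available; the server and database names are placeholders, and, as the forum answer says, back up the database first:

# Hypothetical sketch - flush the Orchestrator authorization cache so new Runbooks
# appear in the web console sooner. Server/database names are placeholders.
# Back up the Orchestrator database before touching it.
$query = "TRUNCATE TABLE [Microsoft.SystemCenter.Orchestrator.Internal].AuthorizationCache"
Invoke-Sqlcmd -ServerInstance 'SQLSERVER\INSTANCE' -Database 'Orchestrator' -Query $query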

Once they show up in the web console, you can use SCOJobRunner to call the Runbook.  That utility can be found here: http://blogs.technet.com/b/orchestrator/archive/2012/05/15/cool-tool-new-command-line-utility-to-start-a-runbook.aspx

Once you have that, you can use Task Scheduler to call the Runbook with SCOJobRunner.  The one thing that is kind of non-obvious is finding the Runbook ID.  There are a couple of ways, but here is a simple one:

An easy trick to getting the runbook ID is to go to the Orchestrator web console and click on the runbook itself in the left hand pane.  Within the URL you will find the runbook ID.

Example: http://server:82/#/RunbooksPage$FolderId=cdafbfdc-363f-49c4-81a0-62a18236a5ce&RunbookId=e46304a1-f900-4665-b0bc-ea0ad6c9f86e&RunbookInstanceId=&TabId=1&Filter

Vaughn

http://social.technet.microsoft.com/Forums/ko/scogeneral/thread/24c13d8c-b6d6-45c5-87c3-a68801a9005b
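
If you would rather not eyeball the URL, a small bit of PowerShell can pull the ID out for you.  This is just a convenience sketch; the URL is the example from the quote above:

# Extract the RunbookId GUID from an Orchestration console URL.
$url = 'http://server:82/#/RunbooksPage$FolderId=cdafbfdc-363f-49c4-81a0-62a18236a5ce&RunbookId=e46304a1-f900-4665-b0bc-ea0ad6c9f86e&RunbookInstanceId=&TabId=1&Filter'
if ($url -match 'RunbookId=([0-9a-fA-F-]{36})') {
    $runbookId = $Matches[1]
    Write-Host "Runbook ID: $runbookId"
}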

SCVMM and P2V Adventures

Where I work, we have been using Microsoft Virtualization since Virtual Server was in Beta.  Of course, we don’t necessarily use all of the functions and features of all the software we have, but one feature that I have used a good bit is the “Convert physical server” action in System Center Virtual Machine Manager.  Until recently, I have used this with great success.  We run IBM xSeries servers and I have converted something like 50 of them to virtual machines running on Hyper-V over the past several years. 

In late 2007, we bought our first IBM Blade Center (which I am very happy with) and with that move we also decided to do “boot from SAN” for all of our blades.  Just seemed to make sense that we wouldn’t put moving parts in a device that was designed to run so well without moving parts. 

At the time, we were implementing a new ERP system and several “hanger on” type applications, and Hyper-V (virtualization in general) wasn’t something that was supported by a lot of the software we were deploying.  So we have a lot of powerful blade servers, running a lot of low use applications.  I have managed to eradicate several of those wasteful installations, but there are a set that I am only now getting buy-in to virtualize. 

And today's adventure begins with a Windows Server 2003 SP2 machine, installed boot-from-SAN on an IBM HS21-XM blade server.

First attempt:

1.  Convert physical server

2.  Virtual machine name

3.  Scan System

[screenshot: system scan results]

Looks good..

4. Conversion options

[screenshot: conversion options]

we can try the defaults..

5.  Specify the processor and memory… 

6.  Select the host, path, network, start options, etc..

7.  The job starts, the machine gets copied over, and …

That try resulted in a blue screen loop.. 

[screenshot: blue screen]

Ok… time to try the Offline conversion:

1. Proceed as above but select the Offline conversion option at step 4.

2.  hmm..  conversion warnings… must correct to proceed..

Warning (13246)
No compatible drivers were identified for the device: Broadcom BCM5708S NetXtreme II GigE (NDIS VBD Client). The offline physical-to-virtual conversion requires a driver for this device.

Device Type: network adapter
Device Description: Broadcom BCM5708S NetXtreme II GigE (NDIS VBD Client)
Device Manufacturer: Broadcom Corporation
Hardware IDs (listed in order of preference):
B06BDRV\L2ND&PCI_16AC14E4&SUBSYS_03271014&REV_12

Compatible IDs (listed in order of preference):
B06BDRV\L2ND&PCI_16AC14E4&SUBSYS_03271014
B06BDRV\L2ND&PCI_16AC14E4
B06BDRV\L2ND

Recommended Action
Create a new folder under C:\Program Files\Microsoft System Center Virtual Machine Manager 2008 R2\Driver Import on the Virtual Machine Manager server and then copy the necessary 32-bit Windows Vista driver package files for this device to the new folder. The driver package files include the driver (.sys) and installation (.inf and .cat) files. Check the device manufacturer’s website for the necessary drivers.

We don't really need to do that, right…

Had some trouble with that part… I finally figured out that the drivers that need to be placed in that folder are the "RIS" drivers.

Try number 3 (or 30, I lost count)…

1. Proceed as in try number 2, ignore the warning (since we did put the driver in there), and…

Blue screen loop…

Hmm… maybe this is just not meant to be.  Did some more searching and found this article:

http://blogs.msdn.com/b/robertvi/archive/2009/10/07/after-installing-hyper-v-integration-services-on-the-next-reboot-the-vm-displays-bsod-0x0000007b.aspx 

Basically, there are some people seeing the exact same blue screen that I was seeing, except this was after the install of updated integration components.  But I wasn’t installing integration components yet… or was I?

[screenshot]

Ok, so maybe it was getting that far and just "blowing up" after the install of the components.  The good thing about this being a P2V is that I can go back to the source machine pretty easily and check the registry:

[screenshot: registry on the source machine]

Looks like we may have an answer here.  Change the HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Wdf01000\Group entry to be WdfLoadGroup instead of base. 
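
On the source machine that change can be made in regedit, or with something like the following sketch (on a Windows Server 2003 source, reg.exe or regedit may be the more practical route if PowerShell isn't installed):

# Set the Wdf01000 service's load-order group to WdfLoadGroup (instead of "base"),
# per the linked blog post, before re-attempting the P2V conversion.
Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\Wdf01000' -Name 'Group' -Value 'WdfLoadGroup'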

It is my guess that this would have worked even with the online conversion option.

Influencers Blog

So the System Center guys have provided a place for people who work with System Center products to see a conglomeration of posts from various professionals who have registered to blog about System Center products.  How fun…

Blog Posts by System Center Influencers



Below are the most recent posts from several of the members of the System Center Influencers Program. Note that Microsoft does not review the content or endorse it in any way; we present this content in a feed form for your information and convenience. (In the event that the feed refuses to render due to the flakiness of the third-party feed service, simply use the feed embedded in the RSS icon above.)

Nexus SC: The System Center Team Blog : Blog Posts by System Center Influencers

Error installing DPM 2010 Beta

I was installing the DPM 2010 Beta (finally) and had an issue trying to get SQL 2008 to install.  I finally figured out that I had the install files stored too deeply in a network share.  I figured this out by running the SQL installer directly; when it went to check prerequisites, it flagged an error on one section, and clicking for more info showed this:

Rule "Long path names to files on SQL Server installation media" failed.

SQL Server installation media on a network share or in a custom folder can cause installation failure if the total length of the path exceeds 260 characters. To correct this issue, utilize Net Use functionality or shorten the path name to the SQL Server setup.exe file.

So, I moved it to a shorter path and it installed just fine.
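
The "Net Use" suggestion in that rule text amounts to mapping the deep share to a drive letter so the path to setup.exe gets shorter.  A quick sketch, with a placeholder share path and drive letter:

# Map the deeply nested share to a drive letter to shorten the path to setup.exe.
# \\fileserver\installs\SQL2008 is a placeholder path.
net use S: \\fileserver\installs\SQL2008
S:\setup.exe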

Data Protection Manager 2010

So I am a bit late realizing this, but the Beta for DPM 2010 is available now on the Connect site.  I haven’t read anything on it yet, so mainly I am just posting this to make myself look into it.

https://connect.microsoft.com/Downloads/DownloadDetails.aspx?SiteID=840&DownloadID=22070

Follow up to the DPM recovery point expiration issues

Previously, I blogged about issues I was having where old recovery points were not being expired/removed from my DPM servers.  I had to open a ticket with Microsoft and worked with them to determine the cause; since then, they have released a fix.

The fix that Microsoft developed is here: http://www.microsoft.com/downloads/details.aspx?FamilyID=aee949aa-d3e7-4b0f-b718-00b7c20f1257&displayLang=en

A few people have asked for the PowerShell script “show-pruneshadowcopies.ps1” that Microsoft provided and I mentioned in my previous post (here).  The script looks like this:

#displays all RP for data sources and shows which RP’s would be deleted by the regular pruneshadowcopies.ps1
# Outputs to a logfile:  C:\Program Files\Microsoft DPM\DPM\bin\SHOW-PRUNESHADOWCOPIES.LOG

#Author    : Mike J
#Date    : 02/24/2009
$version="V1.0"

$date=get-date
$logfile="SHOW-PRUNESHADOWCOPIES.LOG.txt"

function GetDistinctDays([Microsoft.Internal.EnterpriseStorage.Dls.UI.ObjectModel.OMCommon.ProtectionGroup] $group,
[Microsoft.Internal.EnterpriseStorage.Dls.UI.ObjectModel.OMCommon.Datasource] $ds)
{   
    if($group.ProtectionType -eq [Microsoft.Internal.EnterpriseStorage.Dls.UI.ObjectModel.OMCommon.ProtectionType]::DiskToTape)
    {
        return 0
    }
    $scheduleList = get-policyschedule -ProtectionGroup $group -ShortTerm
    if($ds -is [Microsoft.Internal.EnterpriseStorage.Dls.UI.ObjectModel.FileSystem.FsDataSource])
    {
        $jobType = [Microsoft.Internal.EnterpriseStorage.Dls.Intent.JobTypeType]::ShadowCopy
    }
    else
    {
        $jobType = [Microsoft.Internal.EnterpriseStorage.Dls.Intent.JobTypeType]::FullReplicationForApplication
        if($ds.ProtectionType -eq [Microsoft.Internal.EnterpriseStorage.Dls.Intent.ReplicaProtectionType]::ProtectFromDPM)
        {           
            return 2
        }
    }
    write-host   "Look for jobType $jobType"

    foreach($schedule in $scheduleList)
    {
        write-host("schedule jobType {0}" -f $schedule.JobType)
        if($schedule.JobType -eq $jobType)
        {
            return [Math]::Ceiling(($schedule.WeekDays.Length * $ds.RecoveryRangeinDays) / 7)
        }
    }

    return 0
}

function IsShadowCopyExternal($id)
{
    $result = $false;

    $ctx = New-Object -Typename Microsoft.Internal.EnterpriseStorage.Dls.DB.SqlContext
    $ctx.Open()

    $cmd = $ctx.CreateCommand()
    $cmd.CommandText = "select COUNT(*) from tbl_RM_ShadowCopy where shadowcopyid = '$id'"
    write-host $cmd.CommandText
    $countObj = $cmd.ExecuteScalar()
    write-host $countObj
    if ($countObj -eq 0)
    {
        $result = $true
    }
    $cmd.Dispose()
    $ctx.Close()

    return $result
}

function IsShadowCopyInUse($id)
{
    $result = $true;

    $ctx = New-Object -Typename Microsoft.Internal.EnterpriseStorage.Dls.DB.SqlContext
    $ctx.Open()

    $cmd = $ctx.CreateCommand()
    $cmd.CommandText = "select ArchiveTaskId, RecoveryJobId from tbl_RM_ShadowCopy where ShadowCopyId = '$id'"
    write-host $cmd.CommandText
    $reader = $cmd.ExecuteReader()
    while($reader.Read())
    {
        if ($reader.IsDBNull(0) -and $reader.IsDBNull(1))
        {
            $result = $false
        }
    }
    $cmd.Dispose()
    $ctx.Close()

    return $result
}

"**********************************" > $logfile
"Version $version" >> $logfile
get-date >> $logfile

$dpmservername = &"hostname"

$dpmsrv = connect-dpmserver $dpmservername

if (!$dpmsrv)
{
    write-host "Unable to connect to $dpmservername"
    exit 1
}

write-host $dpmservername
"Selected DPM server = $DPMservername" >> $logfile
$pgList = get-protectiongroup $dpmservername
if (!$pgList)
{
    write-host   "No PGs found"
    disconnect-dpmserver $dpmservername
    exit 2
}

write-host("Number of ProtectionGroups = {0}" -f $pgList.Length)
$replicaList = @{}
$latestScDateList = @{}

foreach($pg in $pgList)
{
    $dslist = get-datasource $pg
    if ($dslist.length -gt 0)
    {
    write-host("Number of datasources in this PG = {0}" -f $dslist.length)
    ("Number of datasources in this PG = {0}" -f $dslist.length) >> $logfile
     }
    Foreach ($ds in $dslist)
    {
       write-host("DS NAME=  $ds")
       ("DS NAME=  $ds") >>$logfile
    }
    foreach ($ds in $dslist)
    {       
        $rplist = get-recoverypoint $ds | where { $_.DataLocation -eq 'Disk' }
        write-host("Number of recovery points for $ds {0}" -f $rplist.length)
        ("Number of recovery points for $ds {0}" -f $rplist.length) >>$logfile 
        $countDistinctDays = GetDistinctDays $pg $ds
        write-host("Number of days with fulls = $countDistinctDays")
        ("Number of days with fulls = $countDistinctDays") >>$logfile
        if($countDistinctDays -eq 0)
        {
            write-host   "D2T PG. No recovery points to delete"
            "D2T PG. No recovery points to delete" >>$logfile
            continue;
        }
        $replicaList[$ds.ReplicaPath] = $ds.RecoveryRangeinDays
        $latestScDateList[$ds.ReplicaPath] = new-object DateTime 0,0
        $lastDayOfRetentionRange = ([DateTime]::UtcNow).AddDays($ds.RecoveryRangeinDays * -1);       
        write-host("Distinct days to count = {0}. LastDayOfRetentionRange = {1} " -f $countDistinctDays, $lastDayOfRetentionRange)
        ("Distinct days to count = {0}. LastDayOfRetentionRange = {1} " -f $countDistinctDays, $lastDayOfRetentionRange) >>$logfile
        $distinctDays = 0;
        $lastDistinctDay = (get-Date).Date
        $numberOfRecoveryPointsDeleted = 0

        if ($rplist)
        {
            foreach ($rp in ($rplist | sort-object -property UtcRepresentedPointInTime -descending))
            {                       
                if ($rp)
                {                   
                    if ($rp.UtcRepresentedPointInTime.Date -lt $lastDistinctDay)
                    {
                        $distinctDays += 1
                        $lastDistinctDay = $rp.UtcRepresentedPointInTime.Date
                    }
                    write-host(" $ds")
                    (" $ds") >>$logfile
                    write-host("  Recovery Point #$distinctdays RPtime={0}" -f $rp.UtcRepresentedPointInTime)
                    ("  Recovery Point #$distinctdays RPtime={0}" -f $rp.UtcRepresentedPointInTime) >>$logfile
                    if (($distinctDays -gt $countDistinctDays) -and ($rp.UtcRepresentedPointInTime -lt $lastDayOfRetentionRange))
                    {
                        write-host ("Recovery Point would be deleted ! – RPtime={0}" -f $rp.UtcRepresentedPointInTime)  -foregroundcolor red
                        ("Recovery Point would be deleted ! – RPtime={0} <<<<<<<" -f $rp.UtcRepresentedPointInTime) >>$logfile
#remove-recoverypoint $rp -ForceDeletion -confirm:$true | out-null
                        $numberOfRecoveryPointsDeleted += 1
                    }
                    else
                    {
                        write-host "    Recovery point not expired yet"
                        "    Recovery point not yet expired" >>$logfile
                    }
                }
                else
                {
                    write-host "Got a NULL rp"
                    "Got a NULL rp" >>$logfile
                }   
            }

            write-host "Number of RPs that would be deleted = $numberOfRecoveryPointsDeleted"  
            "Number of RPs that would be deleted = $numberOfRecoveryPointsDeleted" >>$logfile            
        }
    }
}

disconnect-dpmserver $dpmservername
write-host "Exiting from script"

exit

DPM v 3

I just watched a webcast on DPM v3 and thought I would share some of what I got from that.

In the last 18 months, DPM 2007 (v2) delivered application protection for Exchange, SQL Server, SharePoint, and virtualization environments running Virtual Server and Hyper-V.  Disaster recovery with Iron Mountain, local data source protection, and client backups have also come out through DPM 2007, its first feature update, and Service Pack 1.  Now it is time to show what is coming next for DPM.

A few top line items are support for the following:

  • support for Exchange 14, with more granular restore
  • protection of the entire SQL instance, with auto-discovery of new DBs
  • thousands of DBs per DPM server
  • end-user recovery by the SQL admin (role-based access from the DPM console)
  • Office 14
  • AD appears as a data source in the DPM UI
  • image restore from a centrally managed DPM server, executed locally
  • support for Windows guests on VMware hosts
  • SAP running on MS SQL

and some other improvements:

  • up to 100 servers, 1,000 laptops, and 2,000 databases per DPM server
  • management pack updates
  • automatic re-running of jobs and improved self-healing (this is a huge one in my book)
  • auto-protection of new sources for SQL and MOSS
  • improved scheduling capabilities
  • one-click DPM DR failover and failback
  • continued support for SAN (scripts/whitepapers)

Platform requirements:

  • DPM Server must be 64-bit Windows Server 2008 R2
  • Integration capability with Windows EBS 2008 R2