Express Route Provisioning Error

We have recently decided to invest in an ExpressRoute circuit for Azure.  It is supposed to improve connectivity to both Azure and Office 365.  There are two ways to provision an ExpressRoute circuit, and both require PowerShell.

There is the classic: https://azure.microsoft.com/en-us/documentation/articles/expressroute-howto-circuit-classic/

And there is the Resource Manager: https://azure.microsoft.com/en-us/documentation/articles/expressroute-howto-circuit-arm/

Here is the note about those options:

Resource Manager: This is the newest deployment model for Azure resources. Most newer resources already support this deployment model and eventually all resources will.

Classic: This model is supported by most existing Azure resources today. New resources added to Azure will not support this model.

This seems to indicate that using Resource Manager is the right way to go long term.

The problem (for me, currently) is that the documentation isn’t quite where I think it should be.  If you try to run the commands to set up ExpressRoute and ask for detailed help, you get little if any helpful information.

One item kind of bothers me.  When you request the service provider information using the “Get-AzureRmExpressRouteServiceProvider” command, the results are not as informative as they need to be.  I say this because the results look like this:

Name              : Verizon
Id                : /subscriptions//resourceGroups//providers/Microsoft.Network/expressRouteServiceProviders/
ProvisioningState :
Type              : Microsoft.Network/expressRouteServiceProviders
PeeringLocations  : null
BandwidthsOffered : null

Name              : Vodafone
Id                : /subscriptions//resourceGroups//providers/Microsoft.Network/expressRouteServiceProviders/
ProvisioningState :
Type              : Microsoft.Network/expressRouteServiceProviders
PeeringLocations  : null
BandwidthsOffered : null

Name              : Zayo Group
Id                : /subscriptions//resourceGroups//providers/Microsoft.Network/expressRouteServiceProviders/
ProvisioningState :
Type              : Microsoft.Network/expressRouteServiceProviders
PeeringLocations  : null
BandwidthsOffered : null
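In principle, you should be able to pull a single provider and expand the peering locations directly.  Here is a sketch of what I mean (“Equinix” is just an example name); in my case the property still came back null:

# Sketch: list the peering locations for one provider (the name is an example)
Get-AzureRmExpressRouteServiceProvider |
    Where-Object { $_.Name -eq "Equinix" } |
    Select-Object -ExpandProperty PeeringLocations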

From those results, you are supposed to then run a command (per the documentation) that looks like this:

New-AzureRmExpressRouteCircuit -Name "ExpressRouteARMCircuit" -ResourceGroupName "ExpressRouteResourceGroup" -Location "West US" -SkuTier Standard -SkuFamily MeteredData -ServiceProviderName "Equinix" -PeeringLocation "Silicon Valley" -BandwidthInMbps 200

The problem is that the previous results don’t give you the PeeringLocation; all of them come back as “null”.  I looked at the sample output from the “Classic” process and picked the location that seemed to make the most sense.  The command finished, so I assume it worked correctly.
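One way to sanity-check it is to pull the circuit back and look at its provisioning state.  A minimal sketch, assuming the names from the command above:

# Retrieve the circuit we just created and check its state
$circuit = Get-AzureRmExpressRouteCircuit -Name "ExpressRouteARMCircuit" -ResourceGroupName "ExpressRouteResourceGroup"
$circuit.ProvisioningState
# This one stays "NotProvisioned" until the service provider finishes their side
$circuit.ServiceProviderProvisioningState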

StorSimple Virtual Appliance

I have been interested in StorSimple for some time, but haven’t actually used it before.  Recently, Microsoft announced a virtual appliance preview.  It looks pretty interesting and I have a test scenario where I want to use it.  If my testing goes well, I may expand my use of it. 

The documentation on it is pretty intimidating, since there are 15 PDFs to help you get started.

It is supported on both VMware and Hyper-V, and the suggested use cases are:

  • File Server – User file shares or Department file shares
  • iSCSI Server – Small SQL databases or User home folders

My initial test will be as a small departmental file share.

2015 Blog Report

Well, I didn’t hit the number I was hoping to hit.  I made it to 162, and I was hoping to make it to 175.  Maybe by the end of 2016?

Since this is a WordPress site and I use Jetpack, here is a nice little report provided with no effort on my part: http://jetpack.me/annual-report/6842850/2015/

Shrinking volumes

Sometimes, I find it useful to shrink volumes.  This happens about once every 2 or 3 years.  Since I do it so infrequently, I have to look it up every time.

In my experience, while you can do it from the GUI, it isn’t always successful.  Also, I have never seen it work when trying to shrink by the complete amount that is reported as available.  My experience is mostly with very large (larger than 1 TB) volumes.

To shrink via the command line, at an elevated prompt, do the following:

diskpart – this is the disk management CLI

list volume – this is the diskpart command that gives you the list of volumes (not to be confused with list disk, which gives you the list of disks)

select volume <#> – this is how you select the volume that you want to work on, e.g. “select volume 2”

shrink querymax – this tells you how much space can be trimmed off the volume.  There are several factors that affect this, but the primary ones are how big the volume is to begin with and where on that volume the unmovable system files are located.

shrink desired=<size in MB> – this tells the volume to shrink by the amount of space given in MB, e.g. “shrink desired=102400” will shrink the volume by 100 GB.

shrink minimum=<size in MB> – this tells the volume to shrink by the amount specified, but only if it can shrink by at least that amount.

You can use minimum and desired together if you want.  You can also add “nowait” so that the prompt returns immediately and you don’t have to wait to see the results.  A complete example session is shown below.
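For example, assuming volume 2 is the one to shrink and querymax reported at least 100 GB available:

diskpart
list volume
select volume 2
shrink querymax
shrink desired=102400 minimum=51200 nowait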

Don’t install build 10547

I have been very happy with the Windows 10 builds.  Most of them have been “no visible change” for me.  Build 10547, however, was not like that.

I had heard of some issues with that build from one of my co-workers, but nothing specific enough to make me think it was anything other than an anomaly.  However, when I installed the build, I was very unhappy.

First, I rarely shut my machine down, but I had to do a hard shutdown 3 times after installing Build 10547, because the screen was black and only the mouse pointer was visible.  After I was able to get into the machine, the network adapter wasn’t working; it had worked fine on Windows 7, 8, 8.1 and every other build of Windows 10.  I also kept getting activation errors for Office programs.  After 2 hours, I finally had to revert to the previous build.

Build 10550 is supposed to have addressed some of these issues.  Cross your fingers.

Blog Status

When I got back on the Blog wagon, I intended to write more posts, and even came up with a list of topics that I wanted to write about.  So far I am not doing so well on that.  The last couple of posts that I wrote had to do with SharePoint, which wasn’t even on my list.  There has been a little bit of work on the topics I intended, but I haven’t done as much as I would like.  And I have definitely not done as much blogging as I intended (hoped) by this point.

A status update on my post count…  I am up to 159 (160 when this posts).  I get a few hits here and there on my random PowerShell contributions.  I still get a lot of hits on my DPM posts, even though I don’t actually use DPM anymore (we use a third-party hosted solution, not because there was any problem with DPM).  I also get hits on some of my Hyper-V posts and a few other storage posts.

My list included improving my management skills.  Maybe I should blog a little more about my experiences in that regard.

Security Token Service

Had a bit of a scare during a maintenance window.  I ran some updates on our SharePoint farm, and afterward one of the sites wasn’t coming up; it kept returning a 503 error.  When I checked the event log, I found this error message:

An exception occurred when trying to issue security token: The HTTP service located at http://localhost:32843/SecurityTokenServiceApplication/securitytoken.svc/actas is unavailable.  This could be because the service is too busy or because no endpoint was found listening at the specified address. Please ensure that the address is correct and try accessing the service again later..

A quick search led to this article, and when I checked the AppPools, they were all stopped.

https://social.technet.microsoft.com/Forums/sharepoint/en-US/1bb454e8-d395-4059-8bc8-ccc74f999659/the-security-token-service-is-not-issuing-tokens-the-service-could-be-malfunctioning-or-in-a-bad?forum=sharepointgeneralprevious

Starting the AppPools fixed the problem.
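For future reference, you can check for (and start) stopped AppPools from PowerShell instead of clicking through IIS Manager.  A rough sketch using the WebAdministration module; note that it starts every stopped pool, including any you stopped on purpose:

Import-Module WebAdministration
# Find any stopped application pools and start them
Get-ChildItem IIS:\AppPools | Where-Object { $_.State -eq "Stopped" } | ForEach-Object {
    Write-Host "Starting AppPool:" $_.Name
    Start-WebAppPool -Name $_.Name
    }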

SharePoint Documents are all checked out

I wanted to write a PowerShell script to check back in all the documents that had been checked out.  It turns out that is not as straightforward as I had hoped.

First search came up with this: Office Discarding Check Out Using PowerShell

Then, when I tried to run the command (from the server console, because it doesn’t appear that you can just install the SharePoint PowerShell module on your desktop), I received the error below:

Get-SPWeb : Cannot access the local farm. Verify that the local farm is properly configured, currently available, and that you have the appropriate permissions to access the database before trying again.

That led me to this site: http://www.sharepointassist.com/2010/01/29/the-local-farm-is-not-accessible-cmdlets-with-featuredependencyid-are-not-registered/comment-page-1/#comment-1566

and this answer:

run sharepoint management shell with the service account
$db = Get-SPDatabase | Where {$_.Name -eq "SharePoint_ConfigDB"}
Add-SPShellAdmin "domain\user_to_add" -database $db

So I figured out how to accomplish that and moved on to writing the script for checking in files.  I ran out of time to keep working on it; it will accomplish the task, but it isn’t as clean and efficient as I would like.

# Put your site root here
$TargetSite = "https://your.site.name/blah/blah/blah"
# The root folder that you want to start with
$TargetLibrary = "Shared Documents"

function LookForWorkFiles ($workFolderUrl){
    Write-Host $workFolderUrl
    # Get the folder and work through it (GetFolder expects a URL)
    $wf = $Site.GetFolder($workFolderUrl)
    # Get the files
    $FilesToProcess = $wf.Files

    Write-Host "How many files you ask?"
    Write-Host $FilesToProcess.Count
    If ($FilesToProcess.Count -le 0) {
        # If there aren't any files, move on to the subfolders
        Write-Host "No files in $workFolderUrl, checking for subfolders"
        }
    Else
        {
        # Discard the checkout on every checked-out file. Testing
        # CheckOutType first avoids the error you would otherwise
        # get for files that aren't checked out.
        foreach ($file in $FilesToProcess) {
            If ($file.CheckOutType -ne "None") {
                Write-Host $file.Name
                $file.UndoCheckOut()
                }
            }
        }
    Write-Host "Looking for Subfolders"
    foreach ($subFolder in $wf.SubFolders){
        # Recurse into each subfolder by URL
        LookForWorkFiles $subFolder.Url
        }
    }

Write-Host "Beginning Script"
Write-Host $TargetSite
Write-Host $TargetLibrary

# Connect to the site (run this from the SharePoint Management Shell)
$Site = Get-SPWeb $TargetSite
# Get the Document Folder
$Folder = $Site.GetFolder($TargetLibrary)

foreach ($SPF in $Folder.SubFolders){
    If ($SPF.Name -eq "Forms"){
        # The Forms directory is for SharePoint, not for file management
        Write-Host "Skipping the Forms Directory"
        }
    Else
        {
        Write-Host $SPF.Name
        LookForWorkFiles $SPF.Url
        }
    }

# Release the SPWeb object when done
$Site.Dispose()

With this script, it is easier to use file management tools to move the files out of SharePoint. 

SharePoint Recycle Bin

I mentioned earlier (here) that we are clearing some old data out of a SharePoint site.  Aside from the various client side issues with trying to do that, you also have to figure out how to clean up SharePoint.  SharePoint has a multi-tier Recycle Bin.  There is a pretty good article about how it works in SharePoint 2010 (How the Recycle Bin Works in SharePoint) and the concept is pretty much the same for SharePoint 2013.

If you go to Site Settings you see something similar to this:

[screenshot of the Site Settings page]

If you go down to “Site Collection Administration” and click on “Recycle Bin”, it takes you to the “End user Recycle Bin items” view.

You can change the view to “Deleted from end user Recycle Bin” and see the second level Recycle Bin contents:

[screenshot of the second level Recycle Bin contents]

Once it is gone from there, you can think about shrinking the database.
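If you would rather flush it from PowerShell, something like this should work from the SharePoint Management Shell.  The URL is a placeholder, and this permanently deletes everything in the site collection recycle bin, so be certain before you run it:

# Permanently empty the site collection recycle bin (both stages)
$site = Get-SPSite "https://your.site.name"
$site.RecycleBin.DeleteAll()
$site.Dispose()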

Here is where I learned what I just said:

http://sharepoint.stackexchange.com/questions/34980/claim-sql-server-space-after-content-deleted-from-sharepoint

I was asked to reduce a content DB recently; here’s what I found. Whilst the customer was great in deleting content from their sites, SharePoint has a 2-stage recycle bin, so whilst the items were deleted from the site-level recycle bin, there was still 30 days left on the site collection recycle bin. So please ensure you check that and flush it.

Yes, you’ll find that the DB will still be large at this stage. So, as described above, use SQL Management Studio and locate the content database in question; it won’t be the SharePoint configuration database as the example shows above, but rather WSS_Content_something in all likelihood.

After that you will find you are able to shrink the database file (mdf). BUT! Beware: chances are your database is set to full recovery mode, and when the shrink operation takes place all it will do is blow the transaction log out by as many GB as you have removed. So you might want to switch the DB into simple mode and then perform the shrink operation. I was lucky in that I had sufficient disk space when the shrink operation was taking place: removing 40GB blew the logs out 40GB, and once that was complete the mdf was then shrunk 40GB. After this I switched the DB to simple mode and shrank the log file, then changed it back to full.
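For reference, the recovery-model switch and shrink described above might look like this in PowerShell.  This is only a sketch: the server and content database names are placeholders, DBCC SHRINKFILE wants the logical file name (often, but not always, the same as the database name), and you should have a good backup first:

# Placeholders - substitute your own server and content database names
$server = "SQLSERVER\INSTANCE"
$db     = "WSS_Content_Something"

Import-Module SqlServer   # provides Invoke-Sqlcmd (older systems use the SQLPS module)

# Switch to simple recovery so the shrink doesn't blow out the transaction log
Invoke-Sqlcmd -ServerInstance $server -Query "ALTER DATABASE [$db] SET RECOVERY SIMPLE"

# Shrink the data file as far as it will go
Invoke-Sqlcmd -ServerInstance $server -Database $db -Query "DBCC SHRINKFILE (N'$db', 0)"

# Switch back to full recovery and take a fresh full backup afterwards
Invoke-Sqlcmd -ServerInstance $server -Query "ALTER DATABASE [$db] SET RECOVERY FULL"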

One word of warning: when you go to delete that data, if it is a lot of data, SharePoint becomes a bit less responsive.  You may want to do that outside of normal working hours.

Moving data out of SharePoint

We have a SharePoint site that we used to store some data.  It was used at remote locations to be able to “map a drive” and interact with data.  There is some old data in the site now, and we needed to move it off.  I hadn’t messed with it in a while and forgot several of the little issues that come up when you use this solution.

First, SharePoint has a file size limit and an item count limit.  I believe the file size limit is 50 MB by default, but I don’t remember for sure (SharePoint 2010 and 2013).  Also, on the client side there is a registry entry that you have to change in order to download files larger than 50 MB.  I found that answer here:

http://answers.microsoft.com/en-us/ie/forum/ie8-windows_xp/error-0x800700df-the-file-size-exceeds-the-limit/d208bba6-920c-4639-bd45-f345f462934f

FileSizeLimitInBytes is set to 5000000 which limits your download so just set it to maximum! (this is client side btw on windows 7)

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\WebClient\Parameters

  1. Right click on the FileSizeLimitInBytes and click Modify
  2. Click on Decimal
  3. In the Value data box, type 4294967295, and then click OK. Note this sets the maximum you can download from the Webdav to 4 gig at one time, I haven’t figured out how to make it unlimited so if you want to download more you need to split it up.

hopefully this helps for you guys!
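If you would rather set that from an elevated PowerShell prompt than from regedit, a quick sketch (the value has to be written as -1 because DWord values are signed 32-bit under the hood; -1 is the bit pattern 0xFFFFFFFF, i.e. 4294967295):

# Raise the WebDAV download limit to its ~4 GB maximum (elevated prompt required)
$key = "HKLM:\SYSTEM\CurrentControlSet\Services\WebClient\Parameters"
# -1 writes the DWord bit pattern 0xFFFFFFFF (4294967295)
Set-ItemProperty -Path $key -Name FileSizeLimitInBytes -Value -1 -Type DWord
# Restart the WebClient service so the change takes effect
Restart-Service WebClient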

The other issue is that PDFs tend to get checked out and stay checked out.  So when you are trying to move the files, you have to make sure they are all checked back in first.