Archive for February, 2021

Content databases store all content for a site collection. This includes site documents or files in document libraries, list data, Web Part properties, audit logs, and sandboxed solutions, in addition to user names and rights. All of the files that are stored for a specific site collection are located in one content database on only one server. A content database can be associated with more than one site collection.

Below are some of the basic tables within a content database and a very high-level overview of some of the relationships between them.

Features:

Table that holds information about all the activated features for each site collection or site.

Sites:

Table that holds information about all the site collections for this content database.

Webs:

Table that holds information about all the specific sites (webs) in each site collection.

UserInfo:

Table that holds information about all the users for each site collection.

Groups:

Table that holds information about all the SharePoint groups in each site collection.

Roles:

Table that holds information about all the SharePoint roles (permission levels) for each site.

AllLists:

Table that holds information about lists for each site.

GroupMembership:

Table that holds information about all the SharePoint group members.

AllUserData:

Table that holds information about all the list items for each list.

AllDocs:

Table that holds information about all the documents (and all list items) for each document library and list.

RoleAssignment:

Table that holds information about all the users or SharePoint groups that are assigned to roles.

SchedSubscriptions:

Table that holds information about all the scheduled subscriptions (alerts) for each user.

ImmedSubscriptions:

Table that holds information about all the immediate subscriptions (alerts) for each user.
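As a hedged sketch of how these tables relate (querying a live content database directly is unsupported by Microsoft, so run this only against a restored copy, and note that the column names shown are assumptions that can vary between SharePoint versions), a join between Webs and AllLists might look like this:

```sql
-- Sketch only: run against a RESTORED COPY of a content database,
-- never a live one; direct queries are unsupported by Microsoft.
-- Column names (Id, Title, FullUrl, tp_WebId, tp_Title) are assumptions
-- that may vary between SharePoint versions.
SELECT w.Title    AS WebTitle,
       w.FullUrl  AS WebUrl,
       l.tp_Title AS ListTitle
FROM dbo.Webs w
INNER JOIN dbo.AllLists l
    ON l.tp_WebId = w.Id;
```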

High Availability options in MSSQL Server

Posted: February 25, 2021 in MSSQL
  • Replication
  • Mirroring
  • Log Shipping
  • Clustering
  • AlwaysON

SQL Server Replication Overview

At a high level, replication involves a publisher and subscriber, where the publisher is the primary server and the subscriber is the target server. Replication’s main purpose is to copy and distribute data from one database to another. There are four types of replication that we will outline:

  • Snapshot replication
  • Transactional replication
  • Merge replication
  • Peer to Peer replication

Snapshot: Snapshot replication occurs when a snapshot is taken of the entire database and that snapshot is copied over to the subscriber. This is best used for data that has minimal changes and is used as an initial data set in some circumstances to start subsequent replication processes.

Transactional: Transactional replication begins with a snapshot of the primary database that is applied to the subscriber. Once the snapshot is in place all transactions that occur on the publisher will be propagated to the subscriber. This option provides the lowest latency.

Merge: Merge replication begins with a snapshot of the primary database that is applied to the subscriber. Changes made at the publisher and subscriber are tracked while offline. Once the publisher and subscriber are back online simultaneously, the subscriber synchronizes with the publisher and vice versa. This option could be best for employees with laptops that leave the office and need to sync their data when they are back in the office.

Peer to Peer: Peer to Peer replication can help scale out an application.  This is because as transactions occur they are executed on all of the nodes involved in replication in order to keep the data in sync in near real time.

Pros and Cons for SQL Server Replication
Pros:
  • Can replicate to multiple servers
  • Can access all databases being replicated
  • Replication can occur in both directions

Cons:
  • Manual failover
  • Snapshot can be time consuming if you have a VLDB
  • Data can get out of sync and will need to re-sync

SQL Server Database Mirroring Overview

Database Mirroring involves a principal server that includes the principal database and a mirror server that includes the mirrored database. The mirror database is restored from the principal with no recovery leaving the database inaccessible to the end users. Once mirroring is enabled, all new transactions from the principal will be copied to the mirror. The use of a witness server is also an option when using the high safety with automatic failover option. The witness server will enable the mirror server to act as a hot standby server. Failover with this option usually only takes seconds to complete. If the principal server was to go down the mirror server would automatically become the principal.
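As a rough sketch of how mirroring is wired up (all server, share, and database names below are placeholders), the mirror is seeded with NORECOVERY restores and then each side is pointed at the other's mirroring endpoint:

```sql
-- Sketch of setting up mirroring; names and paths are placeholders.
-- On the mirror server: restore the principal's backups WITH NORECOVERY,
-- which leaves the mirror database inaccessible to end users.
RESTORE DATABASE SalesDB FROM DISK = N'\\share\SalesDB.bak' WITH NORECOVERY;
RESTORE LOG SalesDB FROM DISK = N'\\share\SalesDB.trn' WITH NORECOVERY;

-- On the mirror server: point at the principal's endpoint.
ALTER DATABASE SalesDB SET PARTNER = N'TCP://PrincipalServer.contoso.com:5022';

-- On the principal server: point at the mirror's endpoint to start mirroring.
ALTER DATABASE SalesDB SET PARTNER = N'TCP://MirrorServer.contoso.com:5022';

-- Optionally, on the principal: add a witness for automatic failover.
ALTER DATABASE SalesDB SET WITNESS = N'TCP://WitnessServer.contoso.com:5022';
```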

Pros and Cons for Mirroring
Pros:
  • Automatic failover (with witness server)
  • Fairly easy to set up
  • Fast failover

Cons:
  • Limited to two servers
  • Mirrored database is set to restore mode (can’t access)
  • Deprecated in favor of AlwaysOn as of SQL Server 2012

SQL Server Log Shipping Overview

Log shipping involves one primary server, one monitor server (optional), and can involve multiple secondary servers. The secondary database(s) is restored from the primary database with no recovery leaving the database inaccessible to end users. The process of log shipping begins with the primary server taking a transaction log backup and moving the transaction log to a backup share on the secondary server by using the SQL Server Agent and job schedules at a set time interval. The secondary server will then restore the transaction log using the SQL Server Agent and job schedules at a set time interval. While it’s nice that log shipping supports multiple secondary servers, it’s probably the least used for HA because before the failover can occur, the secondary database must be brought fully up to date by manually applying unrestored log backups.
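The mechanics that log shipping automates can be sketched manually as follows (paths and names are placeholders; real log shipping schedules these steps through SQL Server Agent jobs):

```sql
-- Manual sketch of the backup/restore cycle that log shipping automates.
-- Server, share, and database names are placeholders.

-- On the primary: back up the transaction log to the backup share.
BACKUP LOG SalesDB TO DISK = N'\\backupshare\SalesDB_1300.trn';

-- On the secondary: restore the log, keeping the database restorable.
-- (WITH STANDBY instead of NORECOVERY would leave it read-only for reporting.)
RESTORE LOG SalesDB FROM DISK = N'\\backupshare\SalesDB_1300.trn' WITH NORECOVERY;
```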

Pros and Cons for Log Shipping
Pros:
  • Can log ship to multiple servers
  • Secondary database will be read only for reporting
  • Does not require SQL Server Enterprise

Cons:
  • Failover is only as good as your last log backup
  • Manual failover

SQL Server Clustering Overview

Clustering involves at least two servers and is more of a server-level high availability option compared to a database-level option. Clustering allows one physical server to take over the responsibilities of another physical server that has failed. This is crucial in environments that need close to 100% uptime. When a server’s resources fail, the other server will automatically pick up where the failed server left off, causing little or no downtime. The two types of clustering we will discuss are Active/Active and Active/Passive.

Active/Active: When running in Active/Active mode, SQL Server is actually running on both servers. If one of the SQL Servers fails, the other SQL Server will take over, meaning that two instances will be running on one server, which could potentially cause performance issues if not sized appropriately.

Active/Passive: When running in Active/Passive mode, SQL Server runs on one server while the other server waits in case of a failure. This is the most popular choice because it doesn’t affect performance; however, you will need a server just sitting there with nothing running on it which could be perceived as expensive.

Pros and Cons for Clustering
Pros:
  • Can cluster multiple servers
  • Automatic failover
  • Server level failover compared to DB level

Cons:
  • Complex setup
  • Risk of purchasing hardware that never gets used
  • Not necessarily data protection

SQL Server AlwaysON Overview

AlwaysON is a new feature shipping with SQL Server 2012 and is an alternative to database mirroring. AlwaysON uses Availability Groups, which are groups that contain selected databases that will fail over together if a failure should occur. Since AlwaysOn is such a new feature there is not a lot of production environment usage yet. I have installed and configured this option on a few test servers, however, and think it’s by far the coolest HA option to date.
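As an illustrative sketch (replica and database names are placeholders; the databases must already be in FULL recovery with a full backup taken, and a mirroring endpoint must be configured on each replica), an availability group can be created like this:

```sql
-- Minimal availability group sketch (SQL Server 2012+).
-- Replica and database names are placeholders; assumes FULL recovery,
-- an existing full backup, and endpoints configured on each replica.
CREATE AVAILABILITY GROUP SalesAG
FOR DATABASE SalesDB, OrdersDB
REPLICA ON
    N'SQLNODE1' WITH (
        ENDPOINT_URL = N'TCP://SQLNODE1.contoso.com:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE = AUTOMATIC),
    N'SQLNODE2' WITH (
        ENDPOINT_URL = N'TCP://SQLNODE2.contoso.com:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE = AUTOMATIC);
```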

Ref: https://www.mssqltips.com/sqlservertip/2482/sql-server-high-availability-options/

Few important SQL queries

Posted: February 24, 2021 in MSSQL

List all databases which are offline:

SELECT
'DB_NAME' = db.name,
'FILE_NAME' = mf.name,
'FILE_TYPE' = mf.type_desc,
'FILE_PATH' = mf.physical_name
FROM
sys.databases db
INNER JOIN sys.master_files mf
ON db.database_id = mf.database_id
WHERE
db.state = 6 -- OFFLINE

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

SELECT
m.physical_name + '\' + m.name AS [file_path]
FROM
sys.databases AS d
INNER JOIN sys.master_files AS m ON d.database_id = m.database_id
WHERE
d.state_desc = 'OFFLINE'
--AND m.type_desc = 'ROWS'
GROUP BY
m.physical_name + '\' + m.name

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

select * from sys.databases where state_desc = 'OFFLINE'

=========================================================================

List all Physical and Logical names of databases and their paths:

SELECT d.name DatabaseName, f.name LogicalName,
f.physical_name AS PhysicalName,
f.type_desc TypeofFile
FROM sys.master_files f
INNER JOIN sys.databases d ON d.database_id = f.database_id
GO

=========================================================================

Get a list of database files with size for all databases in SQL Server:

SELECT DB_NAME(database_id) AS database_name,
type_desc,
name AS FileName,
size/128.0 AS CurrentSizeMB
FROM sys.master_files
WHERE database_id > 6 AND type IN (0,1)

=========================================================================

6 Ways to Check the Size of a Database in SQL Server using T-SQL

The sp_spaceused Stored Procedure

This is a system stored procedure that displays the number of rows, disk space reserved, and disk space used by a table, indexed view, or Service Broker queue in the current database, or displays the disk space reserved and used by the whole database.

To use it, simply switch to the relevant database and execute the procedure. Like this:

USE WideWorldImporters;
EXEC sp_spaceused;

Result:

database_name       database_size  unallocated space
------------------  -------------  -----------------
WideWorldImporters  3172.00 MB     2511.76 MB       

1 row(s) returned

reserved   data       index_size  unused 
---------  ---------  ----------  -------
573688 KB  461728 KB  104120 KB   7840 KB

1 row(s) returned

This returns two result sets that provide the relevant information.

You can also provide an object name to return data on a specific object within the database. In this case, only one result set will be returned.

Example:

USE WideWorldImporters;
EXEC sp_spaceused N'Application.Cities';

Result:

name    rows                  reserved  data     index_size  unused
------  --------------------  --------  -------  ----------  ------
Cities  37940                 4880 KB   3960 KB  896 KB      24 KB

In this example we return information about the Cities table only.
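If the reported figures look stale, sp_spaceused also accepts an @updateusage argument that recalculates the usage data from the allocation pages before reporting (this can be slow on large tables):

```sql
USE WideWorldImporters;
-- Recalculate usage data before reporting; can be slow on large tables.
EXEC sp_spaceused N'Application.Cities', @updateusage = N'TRUE';
```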

The sp_helpdb Stored Procedure

Another system stored procedure is sp_helpdb.

Here’s an example of calling that:

EXEC sp_helpdb N'WideWorldImporters';

Result:

name          fileid  filename          filegroup  size        maxsize        growth    usage    
------------  ------  ----------------  ---------  ----------  -------------  --------  ---------
WWI_Primary   1       /data/WWI.mdf     PRIMARY    1048576 KB  Unlimited      65536 KB  data only
WWI_Log       2       /data/WWI.ldf     null       102400 KB   2147483648 KB  65536 KB  log only 
WWI_UserData  3       /data/WWI_UD.ndf  USERDATA   2097152 KB  Unlimited      65536 KB  data only

In this case, we pass the name of the database as an argument. We can also call sp_helpdb without providing an argument. If we do this, it will return information on all databases in the sys.databases catalog view.

The sp_databases Stored Procedure

Yet another option is the sp_databases system stored procedure. This stored procedure lists databases that either reside in an instance of SQL Server or are accessible through a database gateway.

Here’s how to execute it:

EXEC sp_databases;

Result:

DATABASE_NAME       DATABASE_SIZE  REMARKS
------------------  -------------  -------
master              6848           null   
model               16384          null   
msdb                15616          null   
Music               16384          null   
Nature              16384          null   
Solutions           47104          null   
tempdb              16384          null   
Test                16384          null   
WideWorldImporters  3248128        null   
world               16384          null   
WorldData           16384          null

The sys.master_files View

The above stored procedure queries the sys.master_files view. So an alternative is to go straight to the view and cherry pick your columns:

SELECT
    name,
    size,
    size * 8/1024 'Size (MB)',
    max_size
FROM sys.master_files
WHERE DB_NAME(database_id) = 'WideWorldImporters';

Result:

name          size    Size (MB)  max_size 
------------  ------  ---------  ---------
WWI_Primary   131072  1024       -1       
WWI_Log       12800   100        268435456
WWI_UserData  262144  2048       -1       

In this case we can see the size of each data file and log file, as they’re listed separately. You’ll also notice that I perform a calculation on the size column to convert the value into megabytes (MB).

The sys.database_files View

There’s also a system view called sys.database_files. We can use this view to return the same info as the previous example:

USE WideWorldImporters;
SELECT
    name,
    size,
    size * 8/1024 'Size (MB)',
    max_size
FROM sys.database_files;

Result:

name          size    Size (MB)  max_size 
------------  ------  ---------  ---------
WWI_Primary   131072  1024       -1       
WWI_Log       12800   100        268435456
WWI_UserData  262144  2048       -1       

Use a Window Function

One potential issue with the previous two examples is that they list out the size of each file separately. This could be seen as a positive or a negative depending on what you want to achieve.

It could also be argued that the first three solutions on this page are problematic, because they only provide the sum total of all files – they don’t list out each individual file along with its size.

So what if you want to see both the size of each individual file, and the total of all files for each database?

You could use the OVER clause to do exactly that.

Here’s an example:

SELECT
    d.name AS 'Database',
    m.name AS 'File',
    m.size,
    m.size * 8/1024 'Size (MB)',
    SUM(m.size * 8/1024) OVER (PARTITION BY d.name) AS 'Database Total',
    m.max_size
FROM sys.master_files m
INNER JOIN sys.databases d ON
d.database_id = m.database_id;

Result:

Database            File             Size (MB)  Database Total
------------------  ---------------  ---------  --------------
master              master           4          6             
master              mastlog          2          6             
model               modeldev         8          16            
model               modellog         8          16            
msdb                MSDBData         14         14            
msdb                MSDBLog          0          14            
Music               Music            8          16            
Music               Music_log        8          16            
Nature              Nature           8          16            
Nature              Nature_log       8          16            
Solutions           Solutions        8          46            
Solutions           Solutions_log    8          46            
Solutions           Solutions_dat_2  10         46            
Solutions           Solutions_dat_3  10         46            
Solutions           Solutions_log_2  10         46            
tempdb              tempdev          8          16            
tempdb              templog          8          16            
WideWorldImporters  WWI_Primary      1024       3172          
WideWorldImporters  WWI_Log          100        3172          
WideWorldImporters  WWI_UserData     2048       3172          
world               world            8          16            
world               world_log        8          16       

This lists out each database, the files for each database, the file size for each file, as well as the total of all files for each database. This requires that each database (and their total size) is listed multiple times (once for each file).

Ref: https://database.guide/6-ways-to-check-the-size-of-a-database-in-sql-server-using-t-sql/#:~:text=If%20you’re%20using%20a,and%20then%20click%20Disk%20Usage).

Enable continuous crawls is a crawl schedule option that is an alternative to incremental crawls. This option is new in SharePoint Server and applies only to content sources of type SharePoint Sites.

Continuous crawls crawl SharePoint Server sites frequently to help keep search results fresh. Like incremental crawls, a continuous crawl crawls content that was added, changed, or deleted since the last crawl. Unlike an incremental crawl, which starts at a particular time and repeats regularly at specified times after that, a continuous crawl automatically starts at predefined time intervals. The default interval for continuous crawls is every 15 minutes. Continuous crawls help ensure freshness of search results because the search index is kept up to date as the SharePoint Server content is crawled so frequently. Thus, continuous crawls are especially useful for crawling SharePoint Server content that is quickly changing.

A single continuous crawl includes all content sources in a Search service application for which continuous crawls are enabled. Similarly, the continuous crawl interval applies to all content sources in the Search service application for which continuous crawls are enabled.

You cannot run multiple full crawls or multiple incremental crawls for the same content source at the same time. However, multiple continuous crawls can run at the same time. Therefore, even if one continuous crawl is processing a large content update, another continuous crawl can start at the predefined time interval and crawl other updates. Continuous crawls of a particular content repository can also occur while a full or incremental crawl is in progress for the same repository.

A continuous crawl doesn’t process or retry items that repeatedly return errors. Such errors are retried during a “clean-up” incremental crawl, which automatically runs every four hours for content sources that have continuous crawl enabled. Items that continue to return errors during the incremental crawl will be retried during future incremental crawls, but will not be picked up by the continuous crawls until the errors are resolved.

You can set incremental crawl times on the Search_Service_Application_Name: Add/Edit Content Source page, but you can change the frequency interval for continuous crawls only by using Microsoft PowerShell.

To enable continuous crawls for an existing content source

  1. Verify that the user account that is performing this procedure is an administrator for the Search service application.
  2. In Central Administration, in the Application Management section, click Manage service applications.
  3. Click the Search service application.
  4. On the Search_Service_Application_Name: Search Administration page, in the Quick Launch, under Crawling, click Content Sources.
  5. On the Search_Service_Application_Name: Manage Content Sources page, click the SharePoint content source for which you want to enable continuous crawl.
  6. In the Crawl Schedules section, select Enable Continuous Crawls.
  7. Click OK.
  8. Verification: On the Search_Service_Application_Name: Manage Content Sources page, verify that the Status column has the status Crawling Continuous.

To enable continuous crawls for a new content source

  1. Verify that the user account that is performing this procedure is an administrator for the Search service application.
  2. In Central Administration, in the Application Management section, click Manage service applications.
  3. Click the Search service application.
  4. On the Search_Service_Application_Name: Search Administration page, in the Quick Launch, under Crawling, click Content Sources.
  5. On the Search_Service_Application_Name: Manage Content Sources page, click New Content Source.
  6. Create a content source of the type SharePoint Sites.
  • In the Name section, type a name in the Name field.
  • In the Content Source Type section, select SharePoint Sites.
  • In the Start Addresses section, type the start address or addresses.
  • In the Crawl Settings section, select the crawling behavior for all start addresses.
  • In the Crawl Schedules section, select Enable Continuous Crawls.
  7. Click OK.
  8. Verification: On the Search_Service_Application_Name: Manage Content Sources page, verify that the newly added content source appears and that the Status column has the status Crawling Continuous.

To disable continuous crawls for a content source

  1. Verify that the user account that is performing this procedure is an administrator for the Search service application.
  2. In Central Administration, in the Application Management section, click Manage service applications.
  3. Click the Search service application.
  4. On the Search_Service_Application_Name: Search Administration page, in the Quick Launch, under Crawling, click Content Sources.
  5. On the Search_Service_Application_Name: Manage Content Sources page, click the SharePoint content source for which you want to disable continuous crawls.
  6. In the Crawl Schedules section, select Enable Incremental Crawls. This disables continuous crawls.
  7. To confirm that you want to disable continuous crawls, click OK.
  8. Optional: click Edit schedule to change the schedule for incremental crawls, and then click OK.
  9. On the Search_Service_Application_Name: Edit Content Source page, click OK.
  10. Verification: On the Search_Service_Application_Name: Manage Content Sources page, verify that the Status column has changed to Idle. This might take some time, because all URLs that remain in the crawl queue are still crawled after you disable continuous crawls.

To disable continuous crawls for all content sources

  1. Verify that the user account that performs this procedure is an administrator for the Search service application.
  2. Start a SharePoint Management Shell on a server in the farm.
  3. At the Microsoft PowerShell command prompt, type the following commands:

$SSA = Get-SPEnterpriseSearchServiceApplication
$SPContentSources = $SSA | Get-SPEnterpriseSearchCrawlContentSource | Where-Object {$_.Type -eq "SharePoint"}
foreach ($cs in $SPContentSources)
{
    $cs.EnableContinuousCrawls = $false
    $cs.Update()
}

  4. Verification: On the Search_Service_Application_Name: Manage Content Sources page, verify that the Status column has changed to Idle for all content sources. This might take some time, because all URLs that remain in the crawl queue are still crawled after you disable continuous crawls.

To change the continuous crawl interval

  1. Verify that the user account that is performing this procedure is a member of the Farm Administrators group.
  2. Start a SharePoint Management Shell.
  3. At the Microsoft PowerShell command prompt, type the following commands:

$ssa = Get-SPEnterpriseSearchServiceApplication
$ssa.SetProperty("ContinuousCrawlInterval", n)

Where:

  • n is the regular interval in minutes at which you want continuous crawls to start. The default interval is 15 minutes. The shortest interval that you can set is 1 minute.

NOTE: If you reduce the interval, you increase the load on SharePoint Server and the crawler. Make sure that you plan and scale out for this increased consumption of resources accordingly.
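For example, an administrator who wants to reduce crawler load might instead lengthen the interval to 30 minutes (the value shown is illustrative):

```powershell
# Lengthen the continuous crawl interval to 30 minutes (illustrative value).
$ssa = Get-SPEnterpriseSearchServiceApplication
$ssa.SetProperty("ContinuousCrawlInterval", 30)
```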

In SharePoint 2010 we had two crawls available, and they were configurable on our Search Service Application:

  • Full: Crawl all content,
  • Incremental: As the name says, it crawls content that has been modified since the last crawl.

The disadvantage of these crawls is that, once one is launched, you cannot launch a second crawl in parallel on the same content source. Content changed in the meantime must therefore wait until the current crawl finishes (and another crawl runs) before it is integrated into the index and can be found via search.

An example :

  • An incremental crawl named ALFA is started and will take 50 minutes,
  • After 10 minutes of crawling a new document is added, so we need a second incremental crawl named BETA to get the document into the index.
  • This item will have to wait at least 40 minutes to be integrated into the index.

So we can’t keep the index updated with the latest changes, because each crawl introduces latency.

In most cases this behavior may be suitable for your clients, but for those who want to find their content immediately after its integration into SharePoint, there is now a new solution in SharePoint: “Continuous Crawl”.

The Continuous Crawl

To summarize: the “Continuous Crawl” is a type of crawl that aims to keep the index as current as possible.

It’s operation is simple: once activated, it will launch the crawl at regular intervals. The major difference with incremental crawl is that the crawl can run in parallel, and does not expect the previous crawl to complete prior the launch.

Important Points:

  • “Continuous Crawl” is only available for sources of content type “SharePoint Sites”
  • By default, a new crawl is run every 15 minutes, but the SharePoint administrator can change this interval using the PowerShell cmdlet Set-SPEnterpriseSearchCrawlContentSource,
  • Once started, a “Continuous Crawl” can’t be paused or stopped, you can just disable it.
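Continuous crawls can also be toggled from PowerShell instead of Central Administration. A hedged sketch, where "Local SharePoint sites" is a placeholder content source name:

```powershell
# Enable continuous crawls on a SharePoint-type content source.
# "Local SharePoint sites" is a placeholder content source name.
$ssa = Get-SPEnterpriseSearchServiceApplication
Set-SPEnterpriseSearchCrawlContentSource -SearchApplication $ssa `
    -Identity "Local SharePoint sites" -EnableContinuousCrawls $true
```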

If we take our example above with “Continuous Crawl”:

  • Our ALFA crawl starts and will take at least 50 minutes,
  • After 10 minutes of crawling, an item that was already crawled is modified and requires a new crawl,
  • Crawl “BETA” is launched,
  • The crawl “BETA” starts in (15 − 10) = 5 minutes,
  • Therefore this item only needs to wait about 5 minutes (instead of 50 minutes) to be integrated into the index.

1- How to Enable it?

In Central Administration, click on “Search Service Application“, and then in the menu, click on the “Content Sources“. 

Click on “New Content Source” in the menu

Choose “SharePoint Sites”

Select “Enable Continuous Crawls”

  • The content source has been created, so we can see the status as “Crawling Continuous”

 2 – How to disable it?

  • From the content source page, choose the “Enable Incremental Crawls” option. This will disable the continuous crawl.
  • Save changes.

 3 – How to see if it works ?

  • Click on your service application search then “Crawl Log” in the section “Diagnostics”.
  • Select your Content Source and click on “View crawl history”
  • Or via PowerShell, execute the following cmdlets:
  • $SearchSA = "Search Service"
    • Get-SPEnterpriseSearchCrawlContentSource -SearchApplication $SearchSA | select *

Impact on our Servers

The impact of a “Continuous Crawl” is the same as an “Incremental Crawl”.

When crawls execute in parallel, the “Continuous Crawl” operates within the parameters defined in the “Crawler Impact Rule”, which controls the maximum number of requests that can be executed against the server (default 8).

Note: this setting does not restrict the Content Processing component, only the rate at which links are added to the Crawl Queue.


Content Processing uses 3 threads per core by default (called Processing Flows). To restrict the impact of Content Processing, use PowerShell to set the NumberOfCssFeedersPerCPUForRegularCrawl property on the Search Service Application object.
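A hedged sketch of that change (the property name is taken from the referenced article; verify it on your farm version and test on a non-production farm first):

```powershell
# Reduce content processing flows from the default of 3 per core to 1.
# Property name per the referenced article; verify on your farm version.
$ssa = Get-SPEnterpriseSearchServiceApplication
$ssa.SetProperty("NumberOfCssFeedersPerCPUForRegularCrawl", 1)
$ssa.Update()
```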

Ref: http://blogs.technet.com/b/searchguys/archive/2013/02/19/content-processing-performance-scaling.aspx 

https://social.technet.microsoft.com/wiki/contents/articles/15571.sharepoint-2013-continuous-crawl-and-the-difference-between-incremental-and-continuous-crawl.aspx

While creating a new Secure Store Target Application ID, I saw the below error when I tried to populate people and groups.

Tried restarting the Secure Store service from Manage Services on the server; no luck.

Created a new Secure Store Service application and tried again; no luck.

When I googled it, many blogs said that an “Alternate Access Mapping” entry was needed, but that did not work for me.

Thinking logically about the error message, I searched for ldapmemberprovider entries in the Central Admin web.config file.

Noticed that those entries were not in place.

Made the correct entries and the issue was resolved.

Now I’m able to see the expected behavior as below.

https://docs.microsoft.com/en-US/troubleshoot/iis/http-status-code

https://docs.microsoft.com/en-us/iis/troubleshoot/performance-issues/troubleshooting-iis-performance-issues-or-application-errors-using-logparser

IIS 7 and later have an HTTP request-processing flow similar to that of IIS 6.0. The diagrams in this section provide an overview of an HTTP request in process.

The following list describes the request-processing flow that is shown in Figure 1:

  1. When a client browser initiates an HTTP request for a resource on the Web server, HTTP.sys intercepts the request.
  2. HTTP.sys contacts WAS to obtain information from the configuration store.
  3. WAS requests configuration information from the configuration store, applicationHost.config.
  4. The WWW Service receives configuration information, such as application pool and site configuration.
  5. The WWW Service uses the configuration information to configure HTTP.sys.
  6. WAS starts a worker process for the application pool to which the request was made.
  7. The worker process processes the request and returns a response to HTTP.sys.
  8. The client receives a response.
HTTP Strict Transport Security (HSTS)

HTTP Strict Transport Security (HSTS), specified in RFC 6797, allows a website to declare itself as a secure host and to inform browsers that it should be contacted only through HTTPS connections. IIS 10.0 Version 1709 introduces turn-key support for enabling HSTS without the need for error-prone URL rewrite rules.

Learn more: HSTS
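As a sketch of what the turn-key configuration looks like (the site name and max-age value below are placeholders), HSTS is enabled per site in applicationHost.config:

```xml
<!-- Sketch: site-level HSTS in applicationHost.config (IIS 10.0 Version 1709+).
     Site name and max-age are placeholder values. -->
<site name="Contoso" id="1">
  <hsts enabled="true"
        max-age="31536000"
        includeSubDomains="true"
        redirectHttpToHttps="true" />
</site>
```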

Container Enhancements

IIS 10.0 Version 1709 introduces improvements that allow you to run the IIS worker process (w3wp.exe) directly as well as changes to the Central Certificate Provider (CCS) that makes it more amenable for running in containers.

IISAdministration PowerShell cmdlets

IIS 10.0 introduced the IISAdministration PowerShell cmdlets. Version 1709 ships with iterative improvements and three new cmdlets: Get-IISSiteBinding, New-IISSiteBinding, and Remove-IISSiteBinding. Additionally, Microsoft has done work to ship the latest version of IISAdministration on the PowerShell Gallery, available for use with Windows Server 2012 and above.

Learn more: IIS Administration in the PowerShell Gallery
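A brief sketch of the three new cmdlets in use (“Contoso” and the binding are placeholder values):

```powershell
# List, add, then remove an HTTP binding on a site ("Contoso" is a placeholder).
Get-IISSiteBinding -Name "Contoso"
New-IISSiteBinding -Name "Contoso" -BindingInformation "*:8080:" -Protocol http
Remove-IISSiteBinding -Name "Contoso" -BindingInformation "*:8080:" -Protocol http
```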

Logging Enhancements

In IIS 10.0 Version 1709, Microsoft introduced new server variables for the Cryptographic Protocol, the Cipher algorithm, the Key Exchange Algorithm, and the Message Authentication Algorithm. These new variables are documented in the list of IIS Server Variables.

Ref: https://docs.microsoft.com/en-us/iis/get-started/whats-new-in-iis-10-version-1709/new-features-introduced-in-iis-10-1709