Creating a ScriptComponentProject



This snippet shows a ScriptComponentProject that connects to an Azure DataMarket OData feed, retrieves the data in pages, and emits each record as a new row in the component's output buffer.

published 09.09.12

last updated 09.09.12
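The heart of the snippet is a paging loop over the feed: repeatedly `Skip`/`Take` a fixed page size until an empty page comes back. Stripped of the SSIS plumbing, the pattern looks like the sketch below, where an in-memory list stands in for the remote `CityCrime` entity set (against a real WCF Data Services context, each iteration would issue a `$skip`/`$top` request instead):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class PagingSketch
{
    static void Main()
    {
        // Stand-in for the remote CityCrime entity set.
        List<int> source = Enumerable.Range(1, 250).ToList();

        int skip = 0;
        const int take = 100;   // page size, as in the snippet
        int total = 0;

        while (true)
        {
            // One page of results.
            var page = source.Skip(skip).Take(take).ToList();
            if (page.Count == 0)
            {
                break;          // an empty page means the feed is exhausted
            }

            total += page.Count;  // here the real code emits output rows
            skip += take;
        }

        Console.WriteLine(total);   // 250
    }
}
```

With 250 source rows and a page size of 100, the loop runs three full or partial pages and stops on the fourth, empty one.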



<Biml xmlns="http://schemas.varigence.com/biml.xsd">
        <ScriptComponentProject ProjectCoreName="SC_2bca370105ff4883a705860bac68cfba" Name="LoadCrimeDataFeed">
            <AssemblyReferences>
                <AssemblyReference AssemblyPath="Microsoft.SqlServer.DTSPipelineWrap.dll" />
                <AssemblyReference AssemblyPath="Microsoft.SqlServer.DTSRuntimeWrap.dll" />
                <AssemblyReference AssemblyPath="Microsoft.SqlServer.PipelineHost.dll" />
                <AssemblyReference AssemblyPath="Microsoft.SqlServer.TxScript.dll" />
                <AssemblyReference AssemblyPath="System.dll" />
                <AssemblyReference AssemblyPath="System.Data.dll" />
                <AssemblyReference AssemblyPath="System.Windows.Forms.dll" />
                <AssemblyReference AssemblyPath="System.Xml.dll" />
                <AssemblyReference AssemblyPath="System.Core, Version=, Culture=neutral, PublicKeyToken=b77a5c561934e089, processorArchitecture=MSIL, Custom=null" />
                <AssemblyReference AssemblyPath="System.Data.Services, Version=, Culture=neutral, PublicKeyToken=b77a5c561934e089, processorArchitecture=MSIL, Custom=null" />
                <AssemblyReference AssemblyPath="System.Data.DataSetExtensions, Version=, Culture=neutral, PublicKeyToken=b77a5c561934e089, processorArchitecture=MSIL, Custom=null" />
                <AssemblyReference AssemblyPath="System.Data.Services.Client, Version=, Culture=neutral, PublicKeyToken=b77a5c561934e089, processorArchitecture=MSIL, Custom=null" />
            </AssemblyReferences>
            <OutputBuffers>
                <OutputBuffer Name="ODataOutput" IsSynchronous="false">
                    <Columns>
                        <Column Name="ROWID" DataType="Int32" />
                        <Column Name="State" DataType="String" Length="20" />
                        <Column Name="City" DataType="String" Length="50" />
                        <Column Name="Year" DataType="Int32" />
                        <Column Name="Population" DataType="Int32" />
                        <Column Name="PropertyCrime" DataType="Int32" />
                        <Column Name="Burglary" DataType="Int32" />
                        <Column Name="LarcenyTheft" DataType="Int32" />
                        <Column Name="MotorVehicleTheft" DataType="Int32" />
                        <Column Name="Arson" DataType="Int32" />
                    </Columns>
                </OutputBuffer>
            </OutputBuffers>
            <Files>
                <File Path="AssemblyInfo.cs">
using System.Reflection;
using System.Runtime.CompilerServices;

// General information about an assembly is controlled through the following
// set of attributes. Change these attribute values to modify the information
// associated with an assembly.
[assembly: AssemblyTitle("SC_2bca370105ff4883a705860bac68cfba.csproj")]
[assembly: AssemblyDescription("")]
[assembly: AssemblyConfiguration("")]
[assembly: AssemblyCompany("Varigence")]
[assembly: AssemblyProduct("SC_2bca370105ff4883a705860bac68cfba.csproj")]
[assembly: AssemblyCopyright("Copyright © Varigence 2011")]
[assembly: AssemblyTrademark("")]
[assembly: AssemblyCulture("")]

// Version information for an assembly consists of the following four values:
//      Major Version
//      Minor Version
//      Build Number
//      Revision
// You can specify all the values or you can default the Revision and Build Numbers
// by using the '*' as shown below:
[assembly: AssemblyVersion("1.0.*")]
                </File>
                <File Path="main.cs">
/* Microsoft SQL Server Integration Services Script Component
 * Write scripts using Microsoft Visual C# 2008.
 * ScriptMain is the entry point class of the script. */
using System;
using System.Data;
using System.Linq;
using System.Net;
using Microsoft.SqlServer.Dts.Pipeline.Wrapper;
using Microsoft.SqlServer.Dts.Runtime.Wrapper;

public class ScriptMain : UserComponent
{
    public override void PreExecute()
    {
        base.PreExecute();
    }

    public override void PostExecute()
    {
        base.PostExecute();
    }

    public override void CreateNewOutputRows()
    {
        var context = InitializeDataSource();

        // Page through the feed 100 rows at a time until an empty page is returned.
        int skip = 0;
        const int take = 100;
        while (true)
        {
            // Materialize the page once so it is not enumerated twice.
            var rows = context.CityCrime.Skip(skip).Take(take).ToList();
            if (rows.Count == 0)
            {
                break;
            }

            foreach (var row in rows)
            {
                // A new output row must be added before its columns are assigned.
                this.ODataOutputBuffer.AddRow();
                this.ODataOutputBuffer.Arson = row.Arson;
                this.ODataOutputBuffer.Burglary = row.Burglary;
                this.ODataOutputBuffer.City = row.City;
                this.ODataOutputBuffer.LarcenyTheft = row.LarcenyTheft;
                this.ODataOutputBuffer.MotorVehicleTheft = row.MotorVehicleTheft;
                this.ODataOutputBuffer.Population = row.Population;
                this.ODataOutputBuffer.PropertyCrime = row.PropertyCrime;
                this.ODataOutputBuffer.ROWID = row.ROWID;
                this.ODataOutputBuffer.State = row.State;
                this.ODataOutputBuffer.Year = row.Year;
            }

            skip += take;
        }
    }

    private datagovCrimesContainer InitializeDataSource()
    {
        // The DataMarket service root URI was left blank in the original snippet.
        var context = new datagovCrimesContainer(new Uri(""));
        context.Credentials = new NetworkCredential(Variables.UserName, Variables.AccountKey);
        return context;
    }
}
                </File>
            </Files>
            <ReadOnlyVariables>
                <Variable Namespace="User" DataType="String" VariableName="UserName" />
                <Variable Namespace="User" DataType="String" VariableName="AccountKey" />
            </ReadOnlyVariables>
        </ScriptComponentProject>
</Biml>
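The two `User` variables declared above are surfaced inside the script component as strongly typed `Variables.UserName` and `Variables.AccountKey` properties, which `InitializeDataSource` feeds into a `NetworkCredential`. Isolated from the SSIS runtime, that construction reduces to the sketch below; the literal values are placeholders, not real credentials:

```csharp
using System;
using System.Net;

class CredentialSketch
{
    static void Main()
    {
        // In the script component these come from Variables.UserName and
        // Variables.AccountKey (the SSIS variables declared above).
        string userName = "user@example.com";   // placeholder
        string accountKey = "my-account-key";   // placeholder

        // DataMarket authenticates with basic credentials: a user name
        // plus the account key acting as the password.
        var credentials = new NetworkCredential(userName, accountKey);

        Console.WriteLine(credentials.UserName);   // user@example.com
    }
}
```

Keeping the account key in an SSIS variable (rather than hard-coding it in the script) lets it be supplied per environment through package configurations.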




3:11pm 04.28.15

Hi Cell

Is there any way to dynamically set the input and output columns in an asynchronous script transformation component? We are developing a metadata driven solution where data is required to be deduplicated in a number of different ways, depending on use case, and duplicate rows need to be redirected to another destination. The metadata defines which columns to use as a key for deduplication, but not all the columns expected for the script component input buffer.

The generated packages could have derived columns, lookups, etc., in the data pipeline, and there could be any number of deduplication operations in one package, so it's not viable to define the entire set of input columns for each deduplication operation. It would be great if there were a flag, as in several other Biml components, that could be set to automatically pass through all upstream columns. Is it possible to iterate through a collection of pipeline columns, output columns from the previous component, or any other way of identifying and defining the input columns (which in my case would be the same as the script output columns)?

Any suggestions would be very much appreciated.

Many thanks, Ian