  • build – Benny Michielsen
    … had to be run as Administrator. Long story short: I added the force attribute and suddenly the issue was resolved. Since I wanted to write this down, I had to reproduce the issue, so I removed the attribute again, but the build somehow kept building. I had also deleted the failed builds, so I no longer have a log to illustrate the issue. Something has changed though, as the builds now take much longer than before, even without the force attribute. You can see that in the image below. Hopefully I don't run into it again. I added the force attribute after reading this GitHub issue.
    Author: BennyM | Posted on 29/02/2016 | Categories: Software Development | Tags: build, vso | Leave a comment on "Running Gulp and NPM in VSO Build fails with Run as Administrator message"

    Original URL path: http://blog.bennymichielsen.be/tag/build/ (2016-04-29)


  • vso – Benny Michielsen

    Original URL path: http://blog.bennymichielsen.be/tag/vso/ (2016-04-29)

  • Session: Storm with HDInsight – Benny Michielsen
    … divided in three parts: an introduction, a deep dive to give an overview of the main concepts of a Storm topology, and then several scenarios and how they can be solved. The deep dive centered around creating a Twitter battle where hashtags were counted and the results then displayed on a website. You can find the code on my GitHub account.
    Author: BennyM | Posted on 20/02/2016 | Categories: Software Development | Tags: hdinsight, session, storm
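    The Twitter-battle demo itself lives on the author's GitHub account; as an illustration only, a minimal hashtag-counting bolt in SCP.NET could look like the sketch below. The class name, field layout, and the assumption that the upstream spout emits one hashtag per tuple are mine, not the actual demo code.

        // Illustrative sketch, not the talk's demo code: counts hashtags
        // and emits the running total for a downstream bolt to display.
        using System.Collections.Generic;
        using Microsoft.SCP;

        public class HashtagCounterBolt : ISCPBolt
        {
            private readonly Context ctx;
            private readonly Dictionary<string, int> counts = new Dictionary<string, int>();

            public HashtagCounterBolt(Context ctx)
            {
                this.ctx = ctx;
            }

            public void Execute(SCPTuple tuple)
            {
                // Assumes the upstream spout emits one hashtag per tuple.
                string hashtag = tuple.GetString(0);
                int current;
                counts.TryGetValue(hashtag, out current);
                counts[hashtag] = current + 1;

                // Forward the running total downstream and ack the input.
                ctx.Emit(new Values(hashtag, counts[hashtag]));
                ctx.Ack(tuple);
            }
        }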

    Original URL path: http://blog.bennymichielsen.be/2016/02/20/session-storm-with-hdinsight/ (2016-04-29)

  • hdinsight – Benny Michielsen
        // … continuation of the first bolt's Execute method:
                    ctx.Emit(Constants.DEFAULT_STREAM_ID, new List<SCPTuple> { tuple }, new Values(address.Address));
                }
                else
                {
                    existingAccount.BreachedIn += breachedAccount.BreachedIn;
                    TableOperation insertOperation = TableOperation.Replace(existingAccount);
                    table.Execute(insertOperation);
                }
                this.ctx.Ack(tuple);
            }
            catch (Exception ex)
            {
                Context.Logger.Error(eventData.ToString());
                Context.Logger.Error(ex.Message);
                this.ctx.Fail(tuple);
            }
        }
        else
        {
            Context.Logger.Info("empty address");
            this.ctx.Ack(tuple);
        }

    The tuple is the JSON object which was sent to the Azure event hub in my previous post. It contains the email address and the website which was breached. Table storage is queried to see if the email address is already present: if so, the entry is updated; otherwise the new account is added to the system. Only new accounts are emitted for further processing. The second bolt in the system will insert new accounts into a SQL database. Its Execute method is quite different:

        public void Execute(SCPTuple tuple)
        {
            // Check if the tuple is a tick tuple
            if (IsTickTuple(tuple))
            {
                if (DateTime.UtcNow - lastRun >= TimeSpan.FromSeconds(batchIntervalInSec))
                {
                    Context.Logger.Info("Time to purge");
                    FinishBatch();
                }
            }
            else
            {
                breachedAccounts.Add(tuple);
                if (breachedAccounts.Count >= batchSize)
                {
                    Context.Logger.Info("Max reached, time to purge");
                    FinishBatch();
                }
                else
                {
                    Context.Logger.Info("Not yet time to purge");
                }
            }
        }

    This bolt does not process every tuple it receives immediately. Instead it keeps a list of tuples and processes them when either a maximum count is reached or a specific amount of time has passed since the last purge. In the FinishBatch method a SqlBulkCopy is performed to insert the records in a SQL database.

        private void FinishBatch()
        {
            lastRun = DateTime.UtcNow;
            DataTable table = new DataTable();
            table.Columns.Add("Email");
            if (breachedAccounts.Any())
            {
                foreach (var emailAddress in breachedAccounts)
                {
                    var row = table.NewRow();
                    row["Email"] = emailAddress.GetString(0);
                    table.Rows.Add(row);
                }
                try
                {
                    using (SqlBulkCopy bulkCopy = new SqlBulkCopy(connection))
                    {
                        bulkCopy.DestinationTableName = "Emails";
                        bulkCopy.ColumnMappings.Add("Email", "Email");
                        bulkCopy.WriteToServer(table);
                    }
                    foreach (var emailAddress in breachedAccounts)
                    {
                        ctx.Ack(emailAddress);
                    }
                }
                catch (Exception ex)
                {
                    foreach (var emailAddress in breachedAccounts)
                    {
                        ctx.Fail(emailAddress);
                    }
                    Context.Logger.Error(ex.Message);
                }
                finally
                {
                    breachedAccounts.Clear();
                }
            }
        }

    Both the table storage bolt and the SQL bulk copy bolt have lines with ctx.Ack, ctx.Fail or ctx.Emit. These are the places where you indicate whether processing of a tuple has succeeded or failed, or where you emit a new tuple to be processed further downstream. This also enables tuples to be replayed: if the SQL bulk copy fails, the tuple will be processed again at a later time. The event hub spout expects your bolts to ack.

        topologyBuilder.SetBolt(
                "TableStorageBolt",
                TableStorageBolt.Get,
                new Dictionary<string, List<string>>
                {
                    { Constants.DEFAULT_STREAM_ID, new List<string> { "address" } }
                },
                256,
                true)
            .DeclareCustomizedJavaSerializer(javaSerializerInfo)
            .shuffleGrouping("EventHubSpout");

        topologyBuilder.SetBolt(
                "SQLBolt",
                SQLStorageBolt.Get,
                new Dictionary<string, List<string>>(),
                2,
                true)
            .shuffleGrouping("TableStorageBolt", Constants.DEFAULT_STREAM_ID)
            .addConfigurations(new Dictionary<string, string>
            {
                { "topology.tick.tuple.freq.secs", "1" }
            });

        topologyBuilder.SetTopologyConfig(new Dictionary<string, string>
        {
            { "topology.workers", partitionCount.ToString() }
        });

    The two bolts also need to be added to the topology. The TableStorageBolt is configured with a name, a method which can be used to create instances, and a description of its output. In this case…
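    The second bolt's Execute relies on an IsTickTuple helper whose body the excerpt does not show. A minimal sketch, assuming SCP.NET's convention that tick tuples arrive on a dedicated system stream, could look like this (the stream-id check is my assumption, not code from the post):

        // Sketch only: the post never shows IsTickTuple. Assumes tick
        // tuples are identified by the system tick stream id.
        private static bool IsTickTuple(SCPTuple tuple)
        {
            return tuple.GetSourceStreamId().Equals(Constants.SYSTEM_TICK_STREAM_ID);
        }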

    Original URL path: http://blog.bennymichielsen.be/tag/hdinsight/ (2016-04-29)

  • session – Benny Michielsen
    "Storm on Azure Data Lake (HDInsight) by @bennymichielsen" pic.twitter.com/o0WzMbYgMh (Tom Kerkhove, @TomKerkhove, February 4, 2016)
    My talk was divided in three parts: an introduction, a deep dive to give an overview of the main concepts of a Storm topology, and then several scenarios and how they can be solved. The deep dive centered around creating a Twitter battle where hashtags were counted and the results then displayed on a website. You can find the code on my GitHub account.
    Author: BennyM | Posted on 20/02/2016 | Categories: Software Development | Tags: hdinsight, session, storm | Leave a comment on "Session: Storm with HDInsight"

    Original URL path: http://blog.bennymichielsen.be/tag/session/ (2016-04-29)

  • storm – Benny Michielsen

    Original URL path: http://blog.bennymichielsen.be/tag/storm/ (2016-04-29)

  • Scaling an Azure Event Hub: Throughput units – Benny Michielsen
    … TU. In the picture below you can see that I had one cloud service pushing messages to an event hub until 10:00. I then scaled the service out to 20 instances. This resulted in about twice the amount of messages being sent, from 200k to 400k, not really what you would expect. I was also getting more errors from time to time: the event hub was sending back "server busy" messages. At about 10:30 I increased the TU from 1 to 3; this not only stopped the errors from occurring but further increased the throughput from 400k to over 1 million messages being received on the event hub per 5 minutes.
    Author: BennyM | Posted on 11/08/2015 (updated 12/08/2015) | Categories: Software Development | Tags: azure, cloud, event hub
    3 thoughts on "Scaling an Azure Event Hub: Throughput units"
    Pingback: The Morning Brew - Chris Alcock - The Morning Brew #1923
    Arthur says (12/08/2015 at 16:26): I am curious whether it is possible to bump the TU value up automatically/programmatically upon receiving several error messages.
    BennyM says (26/01/2016 at 22:45): I looked into this around the time you placed the comment. There seems to be no support for this at the moment, not even in the alerts you can set.
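    Since throughput units could only be raised by hand at the time, the sending side has to absorb the "server busy" throttling itself. A minimal sketch with the Service Bus SDK of that era, using an exponential retry policy (the connection string, event hub name and retry values are placeholders, not taken from the post):

        // Sketch only: retries transient "server busy" throttling while
        // TUs are scaled up manually. All values below are placeholders.
        using System;
        using System.Text;
        using Microsoft.ServiceBus.Messaging;

        class Sender
        {
            static void Main()
            {
                var client = EventHubClient.CreateFromConnectionString(
                    "Endpoint=sb://<namespace>.servicebus.windows.net/;SharedAccessKeyName=<policy>;SharedAccessKey=<key>",
                    "<eventhub-name>");

                // Back off exponentially when the hub throttles the sender.
                client.RetryPolicy = new RetryExponential(
                    TimeSpan.FromSeconds(1),   // minimum back-off
                    TimeSpan.FromSeconds(30),  // maximum back-off
                    5);                        // maximum retry count

                client.Send(new EventData(Encoding.UTF8.GetBytes("hello")));
            }
        }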

    Original URL path: http://blog.bennymichielsen.be/2015/08/11/scaling-an-azure-event-hub-throughput-units/ (2016-04-29)

  • azure – Benny Michielsen
    … help and IntelliSense (Ctrl+Space). And with all this deployed, the folder is being synchronized online. This is probably one of the most expensive file copy samples. In a next post I'll investigate more features. One of the drawbacks of the current setup is that every time the sync runs all files are overwritten; all files are copied all the time, as-is.
    Sources: Enable your pipelines to work with on-premises data, Data Factory JSON Scripting Reference
    Author: BennyM | Posted on 30/06/2015 (updated 01/07/2015) | Categories: Software Development | Tags: azure, data factory | Leave a comment on "Copying files with Azure Data Factory"

    Using HDInsight Storm to process 100 million events
    In my last post I threw 100 million randomly generated email addresses at an Azure event hub and it didn't even budge. Now it's time to process the data and store it in a setup that resembles haveibeenpwned. With the events now in the event hub, I could create another cloud service which uses the .NET SDK to read the data. There are two ways to implement this: either with the EventHubReceiver class or with the EventProcessorHost. I'll try these out in a future post. For now I wanted to use something else: HDInsight Storm. HDInsight is Hadoop as PaaS. Storm is a computation system to process streams of data and, as advertised on its website, should be able to process millions of tuples per second. Sounds like a good way to handle the 100 million e-mail addresses I have waiting. In order to use Storm you need to understand only a few concepts:
    Tuple: a piece of information that needs to be processed
    Spout: reads from a stream and returns tuples
    Bolt: processes tuples and can forward or produce new tuples
    Topology: links spouts and bolts together
    This is a very rudimentary explanation, but it should be enough for this post. In order to write and publish a topology in Visual Studio you should have the Azure SDK and the HDInsight tools for Visual Studio. Let's look at the different components we need. The spout will be reading from our event hub. Microsoft has already written a spout which you can use; however, it is written in Java. Storm is not bound to one language (a core feature), and with HDInsight you can have hybrid topologies: any work you have already invested in Storm can be reused, and Java and C# spouts and bolts can even work together. Let's look at the configuration that we need to get our spout up and running.

        TopologyBuilder topologyBuilder = new TopologyBuilder("EmailProcessingTopology");
        int partitionCount = Properties.Settings.Default.EventHubPartitionCount;
        JavaComponentConstructor constructor = JavaComponentConstructor.CreateFromClojureExpr(
            String.Format(@"(com.microsoft.eventhubs.spout.EventHubSpout. (com.microsoft.eventhubs.spout.EventHubSpoutConfig. ""{0}"" ""{1}"" ""{2}"" ""{3}"" {4}))",
                Properties.Settings.Default.EventHubPolicyName,
                Properties.Settings.Default.EventHubPolicyKey,
                Properties.Settings.Default.EventHubNamespace,
                Properties.Settings.Default.EventHubName,
                partitionCount));
        topologyBuilder.SetJavaSpout("EventHubSpout", constructor, partitionCount);

    This code can be found in the samples of the HDInsight team and is pretty straightforward: create an event hub spout and add it to the topology. The partitionCount indicates how many executors and tasks there should be, and it's suggested that this should match the amount of partitions of your event hub. It gets more interesting in the first bolt. A bolt is a class that implements ISCPBolt. This interface has one method, Execute, which receives a tuple.

        public void Execute(SCPTuple tuple)
        {
            string emailAddress = (string)tuple.GetValue(0);
            if (!string.IsNullOrWhiteSpace(emailAddress))
            {
                JObject eventData = JObject.Parse(emailAddress);
                try
                {
                    var address = new System.Net.Mail.MailAddress((string)eventData["address"]);
                    var leakedIn = (string)eventData["leak"];
                    var breachedAccount = new BreachedAccount(address, leakedIn);
                    var retrieveOperation = TableOperation.Retrieve<Device>(breachedAccount.PartitionKey, breachedAccount.RowKey);
                    var retrievedResult = table.Execute(retrieveOperation);
                    var existingAccount = (Device)retrievedResult.Result;
                    if (existingAccount == null)
                    {
                        TableOperation insertOperation = TableOperation.Insert(new BreachedAccount(address, leakedIn));
                        table.Execute(insertOperation);
                        ctx.Emit(Constants.DEFAULT_STREAM_ID, new List<SCPTuple> { tuple }, new Values(address.Address));
                    }
                    else
                    {
                        existingAccount.BreachedIn += breachedAccount.BreachedIn;
                        TableOperation insertOperation = TableOperation.Replace(existingAccount);
                        table.Execute(insertOperation);
                    }
                    this.ctx.Ack(tuple);
                }
                catch (Exception ex)
                {
                    Context.Logger.Error(eventData.ToString());
                    Context.Logger.Error(ex.Message);
                    this.ctx.Fail(tuple);
                }
            }
            else
            {
                Context.Logger.Info("empty address");
                this.ctx.Ack(tuple);
            }
        }

    The tuple is the JSON object which was sent to the Azure event hub in my previous post. It contains the email address and the website which was breached. Table storage is queried to see if the email address is already present: if so, the entry is updated; otherwise the new account is added to the system. Only new accounts are emitted for further processing. The second bolt in the system will insert new accounts into a SQL database…
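    The BreachedAccount entity the bolt constructs is never defined in the excerpt. A minimal sketch of what it could look like, assuming a TableEntity keyed on the mail address; everything beyond the constructor shape and the BreachedIn property used above is my assumption:

        // Sketch only: the post's excerpt never defines BreachedAccount.
        // Keying on host/address is an assumption for illustration.
        using Microsoft.WindowsAzure.Storage.Table;

        public class BreachedAccount : TableEntity
        {
            public BreachedAccount() { }

            public BreachedAccount(System.Net.Mail.MailAddress address, string leakedIn)
            {
                // Partition on the mail host so one domain's rows stay together.
                PartitionKey = address.Host;
                RowKey = address.Address;
                BreachedIn = leakedIn;
            }

            // Site(s) where this address was found in a breach.
            public string BreachedIn { get; set; }
        }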

    Original URL path: http://blog.bennymichielsen.be/tag/azure/ (2016-04-29)