archive-be.com » BE » B » BENNYMICHIELSEN.BE

  • nhibernate – Benny Michielsen
    …on 05/01/2009. Categories: Software Development. Tags: fluent nhibernate, nhibernate. 3 Comments on Mapping a list of components with Fluent NHibernate.

    Using Fluent NHibernate in Spring.Net
    In order to load the mappings you've written using Fluent NHibernate you need to call the extension method AddMappingsFromAssembly on the configuration. The LocalSessionFactoryObject defined in Spring.Net supports out of the box the loading of hbm files from an assembly or a location in the file system. Luckily this class can be extended with ease. The code below is all you need to use Fluent NHibernate with Spring.Net (any suggestions for a better name are welcome).

      using System;
      using System.Collections.Generic;
      using System.Linq;
      using System.Text;
      using Spring.Data.NHibernate;
      using FluentNHibernate;
      using System.Reflection;
      using NHibernate.Cfg;

      namespace SessionFactories
      {
          public class FluentNhibernateLocalSessionFactoryObject : LocalSessionFactoryObject
          {
              /// <summary>
              /// Sets the assemblies to load that contain fluent nhibernate mappings.
              /// </summary>
              /// <value>The mapping assemblies.</value>
              public string[] FluentNhibernateMappingAssemblies { get; set; }

              protected override void PostProcessConfiguration(Configuration config)
              {
                  base.PostProcessConfiguration(config);
                  if (FluentNhibernateMappingAssemblies != null)
                  {
                      foreach (string assemblyName in FluentNhibernateMappingAssemblies)
                      {
                          config.AddMappingsFromAssembly(Assembly.Load(assemblyName));
                      }
                  }
              }
          }
      }

    Update 29 June: please check the comments, since a new version brought some changes to this blog post. Author BennyM. Posted on 04/01/2009. Categories: Software Development, Spring.NET. Tags: fluent nhibernate, nhibernate. 20 Comments on Using Fluent NHibernate in Spring.Net. (A hedged Spring.Net wiring sketch for this factory object follows this entry.)

    Using Spring.Net, SQLite and NHibernate
    I was planning to put a quick spike together on putting single sign on using CardSpace, OpenID and Windows Live into an application, to test it out for a project at work. Maybe it was because of the weekend, but I was quite enthusiastic and added several technologies to the sample which I had never used before. Six hours later there was still no single sign on, or even a fully working sample application to add the behaviour to. So instead of posting one post I'll have a mini series where SSO will actually be a side track. Getting the database set up was one of the first things I wanted to do. I didn't want to add a MS SQL database to the project, since that would require anyone who downloaded this project to have the database engine running. On various other blogs and .Net sites I've read there was talk about a lightweight alternative in the form of SQLite. The .Net provider can be downloaded here, and boy was I lucky: the day I wanted to try it out they released a new version. Their latest version includes designer support in Visual Studio. Halfway August the Spring.Net team had also released a new version (1.2 M1), so I grabbed that version, and NHibernate went 2.0 GA as well. Since the Spring.Net assemblies are strongly signed they expected SQLite 1.0.56 and not 1.0.58, which was the one I downloaded. To make use of the new dll's I added this to my configuration: db:provider id="DbProvider" provider=…

    Original URL path: http://blog.bennymichielsen.be/tag/nhibernate/ (2016-04-29)
    Open archived version from archive
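    For context, here is a hedged sketch of how the factory object above might be wired up in Spring.Net XML configuration. The object id, assembly name and mapping-assembly value are made-up placeholders; FluentNhibernateMappingAssemblies is the property defined in the class above, and DbProvider is a standard LocalSessionFactoryObject property.

      <!-- Illustrative wiring only: ids and assembly names are assumptions -->
      <object id="SessionFactory"
              type="SessionFactories.FluentNhibernateLocalSessionFactoryObject, MyApp.Data">
        <property name="DbProvider" ref="DbProvider"/>
        <property name="FluentNhibernateMappingAssemblies">
          <list>
            <value>MyApp.Mappings</value>
          </list>
        </property>
      </object>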


  • Partitioning and wildcards in an Azure Data Factory pipeline – Benny Michielsen
    …if your files are all being stored in one folder but each file has the time in its filename (myFile_2015_07_01.txt), you can't create a filter with the dynamic partitions (myFile_{Year}_{Month}_{Day}.txt). It's only possible to use the partitionedBy section in the folder structure, as shown above. If you think this is a nice feature, go vote here.

    The price of the current setup is determined by a couple of things. First, we have a low frequency activity; that's an activity that runs daily or less. The first 5 are free, so we have 25 activities remaining. The pricing of an activity is determined by the place where it occurs: on premise or in the cloud. I'm assuming here it's an on premise activity, since the files are not located in Azure (I've asked around if this assumption is correct but don't have a response yet). The pricing of an on premise activity is 0.5586 per activity, so that would mean almost 14 for this daily snapshot each month. If we modified everything to run hourly we'd have to pay 554.80 per month. You can find more info on the pricing on their website.

    In this scenario I've demonstrated how to get started with Azure Data Factory. The real power, however, lies in the transformation steps which you can add. Instead of doing a simple copy, the data can be read, combined and stored in many different forms. A topic for a future post.

    Upside: rich editor; fairly easy to get started; no custom application to write; on premise support via the Data Management Gateway; no firewall settings need to be changed.
    Downside: can get quite expensive.

    Author BennyM. Posted on 07/07/2015. (A hedged dataset sketch with partitionedBy in the folder path follows this entry.)

    Original URL path: http://blog.bennymichielsen.be/2015/07/07/partitioning-and-wildcards-in-an-azure-data-factory-pipeline/?replytocom=48318 (2016-04-29)
    Open archived version from archive
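    For context, a hedged sketch of what a Data Factory (v1) dataset with dynamic partitions in the folder path might look like; the dataset name, linked service and file name are assumptions. The point mirrors the excerpt: the {Year}/{Month}/{Day} tokens driven by partitionedBy can only appear in folderPath, not in fileName.

      {
        "name": "DailyLogFiles",
        "properties": {
          "type": "FileShare",
          "linkedServiceName": "OnPremisesFileServer",
          "typeProperties": {
            "folderPath": "logs/{Year}/{Month}/{Day}",
            "fileName": "myFile.txt",
            "partitionedBy": [
              { "name": "Year",  "value": { "type": "DateTime", "date": "SliceStart", "format": "yyyy" } },
              { "name": "Month", "value": { "type": "DateTime", "date": "SliceStart", "format": "MM" } },
              { "name": "Day",   "value": { "type": "DateTime", "date": "SliceStart", "format": "dd" } }
            ]
          },
          "external": true,
          "availability": { "frequency": "Day", "interval": 1 }
        }
      }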

  • Using HDInsight & Storm to process 100 million events – Benny Michielsen
    …
      if (breachedAccounts.Count >= batchSize)
      {
          Context.Logger.Info("Max reached, time to purge");
          FinishBatch();
      }
      else
      {
          Context.Logger.Info("Not yet time to purge");
      }

    This bolt does not process every tuple it receives immediately. Instead it keeps a list of tuples, ready for processing when either a maximum count is reached or a specific amount of time has passed since the last purge. In the FinishBatch method a SqlBulkCopy is performed to insert records in a SQL database.

      private void FinishBatch()
      {
          lastRun = DateTime.UtcNow;
          DataTable table = new DataTable();
          table.Columns.Add("Email");
          if (breachedAccounts.Any())
          {
              foreach (var emailAddress in breachedAccounts)
              {
                  var row = table.NewRow();
                  row["Email"] = emailAddress.GetString(0);
                  table.Rows.Add(row);
              }
              try
              {
                  using (SqlBulkCopy bulkCopy = new SqlBulkCopy(connection))
                  {
                      bulkCopy.DestinationTableName = "Emails";
                      bulkCopy.ColumnMappings.Add("Email", "Email");
                      bulkCopy.WriteToServer(table);
                  }
                  foreach (var emailAddress in breachedAccounts)
                  {
                      ctx.Ack(emailAddress);
                  }
              }
              catch (Exception ex)
              {
                  foreach (var emailAddress in breachedAccounts)
                  {
                      ctx.Fail(emailAddress);
                  }
                  Context.Logger.Error(ex.Message);
              }
              finally
              {
                  breachedAccounts.Clear();
              }
          }
      }

    Both the table storage bolt and the SQL bulk copy bolt have lines with ctx.Ack, Fail or Emit. These are places where you can indicate whether processing of a tuple has succeeded, failed, or you are emitting a new tuple to be processed further downstream. It also enables you to replay tuples: if the SQL bulk copy fails, the tuple will be processed again at a later time. The azure table spout expects your bolts to ack.

      topologyBuilder.SetBolt(
              "TableStorageBolt",
              TableStorageBolt.Get,
              new Dictionary<string, List<string>>
              {
                  { Constants.DEFAULT_STREAM_ID, new List<string> { "address" } }
              },
              256,
              true)
          .DeclareCustomizedJavaSerializer(javaSerializerInfo)
          .shuffleGrouping("EventHubSpout");

      topologyBuilder.SetBolt(
              "SQLBolt",
              SQLStorageBolt.Get,
              new Dictionary<string, List<string>>(),
              2,
              true)
          .shuffleGrouping("TableStorageBolt", Constants.DEFAULT_STREAM_ID)
          .addConfigurations(new Dictionary<string, string>
          {
              { "topology.tick.tuple.freq.secs", "1" }
          });

      topologyBuilder.SetTopologyConfig(new Dictionary<string, string>
      {
          { "topology.workers", partitionCount.ToString() }
      });

    The two bolts also need to be added to the topology. The TableStorageBolt is configured with a name, a method which can be used to create instances, and a description of its output; in this case I'm emitting the email address to the default stream. The parallelism hint is also configured, as is the boolean flag which indicates this bolt supports acking. Since this bolt sits behind the Java spout, we also need to configure how the data of the spout can be deserialized. The bolt is also configured to use shuffleGrouping, meaning that the load will be spread evenly among all instances. The SQLBolt is configured accordingly, with the additional configuration to support time ticks.

    After I created an HDInsight Storm cluster on Azure I was able to publish the topology from within Visual Studio. The cluster itself was created in 15 minutes. It took quite some time to find the right parallelism for this topology, but with the code I've posted I got these results after one hour. Table storage is slow: the 2 SQL bolts are able to keep up with the 256 table storage bolts. After one hour my topology was able to process only 10 million records. Even though I was… (A sketch of the count-or-time purge pattern follows this entry.)

    Original URL path: http://blog.bennymichielsen.be/2015/06/08/using-hdinsight-storm-to-process-100-million-events/?replytocom=34522 (2016-04-29)
    Open archived version from archive
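    The buffering strategy the excerpt describes (flush on a maximum count or after a time interval) boils down to a small pattern. The sketch below is illustrative only, not the post's bolt: a generic batcher with assumed names and thresholds; in the real topology the flush delegate would be the SqlBulkCopy plus ack/fail logic from FinishBatch.

      using System;
      using System.Collections.Generic;

      // Illustrative count-or-time batcher; thresholds and names are assumptions.
      public class TimeOrCountBatcher<T>
      {
          private readonly List<T> buffer = new List<T>();
          private readonly int batchSize;
          private readonly TimeSpan flushInterval;
          private readonly Action<IReadOnlyList<T>> flush;
          private DateTime lastRun = DateTime.UtcNow;

          public TimeOrCountBatcher(int batchSize, TimeSpan flushInterval, Action<IReadOnlyList<T>> flush)
          {
              this.batchSize = batchSize;
              this.flushInterval = flushInterval;
              this.flush = flush;
          }

          public void Add(T item)
          {
              buffer.Add(item);
              // Purge when the batch is full or enough time has passed since the last run.
              if (buffer.Count >= batchSize || DateTime.UtcNow - lastRun > flushInterval)
              {
                  flush(buffer);        // e.g. SqlBulkCopy.WriteToServer + ack, as in FinishBatch
                  buffer.Clear();
                  lastRun = DateTime.UtcNow;
              }
          }
      }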


  • IIS – Benny Michielsen
    …binding configuration. You can find many topics on this on the internet. Below you can find a sample.

      <system.serviceModel>
        <bindings>
          <basicHttpBinding>
            <binding maxReceivedMessageSize="5242880">
              <readerQuotas />
            </binding>
          </basicHttpBinding>
        </bindings>
      </system.serviceModel>

    Unfortunately this did not solve the problem for me, and I spent quite some time to resolve the issue. Turns out I also had to modify the IIS settings of the website that was hosting the WCF service. The config setting is called uploadReadAheadSize and can be found in the serverRuntime section below system.webServer; you can use the configuration editor feature of IIS Manager to modify it. Give it the same or a larger value than you specify in your WCF configuration. (A hedged serverRuntime sketch follows this entry.) Author BennyM. Posted on 22/03/2014. Categories: Software Development. Tags: HTTPS, IIS, wcf. 3 Comments on WCF, HTTPS And Request Entity Too Large.

    Original URL path: http://blog.bennymichielsen.be/tag/iis/ (2016-04-29)
    Open archived version from archive
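    As a reference, a hedged sketch of the serverRuntime setting mentioned above as it could appear once applied at the site level. The value simply mirrors the maxReceivedMessageSize from the binding sample; depending on how the section is locked, it may need to be edited in applicationHost.config or through the IIS configuration editor rather than web.config.

      <!-- Illustrative only: uploadReadAheadSize raised to match the WCF binding size -->
      <system.webServer>
        <serverRuntime uploadReadAheadSize="5242880" />
      </system.webServer>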

  • wcf – Benny Michielsen
    …amount of channels, where x equals the SimplePool's size. So if the size of your pool is 5 and you request a 6th channel while the previous 5 are still busy doing their business, you'll get a blocking thread until one of the previous five is returned to the pool. This is good for when you want to limit the connectivity to your server, but don't forget that it will block the calling thread when you request more channels than there are available. The other pool mode is VariablePool. This is a pool that grows and shrinks depending on the load. So if you request 6 channels you'll get 6 channels, which will be available for future requests when returned to the pool. Channels which are no longer usable will be removed and new ones will be created when needed. (A small sketch of the blocking fixed-pool behaviour follows this entry.)

    I spent most of the time trying to find a good way around my dependency on ChannelFactory, since that class has a method CreateChannel which takes no arguments and the factory can be configured using the endpoint name in your system.serviceModel section; the interface which the class implements doesn't have this. The classes which populate a ChannelFactory from the app.config are marked internal, so you can't use them. The only solution I found that worked pretty ok was a wrapper interface which exposes the CreateChannel method; it helped me to test the code without a ChannelFactory instance. I did a lot of renaming in the codebase, but on the consuming end not much has changed. A SingleAction operating channel is still the easiest to configure.

      <object id="MyService" type="WCFChannelManager.ChannelManagerFactoryObject, Perponcher.WCFChannelManager">
        <property name="ChannelType" expression="T(Server.IService1, Common)"/>
        <property name="EndpointConfigurationName" value="MyEndpoint"/>
      </object>

    Creating a fixed pool is one extra line.

      <object id="MyService" type="WCFChannelManager.ChannelManagerFactoryObject, Perponcher.WCFChannelManager">
        <property name="ChannelType" expression="T(Server.IService1, Common)"/>
        <property name="EndpointConfigurationName" value="MyEndpoint"/>
        <property name="ChannelManagementMode" value="FixedPool"/>
      </object>

    A variable pool means just another value as ChannelManagementMode.

      <object id="MyService" type="WCFChannelManager.ChannelManagerFactoryObject, Perponcher.WCFChannelManager">
        <property name="ChannelType" expression="T(Server.IService1, Common)"/>
        <property name="EndpointConfigurationName" value="MyEndpoint"/>
        <property name="ChannelManagementMode" value="VariablePool"/>
      </object>

    If you want to use your own channel manager you can use the ProductTemplate to hook everything up. The sample below illustrates this; here the variable pool is configured via the template instead of using the ChannelManagementMode property of the FactoryObject.

      <object id="MyService" type="WCFChannelManager.ChannelManagerFactoryObject, Perponcher.WCFChannelManager">
        <property name="ChannelType" expression="T(Server.IService1, Common)"/>
        <property name="EndpointConfigurationName" value="MyEndpoint"/>
        <property name="ProductTemplate">
          <object>
            <property name="ChannelManager">
              <object type="WCFChannelManager.ChannelPoolManager&lt;Server.IService1&gt;, Perponcher.WCFChannelManager">
                <constructor-arg value="MyEndpoint"/>
                <constructor-arg>
                  <object type="WCFChannelManager.AutoSizePoolFactory, Perponcher.WCFChannelManager"/>
                </constructor-arg>
              </object>
            </property>
          </object>
        </property>
      </object>

    The same approach can be used to configure the ChannelFactory, e.g. for passing credentials.

      <object id="MyService" type="WCFChannelManager.ChannelManagerFactoryObject, Perponcher.WCFChannelManager">
        <property name="ChannelType" expression="T(Server.IService1, Common)"/>
        <property name="EndpointConfigurationName" value="MyEndpoint"/>…

    Original URL path: http://blog.bennymichielsen.be/tag/wcf/ (2016-04-29)
    Open archived version from archive
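    The blocking behaviour described for the fixed pool can be pictured with a few lines of C#. This is not the WCFChannelManager implementation, only a sketch of the semantics it describes: a semaphore caps how many channels are in circulation, and asking for one more than the pool size blocks until a channel is returned.

      using System;
      using System.Collections.Concurrent;
      using System.Threading;

      // Illustrative fixed-pool semantics only; not the library's code.
      public class FixedPool<T>
      {
          private readonly ConcurrentBag<T> items = new ConcurrentBag<T>();
          private readonly SemaphoreSlim slots;
          private readonly Func<T> create;

          public FixedPool(int size, Func<T> create)
          {
              slots = new SemaphoreSlim(size, size);  // at most 'size' channels handed out
              this.create = create;
          }

          public T Take()
          {
              slots.Wait();                           // blocks when the pool is exhausted
              return items.TryTake(out var item) ? item : create();
          }

          public void Return(T item)
          {
              items.Add(item);
              slots.Release();                        // frees a slot for a waiting caller
          }
      }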