archive-be.com » BE » B » BENNYMICHIELSEN.BE

  • WCF HTTPS And Request Entity Too Large – Benny Michielsen
    not solve the problem for me; I spent quite some time resolving the issue. It turns out I also had to modify the IIS settings of the website that was hosting the WCF service. The setting is called uploadReadAheadSize and can be found in the serverRuntime section below system.webServer; you can use the Configuration Editor feature of IIS Manager to modify it. Give it the same or a larger value than the one you specify in your WCF configuration.
    Author: BennyM. Posted on 22/03/2014. Categories: Software Development. Tags: HTTPS, IIS, wcf.
    3 thoughts on "WCF HTTPS And Request Entity Too Large":
    Marc Wils (26/03/2014 at 15:58): Extra background information: the uploadReadAheadSize modification is needed because the entire request entity body is preloaded during SSL negotiation. For HTTP requests the WCF config setting should be enough to get things working. See http://blogs.catapultsystems.com/rhutton/archive/2012/07/22/request-entity-is-too-large-over-ssl-in-iis-7.aspx and http://www.microsoft.com/technet/prodtechnol/WindowsServer2003/Library/IIS/7e0d74d3-ca01-4d36-8ac7-6b2ca03fd383.mspx?mfr=true
    Justin (29/03/2016 at 21:07): There is some evidence that using uploadReadAheadSize to fix this issue causes CPU spikes. Try this instead: http://stackoverflow.com/a/36292973/774600
    BennyM (29/03/2016 at 22:29): Thanks for the info.
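The setting described above lives under system.webServer in the site's configuration; a minimal sketch, assuming a 10 MB limit (the value shown is illustrative, not from the post):

```xml
<configuration>
  <system.webServer>
    <!-- Value in bytes; give it the same or a larger value than the
         message size limit in the WCF binding configuration. -->
    <serverRuntime uploadReadAheadSize="10485760" />
  </system.webServer>
</configuration>
```

The serverRuntime section is often locked at the server level, which is one reason to edit it through the IIS Manager Configuration Editor as the post suggests.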

    Original URL path: http://blog.bennymichielsen.be/2014/03/22/wcf-https-and-request-entity-too-large/ (2016-04-29)
    Open archived version from archive


  • Exposing iCal data in WebAPI – Benny Michielsen
    the iCal standard. The only WebAPI-specific code can be found in the constructor: there we add the mapping for the headers we want the formatter to be invoked for. After we add the formatter to the configuration object, it will be invoked automatically whenever a client says it accepts text/iCal. The current setup works fine in Fiddler or when you use a custom client (JavaScript or HttpClient). But for a true end-to-end sample I want to use Outlook to connect to my appointment service. Unfortunately, Outlook does not send an Accept header with text/iCal when it requests resources from an internet calendar, so we need to work around this problem. Here another extensibility point of ASP.NET Web API comes into play: MessageHandlers. MessageHandlers allow you to plug into the request processing pipeline at its lowest level; you can inspect the request and response messages and make changes. In this case we can inspect the user agent that is added to the request when Outlook contacts our service; when we find a match, we add an additional header to the incoming request. We also add this message handler to the configuration object. We now have everything in place to add an internet calendar in Outlook and view the appointments in our WebAPI: open Outlook, go to Calendar, in the ribbon click on Open Calendar and then From Internet, fill in the url of the AppointmentService in WebAPI (i.e. http://localhost:58250/api/appointments) and click Ok. You now have one AppointmentController serving JSON, XML and iCal. The complete source can be downloaded here.
    Author: BennyM. Posted on 11/11/2013. Categories: Software Development. Tags: iCal, outlook, WebAPI. 2 thoughts on "Exposing iCal data in WebAPI". Pingback: Modelbinding with Headers in ASP.NET WebAPI.
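A message handler of the kind described can be sketched as follows. This is a minimal sketch, not the post's actual code: the handler name and the exact user-agent match ("Outlook") are assumptions.

```csharp
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading;
using System.Threading.Tasks;

// Hypothetical sketch of the workaround: if the caller looks like Outlook,
// inject the Accept header it fails to send, so the iCal formatter kicks in.
public class OutlookCalendarHandler : DelegatingHandler
{
    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        // Outlook identifies itself through the User-Agent header.
        if (request.Headers.UserAgent.ToString().Contains("Outlook"))
        {
            request.Headers.Accept.Add(
                new MediaTypeWithQualityHeaderValue("text/iCal"));
        }
        return base.SendAsync(request, cancellationToken);
    }
}

// Registered on the configuration object, next to the formatter:
// config.MessageHandlers.Add(new OutlookCalendarHandler());
```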

    Original URL path: http://blog.bennymichielsen.be/2013/11/11/exposing-ical-data-in-webapi/ (2016-04-29)
    Open archived version from archive

  • March 2016 – Benny Michielsen
    so I was running the latest bits:
      sudo apt-get update
      sudo apt-get upgrade
    Next up: programming. I briefly looked at the options I had. I could program in C, Python, C++ and many others, but time was limited this weekend. I live and breathe .NET, so I narrowed the list down to .NET Core or Mono. I chose Mono because I had experimented with it years ago, and .NET Core has not yet reached a stable point; toying around with alpha and beta releases was not on my todo list for today. The default package repository has a very old version of Mono, so you need to follow the instructions on the Mono site: add the signing key and package repository to your system, then run sudo apt-get install mono-runtime.
      sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 3FA7E0328081BFF6A14DA29AA6A19B38D3D831EF
      echo "deb http://download.mono-project.com/repo/debian wheezy main" | sudo tee /etc/apt/sources.list.d/mono-xamarin.list
      sudo apt-get update
      sudo apt-get install mono-runtime
    I created a Hello World console application on my Windows 10 laptop and used PSFTP to copy the exe to the Pi. It just worked. Then the search was on to find a library to interface with the GPIO pins on the Pi. After looking around I found Raspberry Sharp IO; it had the API I wanted. You can use the event model to track changes in the GPIO pins, just what I needed:
      var pin2Sensor = ConnectorPin.P1Pin11.Input();
      GpioConnection connection = new GpioConnection(pin2Sensor);
      connection.PinStatusChanged += (sender, statusArgs) =>
          Console.WriteLine("Pin changed: " + statusArgs.Configuration.Name);
    Deploying this to the Pi, however, resulted in catastrophic failure and some weird error message:
      pi@raspberrypi ~/ticktack $ sudo mono TickTackConsole.exe
      Missing method .ctor in assembly /home/pi/ticktack/Raspberry.IO.GeneralPurpose.dll, type System.Runtime.CompilerServices.ExtensionAttribute
      Can't find custom attr constructor image: /home/pi/ticktack/Raspberry.IO.GeneralPurpose.dll mtoken: 0x0a000014
      Assertion at class.c:5597, condition `!mono_loader_get_last_error ()' not met
    The native stack trace and the gdb debug info that followed showed both threads blocked in __libc_waitpid and do_futex_wait.

    Original URL path: http://blog.bennymichielsen.be/2016/03/ (2016-04-29)
    Open archived version from archive

  • February 2016 – Benny Michielsen
    to write this down I had to reproduce the issue, so I removed the attribute again, but the build somehow kept succeeding. I had also deleted the failed builds, so I no longer have a log to illustrate the issue. Something has changed though, as the builds now take much longer than before, even without the force attribute; you can see that in the image below. Hopefully I don't run into it again. I added the force attribute after reading this GitHub issue.
    Author: BennyM. Posted on 29/02/2016. Categories: Software Development. Tags: build, vso. Leave a comment on "Running Gulp and NPM in VSO Build fails with Run as Administrator message".
    Session: Storm with HDInsight
    Two weeks ago I spoke at the Belgian Azure user group (AZUG). I gave an introduction on Storm with HDInsight; you can find a recording of the session on their website.
    "Second azugbe talk of tonight: Apache Storm on Azure Data Lake HDInsight by bennymichielsen pic.twitter.com/o0WzMbYgMh" — Tom Kerkhove (TomKerkhove), February 4, 2016
    My talk was divided in three parts: an introduction, a deep dive to give an overview of the main concepts of a Storm topology, and then several scenarios and how they can be solved. The deep dive centered around creating a Twitter battle where hashtags were counted and the results then displayed on a website. You can find the code on my GitHub account.
    Author: BennyM. Posted on 20/02/2016. Categories: Software Development. Tags: hdinsight, session, storm.

    Original URL path: http://blog.bennymichielsen.be/2016/02/ (2016-04-29)
    Open archived version from archive

  • August 2015 – Benny Michielsen
    84 GB of event storage. The default value is 1 TU. In the picture below you can see that I had one cloud service pushing messages to an event hub until 10:00. I then scaled the service out to 20 instances. This resulted in about twice the number of messages being sent, from 200k to 400k, not really what you would expect. I was also getting more errors from time to time: the event hub was sending back server-busy messages. At about 10:30 I increased the TUs from 1 to 3; this not only stopped the errors from occurring but further increased the throughput from 400k to over 1 million messages received on the event hub per 5 minutes.
    Author: BennyM. Posted on 11/08/2015, updated 12/08/2015. Categories: Software Development. Tags: azure, cloud, event hub. 3 Comments on "Scaling an Azure Event Hub: Throughput units".
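For context, the sending side of a load test like this can be sketched with the Service Bus SDK of that era. The connection string, hub name and payload below are placeholders, not the post's code:

```csharp
using System.Text;
using Microsoft.ServiceBus.Messaging; // WindowsAzure.ServiceBus NuGet package

class Sender
{
    static void Main()
    {
        // Hypothetical connection string and event hub name.
        var client = EventHubClient.CreateFromConnectionString(
            "Endpoint=sb://mynamespace.servicebus.windows.net/;SharedAccessKeyName=send;SharedAccessKey=...",
            "myeventhub");

        // Each Send counts against the ingress quota of the provisioned
        // throughput units; exceeding it yields server-busy errors.
        client.Send(new EventData(Encoding.UTF8.GetBytes("{\"value\":42}")));
    }
}
```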

    Original URL path: http://blog.bennymichielsen.be/2015/08/ (2016-04-29)
    Open archived version from archive

  • July 2015 – Benny Michielsen
    desired output. I also tested what happens when a file is in use: an exception. So make sure the application that creates the files you want to copy releases any locks. Wildcards can be used in the filename and the filter, so you can copy all txt files in a folder with *.txt. Unfortunately, wildcards and partition placeholders cannot be combined. So if your files are all stored in one folder, but each file has the time in its filename (myFile-2015-07-01.txt), you can't create a filter with the dynamic partitions (myFile-{Year}-{Month}-{Day}.txt). It's only possible to use the partitionedBy section in the folder structure, as shown above. If you think this would be a nice feature, go vote here.
    The price of the current setup is determined by a couple of things. First, we have a low-frequency activity: an activity that runs daily or less. The first 5 are free, so we have 25 activities remaining. The pricing of an activity also depends on where it runs: on-premises or in the cloud. I'm assuming here it's an on-premises activity, since the files are not located in Azure; I've asked around whether this assumption is correct but don't have a response yet. The price of an on-premises activity is 0.5586 per activity, so that would mean almost 14 for this daily snapshot each month (25 × 0.5586 ≈ 13.97). If we modified everything to run hourly, we'd have to pay 554.80 per month. You can find more info on the pricing on their website.
    In this scenario I've demonstrated how to get started with Azure Data Factory. The real power, however, lies in the transformation steps which you can add…
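The partitionedBy folder structure referred to above looks roughly like this in a (v1) Data Factory dataset definition; the share name, filter and slice variable are illustrative, not taken from the post:

```json
{
  "typeProperties": {
    "folderPath": "share/{Year}/{Month}/{Day}",
    "fileFilter": "*.txt",
    "partitionedBy": [
      { "name": "Year",  "value": { "type": "DateTime", "date": "SliceStart", "format": "yyyy" } },
      { "name": "Month", "value": { "type": "DateTime", "date": "SliceStart", "format": "MM" } },
      { "name": "Day",   "value": { "type": "DateTime", "date": "SliceStart", "format": "dd" } }
    ]
  }
}
```

Note that the placeholders may only appear in folderPath, which is exactly the limitation the post describes for time-stamped filenames.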

    Original URL path: http://blog.bennymichielsen.be/2015/07/ (2016-04-29)
    Open archived version from archive

  • June 2015 – Benny Michielsen
    could create another cloud service which uses the .NET SDK to read the data. There are two ways to implement this: either with the EventHubReceiver class or with the EventProcessorHost. I'll try these out in a future post. For now I wanted to use something else: HDInsight Storm. HDInsight is Hadoop as PaaS. Storm is a computation system to process streams of data and, as advertised on their website, should be able to process millions of tuples per second. Sounds like a good way to handle the 100 million e-mail addresses I have waiting. In order to use Storm you need to understand only a few concepts:
    Tuple: a piece of information that needs to be processed.
    Spout: reads from a stream and returns tuples.
    Bolt: processes tuples and can forward or produce new tuples.
    Topology: links spouts and bolts together.
    This is a very rudimentary explanation, but it should be enough for this post. In order to write and publish a topology in Visual Studio you should have the Azure SDK and the HDInsight tools for Visual Studio. Let's look at the different components we need. The spout will be reading from our event hub. Microsoft has already written a spout which you can use; however, it is written in Java. Storm is not bound to one language (a core feature), and with HDInsight you can have hybrid topologies: any work you have already invested in Storm can be reused, and Java and C# spouts and bolts can even work together. Let's look at the configuration that we need to get our spout up and running:
      TopologyBuilder topologyBuilder = new TopologyBuilder("EmailProcessingTopology");
      int partitionCount = Properties.Settings.Default.EventHubPartitionCount;
      JavaComponentConstructor constructor = JavaComponentConstructor.CreateFromClojureExpr(
          String.Format(@"(com.microsoft.eventhubs.spout.EventHubSpout. (com.microsoft.eventhubs.spout.EventHubSpoutConfig. ""{0}"" ""{1}"" ""{2}"" ""{3}"" {4}))",
              Properties.Settings.Default.EventHubPolicyName,
              Properties.Settings.Default.EventHubPolicyKey,
              Properties.Settings.Default.EventHubNamespace,
              Properties.Settings.Default.EventHubName,
              partitionCount));
      topologyBuilder.SetJavaSpout("EventHubSpout", constructor, partitionCount);
    This code can be found in the samples of the HDInsight team and is pretty straightforward: create an event hub spout and add it to the topology. The partitionCount indicates how many executors and tasks there should be, and it's suggested that this should match the number of partitions of your event hub. It gets more interesting in the first bolt. A bolt is a class that implements ISCPBolt. This interface has one method, Execute, which receives a tuple:
      public void Execute(SCPTuple tuple)
      {
          string emailAddress = (string)tuple.GetValue(0);
          if (!string.IsNullOrWhiteSpace(emailAddress))
          {
              JObject eventData = JObject.Parse(emailAddress);
              try
              {
                  var address = new System.Net.Mail.MailAddress((string)eventData["address"]);
                  var leakedIn = (string)eventData["leak"];
                  var breachedAccount = new BreachedAccount(address, leakedIn);
                  var retrieveOperation = TableOperation.Retrieve<Device>(breachedAccount.PartitionKey, breachedAccount.RowKey);
                  var retrievedResult = table.Execute(retrieveOperation);
                  var existingAccount = (Device)retrievedResult.Result;
                  if (existingAccount == null)
                  {
                      TableOperation insertOperation = TableOperation.Insert(new BreachedAccount(address, leakedIn));
                      table.Execute(insertOperation);
                      ctx.Emit(Constants.DEFAULT_STREAM_ID, new List<SCPTuple> { tuple }, new Values(address.Address));
                  }
                  else
                  {
                      existingAccount.BreachedIn = breachedAccount.BreachedIn;
                      TableOperation insertOperation = TableOperation.Replace(existingAccount);
                      table.Execute(…

    Original URL path: http://blog.bennymichielsen.be/2015/06/ (2016-04-29)
    Open archived version from archive

  • May 2015 – Benny Michielsen
    the HTTP protocol. It comes down to a lot of text being sent back and forth. When you navigate to a website with your browser, a GET request is sent to a certain url. When you fill in a form on a website, your browser normally performs a POST request. I can, for example, surf to the website tweakers.net. You can see everything that happens behind the scenes by pressing the F12 key in your browser. What then appears is the developer console, which developers use when they build a website or have to troubleshoot a problem; you can also use it yourself to learn how it all works. In the screenshot you can see the technical information at the bottom. The first request my browser makes is a GET request to the url tweakers.net. The computer the website runs on receives this request and sends back a lot of text. Your browser then interprets it and you get to see a website. All this text is sent in a readable form, which in itself is not a problem. Sometimes, however, there is information you don't want to send as plain readable text, for example when you have to enter a password or credit card number. We can check this on the website as well. When you click on log in, you are sent to another page. In the address bar we can see that we are no longer using HTTP but HTTPS. The developers of the website have chosen to use HTTPS. The S stands for secure, and as long as it's there, the data you fill in and send is encrypted. The data the website sends to you is encrypted as well, so other people can no longer watch along. Is all your internet traffic simply out in the open, then? Actually yes, but when you're at home on your own network the chance that someone is watching is small. If, however, you're in a restaurant, a station, or another public place
    offering free WiFi, you're effectively in a potential jungle. With a tool such as Wireshark you can inspect all network traffic, wired and wireless. If you're willing to spend some money, you can also buy a WiFi Pineapple, which makes a man-in-the-middle attack child's play, certainly in this age of smartphones. Enough theory, let's take a look at some Flemish media websites. Knack: on the Knack site there is a log-in ("Aanmelden") link at the top. When you click through, you get a popup asking for your credentials. At first sight it doesn't work over HTTPS; we have to go to the developer tools of the browser to find out. There we see, fortunately, that the content of this popup is loaded over HTTPS. When entering a username and password, everything is also neatly sent over HTTPS. The password isn't even sent as plain text. Interesting: Knack's computers receive an MD5 hash of my…
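The GET request described above can also be made from code; a minimal C# sketch, using the example site from the post:

```csharp
using System;
using System.Net.Http;

class Get
{
    static void Main()
    {
        // The same kind of GET request a browser sends; because the url
        // uses https://, the whole exchange is encrypted in transit.
        using (var client = new HttpClient())
        {
            var response = client.GetAsync("https://tweakers.net/").Result;
            Console.WriteLine((int)response.StatusCode + " " + response.ReasonPhrase);
        }
    }
}
```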

    Original URL path: http://blog.bennymichielsen.be/2015/05/ (2016-04-29)
    Open archived version from archive