David Revoy (Author), 13 July - Reply: Thank you! It's good news that even when approached by a book company, you stand firm and use a FLOSS standard instead of a proprietary one. Many thanks for that! David Revoy (Author), 13 July - Reply: A big thank you! It's very interesting to see the industry working with free and open licenses. I hope all this helps you earn money from other works, or gain more patrons.
David Revoy (Author), 13 July - Reply: Thank you! Yes, I too hope it will bring more patrons to support my artwork :) Is there any chance of an English or German version of the comic? If life were a computer game, my French language skill would be low. I've always collected books, and I just love the way books feel and smell.
I already printed this PDF of yours in a copy shop, but I hope we will produce a proper print edition with the Krita Foundation. Scott Petrovic at Krita told me last week that he was interested in redoing the whole book in Scribus. It's still very fresh! Thanks for your support :) That's great :) Can't wait for a Spanish version. TheFaico did a great deal of work on the translation :) Feel free to give him feedback about it.
We naturally start by creating a new file that will be named photo X. This is handled by the ImageEncodingProperties class. In our case, we will write JPEG files and use the default values for the remaining properties. The final step is to ask the camera device to generate a snapshot. Associate this takePhoto function with the click event of the video element in the init function:
Yes, you will need these cool guys if you want to build great Windows 8 apps. And last but not least, the splash screen will now be much more attractive than the default square provided with the new default project template. You have probably started to play with your new little camera application.
Please note that there is also a higher-level object named CameraCaptureUI that could cover most of your use cases.
See you in the second tutorial, where we will learn how to record videos, work on the layout using the CSS Flexbox specification, and add nice feedback animations for the user with CSS3 animations. So, continue the series here: Working on the layout, adding video recording support and providing feedback to the user with CSS3 animations.
Open default. As you can see, we have the 3 parts of the targeted layout. The control bar will have to be placed vertically on the right. The overlaid div will be displayed horizontally at the bottom of the screen. And last but not least, we have to support all possible resolutions using fluid containers and best practices. We also need to style the default radio buttons, using the red color when checked and, more importantly, using some vector symbols in their content to visually indicate their function. If you want to know more about that and find out how to select the proper symbol in your code, you can read this article by Jonathan Antoine: Windows 8 Metro apps — a lot of icons are available out of the box!
In conclusion, our new layout will be generated by this CSS code, to be inserted into your default. Right-click on default. Feel free to change some Flexbox or Grid values to break the layout and learn those features. The code is very similar to the one already used for the Pictures folder.
Insert this code in the onactivated handler: Now, we need some simple logic to check the current radio button state to decide whether we need to call the code that creates a photo or a video. Again, the code is very similar to capturing a photo. VideoEncodingQuality offers several presets (Auto plus various fixed resolutions, etc.). Press F5 to test that your application works. To better understand what I mean, have a look at this short video: We first need an HTML container. This container should have an image tag. The source of the image tag will be provided dynamically from the snapshots taken while using the application.
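The mode check described above can be sketched as a small pure function. This is an illustrative sketch, not the article's actual code: the function name and the returned action names ("takePhoto", "startRecording", "stopRecording") are assumptions.

```javascript
// Hypothetical sketch of the radio-button mode check: given which mode is
// checked and whether a recording is in progress, decide which capture
// function the click handler should call.
function chooseCaptureAction(isPhotoChecked, isRecording) {
  // Photo mode: a tap always takes a snapshot.
  if (isPhotoChecked) {
    return "takePhoto";
  }
  // Video mode: the same tap toggles recording on and off.
  return isRecording ? "stopRecording" : "startRecording";
}
```

In the real app, the click handler would read the checked state of the radio buttons and dispatch to the matching function.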
We could then use a WinJS control supporting the binding mechanism for that. We will then have to provide a binding object containing a url property. It takes a URL to a thumbnail image as a parameter. The animation job will be done by this piece of CSS: The animation is very simple; only 2 keyframes are defined. The animation is launched by applying the slide class. Ok, the final step is now to call this previewSlide function and pass it a URL to an image. In the takePhoto function, replace the previous promise function with this new one:
So, let's tell the system that it can garbage-collect this memory as soon as possible. We now need to handle the video recording case. For that, replace the current stopRecording function with this one: Our camera application will be even more fun to use.
So continue the series here: Using the FlipView to navigate through the live preview and the recorded files. To do that, you need to use a JS function as a template renderer that will return the appropriate piece of HTML for each case. In our case, we need 3 different templates for our FlipView control: a live preview template, a photo template and a video template, each referenced through its corresponding "…Template" attribute. The videoTemplate simply uses the video tag instead of the image one and adds a layer on top of it with a play button. Both templates use the WinJS binding engine for their source properties.
We need to instantiate a new binding list that will serve our FlipView control. This binding list will contain 3 types of objects. Based on this type, our rendering function will choose the proper template to render and launch the appropriate code to initialize the various event handlers and so on. This is just to help us know that, for this object, we should render the live preview control. We now need the JS function acting as the item renderer: This function will be called for each object inserted into the binding list. Once this asynchronous operation is finished, we can associate some code with the newly generated HTML.
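The template-dispatch part of the renderer can be sketched as pure logic. This is an assumption-laden sketch: the type values and template names mirror the article's three cases but are not its exact identifiers.

```javascript
// Hypothetical item-renderer dispatch: pick a template name from the
// item's type. The three cases mirror the live preview / photo / video
// templates described in the article.
function pickTemplate(item) {
  switch (item.type) {
    case "live":  return "livePreviewTemplate"; // render the camera preview control
    case "image": return "photoTemplate";       // render an <img> bound to item.url
    case "video": return "videoTemplate";       // render a <video> plus a play-button layer
    default: throw new Error("Unknown item type: " + item.type);
  }
}
```

The real renderer would then clone the chosen WinJS template and wire up its events once the asynchronous rendering completes.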
As demonstrated in the video at the beginning of this article, if we slide from left to right, a new element will be displayed in the FlipView. We then need to add items to the list bound to the control. It adds a new JS object to the list with the path/URL of the photo or video just recorded, a type value (image or video) to help the renderer function choose the right template and, for videos, the URL of a thumbnail. We add them at the beginning of the list with the unshift function; use the push function instead to add the new element at the end of the list.
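The insertion logic above can be sketched with a plain array standing in for the WinJS binding list. The function name and the field names (type, url, thumbnail) are assumptions for illustration.

```javascript
// Hypothetical sketch: add a freshly captured item at the front of the
// media list, as the article does with unshift (push would append instead).
function addToMediaList(list, type, url, thumbnailUrl) {
  var item = {
    type: type,                       // "image" or "video", used by the renderer
    url: url,                         // path/URL of the recorded file
    thumbnail: thumbnailUrl || url    // videos carry a separate thumbnail URL
  };
  list.unshift(item);                 // newest capture goes first
  return list;
}

var list = [];
addToMediaList(list, "image", "photo1.jpg");
addToMediaList(list, "video", "clip1.mp4", "thumb1.jpg");
```

After these two calls, the video (added last) sits at the front of the list, which is the behavior the article relies on when sliding through the FlipView.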
We now need to call this function instead of calling previewSlide directly in takePhoto and stopRecording. Add this case in the function then: This function simply binds some code to the play button displayed in front of the video. We could also have used some pure CSS3 transitions or animation code for that. It generates some CSS3 on the fly for you. Press F5 to play with the application. It's starting to be really fun to play with!
To reproduce the problem, launch the application, take for instance 2 photos, 1 video, then 3 more photos. Navigate to the first element of the list by sliding from left to right a couple of times, then navigate back to the video element. You should normally see the play button but no thumbnail image anymore, just a black screen. So, what happened? The FlipView virtualizes its content: it loads and renders elements in batches of 3, which also means it ends up unloading some elements once the view window grows beyond 3. This means that once the template control is unloaded by the virtualization mechanism, the memory area dedicated to this blob URL can be garbage-collected.
This is an easy fix, but not a good practice for memory usage if your application makes a lot of calls to createObjectURL. The second option means that you will have to read the stream back from disk and pass it to the createObjectURL function every time the virtualization process instantiates the template control.
Using the first option should be fine for this tutorial. In conclusion, change all calls to createObjectURL to set the boolean to false. Re-launch the application with F5 and check that this boolean change has solved our problem. While navigating in the FlipView, the camera live preview video stream could be paused by the system.
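To make the one-time-only trade-off concrete, here is a toy model of the behavior discussed above. This is only loosely inspired by the createObjectURL semantics and is entirely an assumption for illustration; it is not the browser API.

```javascript
// Toy registry modeling the "one time only" flag: a one-time URL is revoked
// right after its first resolution, while a persistent URL stays valid until
// explicitly revoked (and therefore keeps memory alive).
function makeUrlRegistry() {
  var store = {}, next = 0;
  return {
    createObjectURL: function (blob, oneTimeOnly) {
      var url = "blob:" + (next++);
      store[url] = { blob: blob, oneTimeOnly: !!oneTimeOnly };
      return url;
    },
    resolve: function (url) {
      var entry = store[url];
      if (!entry) return null;                   // revoked: nothing left to read
      if (entry.oneTimeOnly) delete store[url];  // revoke right after first use
      return entry.blob;
    }
  };
}
```

In this model, a virtualized template that re-reads a one-time URL on re-instantiation gets back nothing, which is exactly the black-screen symptom described above.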
To fix that, in the startCamera function, add this line of code: For that, insert this function: First, you can see the video here: A very simple example: a simple fluid grid layout sample using fraction units. A simple transition demo: go play with transitions on JSFiddle. A very advanced scenario combining all the above specifications to build a slideshow application: SnapyX. This application will take some time to run the first time, as it creates a pre-filled DB with some images so you have a first sample slideshow to play with. W3C specification: IndexedDB.
Demo using the File APIs to load a file and apply a sepia filter, with or without workers: WebWorkers to filter an image with sepia demo. W3C specification: Workers. Very cool advanced demo: Browser Surface. W3C specification: Pointer events. The kind of code you can also experiment with and easily use on your own site. First of all, many touch technologies are evolving on the web — for browser support, you need to look at the iOS touch event model and the W3C mouse event model in addition to MSPointers, to support all browsers.
But of course, there could also be some cases where you want to handle touch differently from the default mouse behavior to provide a different UX. Moreover, thanks to multi-touch screens, you can easily let the user rotate, zoom or pan some elements. Still, you have some options: Get a first level of experience by using the Windows 8 Simulator that ships with the free Visual Studio Express development tools. Have a look at this video, also available in other formats at the end of the article. You can use BrowserStack for free for 3 months, courtesy of the Internet Explorer team on modern.IE.
If you have a touch screen, try to interact with the canvas to check the current behavior yourself: But using touch instead, it will only draw a single square at the exact position where you tap the canvas element. The classic use case is when you have a map control in your page. Sample 1: just after adding -ms-touch-action: none. Result: default browser panning disabled and MouseMove works, but with 1 finger only. But you will quickly ask yourself this question: why does this code only track 1 finger? Then, how do we handle multi-touch events?
You can now draw as many series of squares as there are touch points your screen supports! This means, for instance, that you can use your mouse to draw some lines at the same time as you are using your fingers to draw other lines. You need to replace the previous handler (the paint function) with this one: Still, there is a problem with this code. In our case, you need to test this: Beware that this only tells you whether the current browser supports MSPointer.
To test support for touch, you need to check msMaxTouchPoints. In conclusion, to have code supporting MSPointer in IE10 and falling back properly to mouse events in other browsers, you need code like this: Sample 3: feature-detecting msPointerEnabled to provide a fallback. Result: the full experience in IE10 and default mouse events in other browsers. David wrote a cool little library that lets you write code leveraging the Pointer Events specification. Check out his article to discover and understand how it works.
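The feature-detection fallback just described can be sketched as below. The navigator object is passed in as a parameter so the logic can run outside a browser; the returned structure is an assumption for illustration.

```javascript
// Sketch of the msPointerEnabled / msMaxTouchPoints detection described
// above: choose the unified MSPointer events in IE10, plain mouse events
// elsewhere.
function pickPointerEvents(nav) {
  if (nav.msPointerEnabled) {
    // IE10: one model for mouse, pen and touch; msMaxTouchPoints tells us
    // whether actual touch hardware is present.
    return { down: "MSPointerDown", move: "MSPointerMove", up: "MSPointerUp",
             touchCapable: nav.msMaxTouchPoints > 0 };
  }
  // Other browsers: fall back to default mouse events.
  return { down: "mousedown", move: "mousemove", up: "mouseup",
           touchCapable: false };
}
```

In a page, you would call this with the real navigator object and register the returned event names with addEventListener.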
Note that this polyfill will also be very useful to support older browsers with elegant fallbacks to mouse events. To get an idea of how to use this library, please have a look at this article: Creating an universal virtual touch joystick working for all Touch models thanks to Hand.JS, which shows you how to write a virtual touch joystick using pointer events.
Note that this object is currently specific to IE10 and not part of the W3C submission. The base concept is to first register for MSPointerDown. The MSGesture object will then take all the pointers submitted as input, apply a gesture recognizer on top of them, and provide some formatted data as output. The MSGesture object will do all the magic for you after that.
Once the element is held, we will add some corners to indicate to the user that this element is currently selected. The corners will be generated by dynamically creating 4 divs added on top of each corner of the image. Finally, some CSS tricks will use transformations and linear gradients in a smart way to obtain something like this:
If so, add the corners. If not, remove them. Try to just tap or mouse-click the element: nothing occurs. Release your finger: the corners disappear. This gives this code instead: Finally, if you want to scale, translate or rotate an element, you only need to write a very few lines of code. You first need to register for the MSGestureChange event. This event will send you, via the various attributes described in the MSGestureEvent object documentation (like rotation, scale, translationX, translationY), the transformations currently applied to your HTML element.
Even better, by default the MSGesture object provides an inertia algorithm for free. This means that you can take the HTML element and throw it across the screen using your fingers, and the animation will be handled for you. Lastly, to reflect these changes sent by MSGesture, you need to move the element accordingly.
The easiest way to do that is to apply a CSS transformation mapping the rotation, scale and translation details matching your fingers' gesture. Try to move and throw the image inside the black area with 1 or more fingers. Try also to scale or rotate the element with 2 or more fingers. The result is awesome, and the code is very simple, as all the complexity is handled natively by IE. You then have the opportunity to easily enhance the experience of your users in Internet Explorer. Some frameworks are using HTML5 as a base in order to simplify multi-device targeting.
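The mapping from gesture deltas to a CSS transform can be sketched as pure logic. The accumulation scheme below (summing translations and rotations, multiplying scales) and the field names are assumptions for illustration, not the MSGesture sample's actual code.

```javascript
// Accumulate MSGestureChange-style deltas into a running transform state.
function accumulateGesture(state, e) {
  return {
    x: state.x + e.translationX,      // pixels
    y: state.y + e.translationY,
    angle: state.angle + e.rotation,  // radians in the real event
    scale: state.scale * e.scale      // scale deltas compose multiplicatively
  };
}

// Turn the accumulated state into a CSS transform string that could be
// assigned to element.style.transform.
function toCssTransform(state) {
  return "translate(" + state.x + "px, " + state.y + "px) " +
         "rotate(" + state.angle + "rad) scale(" + state.scale + ")";
}
```

In a gesture-change handler, you would feed each event into accumulateGesture and write the resulting string to the element's style.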
The idea is to target the Pointer model, and the library will propagate the touch events to all the platform specifics. Once I had all the technical pieces in hand, I was looking for a great way to implement a virtual touch joystick in my game. On the other hand, virtual analog pads are often not very well placed. The idea was then to take his code and refactor the touch part to target the Pointer model instead of the original WebKit Touch approach.
We will then use Hand.JS in this article to build our touch joystick. This sample helps you track the various inputs on the screen. It tracks and follows the various fingers pressing the canvas element. Thanks to Hand.JS, write it once and it will run everywhere! If you have a touch screen, you can experience the same result by testing the page embedded in this iframe: You should then be able to track at least 1 pointer with your mouse.
Well, I think the code is pretty straightforward. This collection is indexed by the id of the pointers. The collection object is described in Collection. The idea is to touch anywhere on the left side of the screen. Moving your finger will update the virtual touch pad and will move a simple spaceship. Touching the right side of the screen will display some red circles and those circles will generate some bullets getting out of the spaceship.
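The id-indexed collection mentioned above can be sketched in a few lines. The method names (add, remove, item, count) are assumptions mirroring the description, not necessarily the sample's actual API.

```javascript
// Minimal collection indexed by pointer id: each active finger/mouse/pen
// pointer gets an entry keyed by its id, as in the tracking sample.
function Collection() {
  this._items = {};
  this._count = 0;
}
Collection.prototype.add = function (id, value) {
  if (!(id in this._items)) this._count++;
  this._items[id] = value;           // e.g. the last known position of this pointer
};
Collection.prototype.remove = function (id) {
  if (id in this._items) {
    delete this._items[id];          // pointer lifted: forget it
    this._count--;
  }
};
Collection.prototype.item = function (id) { return this._items[id]; };
Collection.prototype.count = function () { return this._count; };
```

A pointerdown handler would add the event's pointerId, pointermove would update it, and pointerup would remove it.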
Note: the iPad seems to suffer from an unknown bug which prevents this second iframe from working correctly. Open the sample directly in another tab to make it work on your iPad. All the code lives this time in TouchControl. In conclusion, thanks to the job done by Seb Lee-Delisle and David Catuhe, you now have all the pieces needed to implement your own virtual touch joypad for your HTML5 games. The result will work on all touch devices supporting HTML5! The idea was to rebuild the famous Star Wars intro with the 3D scrolling text and the slide-down effect, with a star destroyer ship coming at the end.
The coolest part was added at the end of the text scrolling. The complete demo takes some time to run. For instance, try the English version or the Chinese version. Hardware acceleration is definitely great for that! The HTML associated with this sequence is very, very simple:
The timings are the ones currently set for the global sequence. You can see, for instance, that the final fading only occurs at the very end. You can also see that the final black fade-out is simply done with a black overlay DIV going from opacity 0 to 1. This specification is currently only supported by IE10 in the desktop world. But the experience is a bit better in IE10, as we have more control over the various resolutions thanks to the viewport rule. You have a 3-month offer thanks to our modern.IE website. You should have a look at it! I hope that some of you will enjoy this funky experiment.
However, you have a few alternative solutions: You will find the meaning of these values in this article: Guidelines for Building Touch-friendly Sites. Quite simply because we currently fall into that very last-resort case. It will only tell you whether the current browser supports MSPointers.
Twelve exactly: Pointers will aggregate those common properties and expose them in a similar way to the mouse events. No, really, I insist. I'm going to follow this thread and, while I'm at it, get a few things off my chest. The code should be self-explanatory. TechDirt is an excellent tech blog that doesn't mince its words. But copyright isn't the only issue.
Will it be playable, or will the gameplay suffer too much? To answer that question, I often use my own experience based on what I know and what worked well during my own tests. In the meantime, there were some obvious facts. We also know that combining SVG and Canvas is a good idea to write games that scale across devices, but this could also impact performance.
Moreover, even if GPUs and hardware acceleration are available on mobile, their hardware architectures differ a lot from the PC, and this also greatly impacts performance. There are dozens of scenarios like that to address and to be aware of while writing HTML5 games for mobile. But in which proportions? With my friend David Catuhe, we then decided to measure these various scenarios and build a benchmark framework to have a better idea of what to pay attention to. It took us time to validate it against tools like the ones that ship with the Windows Performance Toolkit, for instance.
But this will be an important article to read, as we need to rely on the stability of the framework for all the future benchmarks we will build on top of it. This article will illustrate how to use this tool and will concentrate on the number of sprites you can use in your game while maintaining a good frame rate on all platforms. We will also discover interesting details on how GPUs are actually being used during these scenarios. During my various discussions with game developers, the same list of recurring questions kept coming up:
Can I only use a small canvas, or can I scale up to a higher resolution? We will then use the famous EaselJS framework from the CreateJS suite, as I know it runs everywhere and has already been optimized for performance. The various benchmarks are described like the simplified code below so that they can be integrated into the HTML5 Potatoes Gaming Bench framework:
You will need to write equivalent code to build your own benchmarks that will be monitored by the framework.
The key point is to add your benchmark definition into the monitored collection via the registerBench function. Thanks to modern.IE, you can obtain a 3-month trial. A very useful tool you should have a look at. You can also run this benchmark in full screen or in a separate window via this link: prefixed sprites numbers series benchmark. At the end, you will have a summary of all benchmarks with a score. The score is simply the number of frames that your device was able to render in 20 seconds, so the best logical score is bounded by the 60 Hz refresh rate. It can sometimes occur that the tick calls us back just a bit above 16 ms, which is why, in some cases, you will get results like in the following screenshot:
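The scoring rule described above reduces to simple arithmetic: the score is the frame count over the run, so the average FPS is frames divided by duration, and a 60 Hz tick over a 20 s run caps the ideal score at 60 × 20 = 1,200 frames. A minimal sketch (function name is an assumption):

```javascript
// Sketch of the benchmark scoring rule: score = frames rendered during the
// run; average FPS follows directly from the run duration.
function benchScore(frameCount, durationSeconds) {
  return {
    score: frameCount,                        // raw score reported at the end
    avgFps: frameCount / durationSeconds      // derived average frame rate
  };
}
```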
The good news is that all the modern desktop browsers now have some really good hardware acceleration layers in place. Chrome and Firefox 20 each get a good overall score. The most difficult test for Firefox is handling the sprites with shadows (an average of 6 fps vs. 12 and 27 fps for IE10 and Chrome).
Apart from that, it has almost exactly the same scores as IE10. So, what should we conclude at this stage? In this case, you just have to benchmark IE10 to determine your asset limits. But if we think about the web in general, we see that all browsers can easily handle a large number of sprites at 60 fps on my desktop machine. But it already gives some interesting data.
There is a 5X to 10X performance difference between a mobile and a desktop device. So the next step is to find the optimum number of sprites to maintain 60 fps for a good desktop experience, and 30 fps for a relatively good mobile experience. You can also run this benchmark in full screen or in a separate window via this link: launch 30 and 60 fps target series. This time, the score provided at the end is the optimal number of sprites to maintain 30 or 60 fps. It uses Hand.JS (more details here: Creating an universal virtual touch joystick working for all Touch models thanks to Hand.JS).
I was also curious about changing another parameter to confirm some of my thoughts. Indeed, all modern browsers now heavily use GPUs to offload part of the job that needs to be done by the layout engine. I therefore needed the same machine with 2 GPUs available; everything else (CPU, memory, hard drive, screen resolution, etc.) should remain the same. Using the Intel HD GPU, the Vaio can display a certain number of sprites at 30 fps and fewer at 60 fps. Using the nVidia GT GPU, the very same Vaio can display significantly more sprites at both targets. Conclusion: you have 3 options here to build a cross-platform game running fine everywhere.
A second option is then to build 2 versions of the game (like having 2 versions of your website): 1 for mobile and 1 for desktop. You will just adjust the graphical complexity based on the performance of each platform.
Or you can also take the optimal number on mobile to target 30 fps, and you will be sure to run at 60 fps on desktop. Lastly, the third and last option is to build a game engine that will itself dynamically adjust the complexity of the graphics based on the performance detected — something that some game studios frequently do on PC, for instance. It requires more work, for sure, and you also need to decide which kinds of assets will be displayed or not in your game without impacting the global gameplay.
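The dynamic-adjustment option can be sketched as a simple feedback rule: measure the FPS, then nudge the sprite count toward the target. The thresholds (±10% tolerance) and the step size are assumptions for illustration, not values from the article.

```javascript
// Hypothetical adaptive-quality rule: drop sprites when the measured FPS
// falls clearly below target, add some when there is clear headroom.
function adjustSpriteCount(current, measuredFps, targetFps, step) {
  step = step || 10;                      // sprites added/removed per adjustment
  if (measuredFps < targetFps * 0.9) {
    return Math.max(1, current - step);   // too slow: shed some load
  }
  if (measuredFps > targetFps * 1.1) {
    return current + step;                // headroom: raise the complexity
  }
  return current;                         // close enough: keep steady
}
```

A real engine would call this periodically (say, once per second) and cap the count at the asset budget decided by the designers.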
I was convinced, on my side, that increasing the resolution would surely lower the average FPS, even just to display some animated sprites. You can also run this benchmark in full screen or in a separate window via this link: launch various canvas resolutions series. On my machine, the various resolutions had zero impact on the average framerate! This is also confirmed on my mobile WP8 devices and on Xbox. Only Chrome has a lower average FPS, and only at one resolution, for an unknown reason.
This just means that, for sprite animations and with this specific benchmark, the resolution has no impact. The GPU seems to take the load without any problem, even on mobile platforms. The Intel HD contains up to 12 scalar execution units, where the nVidia GT contains 48. There are also tools that help check the load on the GPU. This is then a really good option to keep in mind.
If the frame rate is too low due to an excessive GPU load, try to render your canvas at a lower resolution and stretch it full screen using this hardware scaling method. There could be cases where the loss of frames is due to CPU-limited scenarios. To monitor that, you need profiler tools like the ones embedded in most recent browsers. You will then see which parts of your code you should try to work on to regain some FPS.
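The hardware-scaling trick just mentioned boils down to lowering the canvas's internal resolution while keeping its CSS size at the full display size, so the GPU stretches the smaller back buffer for free. A sketch of the size computation (the function name and renderScale parameter are assumptions):

```javascript
// Compute a reduced back-buffer size for a given display size.
// renderScale = 0.5 renders only a quarter of the pixels; the GPU then
// stretches the result back to full screen.
function scaledBackBufferSize(displayWidth, displayHeight, renderScale) {
  return {
    width: Math.round(displayWidth * renderScale),
    height: Math.round(displayHeight * renderScale)
  };
}

// In a browser you would then set canvas.width/height to these values and
// leave canvas.style.width/height at the full display size.
var size = scaledBackBufferSize(1920, 1080, 0.5);
```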
Using the F12 tools included in IE10 to profile the code shows the following result: So, you just need to deal with it! You can have a look at the set of tools available in the Windows Performance Toolkit and read the following methodology: Measuring Browser Performance with the Windows Performance Tools.
With David Catuhe, we had a precise idea why. We can see that it is the GC that is responsible for dropping some frames. You know what I really like about this whole story? This is really what you should keep in mind to build games that scale across all HTML5-compatible devices. I really hope that this article and our benchmark framework will help you in your future HTML5 games. We will soon work on similar articles focused on other topics important for HTML5 games. In this list, you should find your favorite language, or at least something close to it.
So why build a 3D software engine?
When I was young, I dreamed of being able to write such engines, but I had the feeling it was too complex for me. You simply need someone to help you understand the underlying principles in a simple way. Through this series, you will learn how to project 3D coordinates (X, Y, Z) associated with a point (a vertex) onto a 2D screen, how to draw lines between each point, how to fill triangles, how to handle lights, materials, and so on.
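The 3D-to-2D projection mentioned above can be sketched without any matrices. The convention here (camera looking down +z, origin moved from the screen centre to the top-left corner, a single focal-length factor) is an assumption for illustration and is not the engine's actual projection pipeline.

```javascript
// Minimal perspective projection: map a camera-space point (x, y, z)
// onto a screen of the given size.
function project(point, screenWidth, screenHeight, focal) {
  var px = (point.x * focal) / point.z;   // the perspective divide: farther
  var py = (point.y * focal) / point.z;   // points end up closer to the centre
  return {
    x: px + screenWidth / 2,              // shift origin to the top-left corner
    y: -py + screenHeight / 2             // flip y: screen y grows downward
  };
}
```

A point straight ahead of the camera lands exactly in the middle of the screen, which is an easy sanity check for any projection code.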
This first tutorial will simply show you how to display the 8 points associated with a cube and how to move them in a virtual 3D world. There are a lot of good resources on the web that explain these important principles better than I could. Read at least the first slides. Read those articles without focusing on the associated technology (like OpenGL or DirectX) or on the concept of triangles you may have seen in the figures.
We will see that later on. All this magic is done by accumulating transformations through matrix operations. You should really be at least a bit familiar with those concepts before running through these tutorials. You should read them first. You will probably come back to those articles later on while writing your own version of this 3D soft engine.
Simply view it as a black box doing the right operations for you. So you should also succeed in doing so. We will then use libraries that will do the job for us: SharpDX, a managed wrapper on top of DirectX, for C# developers, and babylon for the JavaScript side. So if you want to use the C# samples as-is, you need to install:
Add a reference to those files in both cases. To do our rendering job, we need what we call a back buffer. Every cell of the array is mapped to a pixel on the screen. For every frame rendered in the animation loop (tick), this buffer will be assigned to a WriteableBitmap acting as the source of a XAML image control that will be called the front buffer. The registration is done thanks to this line of code: The canvas element already has a back buffer data array associated with it. You can access it through the getImageData and putImageData functions.
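The cell-to-pixel mapping described above follows the usual 4-bytes-per-pixel RGBA layout used by canvas ImageData. A sketch of a putPixel over such a back buffer (the function name is an assumption; the index arithmetic is the standard layout):

```javascript
// Write one RGBA pixel into a flat back-buffer byte array of the given width.
function putPixel(buffer, width, x, y, r, g, b, a) {
  var index = (x + y * width) * 4;  // each pixel occupies 4 consecutive bytes
  buffer[index]     = r;
  buffer[index + 1] = g;
  buffer[index + 2] = b;
  buffer[index + 3] = a;            // alpha: 255 = fully opaque
}

// 4x4 test buffer, the same kind of array ImageData.data exposes.
var width = 4, height = 4;
var backBuffer = new Uint8ClampedArray(width * height * 4);
putPixel(backBuffer, width, 2, 1, 255, 0, 0, 255); // opaque red at (2, 1)
```

On the canvas side, you would obtain such an array from getImageData, fill it each tick, and push it back with putImageData to flip it to the screen.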