Codesmith Speaker Event: Understanding TensorFlowJS

Join us for a special talk about Data Science & Machine Learning. This DSML speaker series talk is led by Matthew Soulanille, who dives into TensorFlow.js.

TensorFlow.js is a deep learning tool that brings the power of neural networks to the browser, allowing you to build, train, and deploy state-of-the-art deep learning models in pure JavaScript.

Matthew Soulanille, who works on the TensorFlow.js development team at Google and is one of the core developers behind the project, will cover the essentials of TensorFlow.js: its main concepts, how to use it, and its most promising professional use cases.

Continued Learning Resources:


This video is a recording of a live event. To follow along live and participate in upcoming workshops and events, RSVP at

SUBSCRIBE for more! 👉 👈

Stay connected to our community!
Learn JavaScript for Free:
Free Events & Workshops:

#coding #datascience #machinelearning #softwareengineer

[Music] Thank you. I'm Matthew Soulanille, a software engineer on the TensorFlow.js team at Google, and I'm here to talk to you about TensorFlow.js.

So why should you use TensorFlow.js? Well, it has no installation requirements. It runs in the browser, which everyone has; there are no packages or deliverables that need to be installed. It's just JavaScript over the web, like everyone's used to. It's interactive, and it's very easy to access sensors like the webcam and microphone. It has very good privacy qualities: all data stays on the client. Nothing gets sent back to the server unless you specifically want to do that, but you don't have to. Latency is very good as well; since everything is running on the client, there's no ping time to go back to the server and then back to the client with the inference results. And it's JavaScript native, so everyone who's familiar with web development and JavaScript will find it very easy to get started with TFJS.

So what's TensorFlow.js for? Well, it allows you to create pretty much anything you can dream of in the machine learning space. So let's take a look at

what some people have been creating. These are examples from users who have been using TFJS in their own applications.

Here, InSpace uses real-time toxicity filters in their web conferencing app. You can see that when a user enters something bad, it's flagged before it's even sent to the server for processing, and it alerts the user that they might want to reconsider what they're about to send.

Next we have IncludeHealth, who have been using our pose estimation models to enable physiotherapy at scale. While many folks were stuck at home during COVID and unable to travel, this technology allowed for remote diagnosis from the comfort of their home using off-the-shelf technology: just a standard webcam that everyone has access to (well, almost everyone).

Or how about enhancing the capabilities of a typical fashion website? Here we have a body segmentation model with some custom logic to estimate body measurements and allow the website to automatically select the correct size t-shirt at checkout, and this was made in under two days using our pre-made body segmentation model.

You can also bring a character to life

by combining multiple models. In this demo, there's pose estimation for getting the arm poses and a face mesh for the facial expressions. Now, traditionally, creating animation is a hard problem, but this pose animation tool by our Partner Innovation team here at Google allows you to draw any SVG character you'd like and animate it with motion capture in real time.

You can take this even further with 3D characters, like this great demo by Richard Yee from our community. It allows you to use your favorite character as your virtual avatar for meetings or live streams, enabling real-time motion capture to drive virtual artists in the browser, and this of course just uses a regular webcam, all running in JavaScript.

Looking into other creative applications, you can turn your voice into a musical instrument using Google's Magenta Tone Transfer model, which runs in the browser as well through TFJS. You can sing any song or tune you wish and hear it being played by an instrument of your choice.

Or what if you could make the world more accessible? Here, the Partner Innovation team at Google

used TensorFlow.js to understand sign language gestures. This project is built using a combination of machine learning model outputs for face, hands, and body, to account for the fact that sign language is communicated not only with the signer's hands, but also with their facial expressions and overall body position and posture. Currently the demo has been trained on JSL and HKSL phrases, but it could scale to any sign language dialect. Even better, this library can be used for any sort of gesture recognition, even beyond sign language, and it's generalizable to any human-computer interaction use case, which opens up possibilities for web-based creative experiences, fitness, health, and even human-computer interaction research.

And everything you just saw was powered by pre-made models. You didn't have to do any machine learning modeling or training to get these effects, just the standard models that we have available.

So, pre-trained models. Let's take a look at some of the ones we have available. These are easy-to-use JavaScript classes for common use cases, and they're great for people new to machine learning, or even

people who are new to JavaScript and are looking to understand how to work with the web stack. Since launch we've released many pre-trained models, and we're currently expanding our selection, which we'll hear more about shortly. We have models across many categories, such as vision, body, text, and sound, and these just take a few lines of code to use. If you want to check them out, you can go to the TensorFlow.js models page to see all of them. Even better, you don't need a machine learning background to use any of these; just a working knowledge of JavaScript is enough.

So let's take a look at some of these in action. First up we have object recognition. Here we're able to run the popular COCO-SSD model live in the browser to provide bounding boxes for 90 common objects that the model has been trained on. This allows us to understand not only where in the image the object is located, but also how many instances of the object are in the image, which is more powerful than image recognition alone.

Next, we also host a 3D pose estimation model in collaboration with MediaPipe,

which is another research team at Google. This can estimate 33 keypoints on the human body in real time. The model comes in three forms: a lite, a full, and a heavy version, and these let you select a trade-off between accuracy and speed. The lite model, for example, can run at 150 frames per second on a modern desktop computer with a discrete graphics card; a bit slower on laptops, but still very performant.

Continuing our MediaPipe collaboration, we've also released a body segmentation model, and here you can now retrieve the segmentation mask in addition to the keypoints. This shows which pixels belong to the human body versus those that do not; you can see that in the red and blue animation up there. Like I said, this expands on the pose estimation model that you saw before, so you can get both of these results with minimal overhead.

This model is now part of our unified pose API, linked on the slide at that shortened Google link right there.
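As a rough sketch of what the unified pose API looks like in code (the package name @tensorflow-models/pose-detection and the option names below come from the library's public API, but they can differ between versions, so treat this as illustrative rather than definitive):

```javascript
import '@tensorflow/tfjs-backend-webgl';
import * as poseDetection from '@tensorflow-models/pose-detection';

// Create a BlazePose detector; modelType may be 'lite', 'full', or 'heavy'.
const detector = await poseDetection.createDetector(
  poseDetection.SupportedModels.BlazePose,
  { runtime: 'tfjs', modelType: 'lite' }
);

// videoEl is assumed to be a <video> element showing a webcam stream.
const poses = await detector.estimatePoses(videoEl);
// Each pose contains 33 keypoints with x, y (and z for 3D) plus a score.
```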

So if you need both pose and body segmentation returned at the same time, go check it out.

Now, related to this, there's also a version of this model that's optimized for selfie segmentation, and this allows you to create effects like the one I'm sure all of you have been using throughout the Zoom call, where you can replace your background with a different one or add a bokeh effect, things like that. This model is actually a modified version of the one we use internally at Google, in Google Meet, for conferencing. This version of the model is better suited to close-ups, as you can see here, where the user is less than about two meters from the camera, and it can run pretty quickly as well: about 200 FPS on a modern computer with a graphics card. You can check it out at the link shown.

More on human bodies: moving on to the face this time, we have updated our face landmark detection API, which builds on the existing face mesh model that you may have tried before. This new API has even higher

accuracy for face landmarks and movements, and it's also based on the latest MediaPipe model. As you can see in this demo, it can run at over 130 FPS on modern computers, and you can try it out at the link below. This frame rate, though, is probably from a computer with a rather beefy GPU, so you might see a slower frame rate on your laptop. There's also a very lightweight option that just returns bounding boxes and six keypoints for the face, if you have a less powerful device like a smartphone.

For a more in-depth look at what can be done with faces, we now have the selfie depth estimation model. This is capable of estimating the distance of each pixel in the scene from the camera that's taking the picture. (Sorry, let me find laser pointer mode.) You can see this depth map right here. First, the model will segment the image using one of the image segmentation models I discussed before, and then it will apply depth estimation to get the depth map of just the

person's face. You can do some pretty cool things with this, such as 3D photos. It might not be showing up that great on Zoom, but it sort of wobbles around the person's face and gives you an idea of what a 3D version of them would look like. You can also do things like image relighting. Here, the user is able to use three.js, which is a 3D graphics library, along with some custom shaders to cast light rays around the person in the scene, since we have the depth information from that depth model.

Next, let's turn our focus to hands. Some updates to our hands model now allow tracking multiple hands at the same time, in 3D or 2D, with higher precision than before. On a modern desktop computer you can get about 120 FPS with this kind of model. Like I mentioned before with the sign language interpretation, these kinds of models can be very useful alongside pose estimation, combining to form something cool that you might not necessarily be able to get with them on their own.
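The hands model follows the same detector pattern as the pose API. A hedged sketch, assuming the @tensorflow-models/hand-pose-detection package (option names may vary by version):

```javascript
import '@tensorflow/tfjs-backend-webgl';
import * as handPoseDetection from '@tensorflow-models/hand-pose-detection';

const detector = await handPoseDetection.createDetector(
  handPoseDetection.SupportedModels.MediaPipeHands,
  { runtime: 'tfjs', maxHands: 2 }  // track up to two hands at once
);

// videoEl is assumed to be a <video> element with the webcam stream.
const hands = await detector.estimateHands(videoEl);
// Each hand has 21 2D keypoints and, in 3D mode, matching keypoints3D.
```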

So that brings us to the question that I'm sure you're all asking: what do these APIs actually look like, and what are they like to use? As an example, let's shift away from video and talk about text processing. (I promise the video APIs are also similarly easy to use; it's just that video is a bit difficult in general, so there's kind of a difficulty floor there: they all have to be at least as hard as video is.)

For this example, we'll be using our BERT Q&A model. You can give it a large body of text and then a query, like "When was Nikola Tesla born?" for example, along with his Wikipedia article, and it will find the parts of that text that it thinks are relevant to your question. In this example, we're going to try to build a Chrome extension where you can just type in a question and it will find the relevant part of the article that corresponds to that question. The core part of it just takes a few lines of code; there's more setup required for the actual extension, but actually using the BERT model is

relatively simple. In fact, the code fits on a single slide, so let's walk through it.

First, we import the TensorFlow.js library and the pre-made model that we want to use, which is this BERT Q&A (question and answer) model. Then we just set up some text that we're going to query, and the question that we're going to ask. Normally you would be scraping this from the website, but for this example we're just going to make up some text. Afterwards, we load the question and answer model itself. This takes some time, since it's doing a network request, so it returns a promise, hence the .then: you give it a callback, which will be called with the model. Then you can call model.findAnswers. We pass this function the question we want to answer and then the search text we want to find the answer in. This is also asynchronous, so it returns a promise, and we do .then(answers); this callback will be called with the result of the model. This will take a few milliseconds, and in this case it would predict "cats",
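The slide being described looks roughly like this; this is a sketch based on the @tensorflow-models/qna package, with the passage text made up, as in the talk:

```javascript
import '@tensorflow/tfjs';
import * as qna from '@tensorflow-models/qna';

// Normally you'd scrape this from the page; here we just make up some text.
const passage = 'Cats are the most important thing on YouTube. ' +
                'They star in millions of videos.';
const question = 'What is important on YouTube?';

// load() performs a network request for the model, so it returns a promise.
qna.load().then(model => {
  // findAnswers is also asynchronous, hence the second .then.
  model.findAnswers(question, passage).then(answers => {
    console.log(answers); // ranked answer spans from the passage, with scores
  });
});
```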

because what is important on YouTube? Well, of course, cats. And that's pretty much all there is to it, on the model usage side. Of course, there's a bit more involvement in actually creating a browser extension, but the API here is relatively simple when it comes to finding an answer to a question from arbitrary text.

Our other models are similarly easy to use. For some of the vision ones, or the image-based ones, you can usually give them a document element, like an image tag or a video tag, and they can run on that and provide you with classifications or keypoints or things like that.

So what if there isn't a pre-made model that works for you? Well, we also support transfer learning, where you take an existing model and then retrain the last layer of it to do something slightly different. Our go-to example for this is Teachable Machine. If you haven't seen this website, definitely take a look after the presentation. You take a few different images for one class, like up here,

you can see (maybe you can see) that in this example he has his hand up, and then for class number two it's just his face. You can just train the model in the browser. This is based on, I think, MobileNet V3, and it just retrains the last layer of it, so you can build your own model right in the browser. This is great for prototyping ideas and getting a feel for how to use TensorFlow.js. It can export to TFLite and TFJS, and I think it might also be able to export to Keras, although I'm not totally sure about that. I think some versions, some of the models it has, can also export to TFLite Micro, if you're interested in running on microcontrollers.

Still, some cases can't be covered by all that. So outside of our pre-made models, you can of course write your own models from a completely blank canvas. We have an API website where you can

see our full API, but I'll give a quick overview of how this works. There are two somewhat different APIs that we have: there's a high-level Layers API, which is very similar to Keras if you've used that before, and there's also a low-level Ops API, which is more like a linear algebra library. These are built on top of each other. At the top we have our pre-made models; some of these are built on top of Layers, and some are built directly on top of the core ops. We also have a Graph API for when you convert a normal TensorFlow model; that will skip the layers and go directly to core ops.

These ops run on the client, and this is compatible with many different environments, like the browser, React Native, things like that. Each one of these environments can execute on a number of different backends. Now, by "backend" here I don't mean server-side backend; in this context it refers to the actual hardware that it's running on. I'll get into some more

details on that in a bit.

So what backends do we have available? We've got the CPU backend, and this is always available; it's the slowest form of execution. It just runs each machine learning op as JavaScript, so it's pretty much guaranteed to be available: if you're able to run TensorFlow.js, you're able to run the CPU backend.

Then you also have WebAssembly. This is CPU execution, but faster. It has support for single instruction, multiple data (SIMD), and in some cases you can also get multi-threading working. Smaller models can run extremely fast on the CPU, so this can be very good for those cases; sometimes it can take a long time to copy textures to and from the GPU, and in some cases this can even match GPU performance.

And then, of course, like I mentioned, you've got the GPU: the WebGL backend. This leverages the graphics card and supports 96.7% of devices. There are a few that don't work yet, but we're trying to close that gap. And that means you can run on more than just Nvidia

graphics cards: anything that supports WebGL, which is, I think, pretty much any modern smartphone or computer, should support our TFJS WebGL backend. This includes AMD cards and, of course, Intel integrated cards (although they have some discrete ones these days), and also Apple.

Looking towards the future, there are some really exciting new APIs and web standards. There's WebNN, which is focused on bringing native machine learning APIs to the browser. You could send a graph of machine learning operations, along with the input, to the WebNN API, and it could send you the result of executing that graph using native execution instead of WebAssembly or the CPU. Then there's also WebGPU, which is kind of a successor to WebGL in a sense. It provides much deeper access to the user's graphics card, and it's much better for the compute kinds of applications that are involved with machine learning.
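Whichever of the current backends you target, selecting one in code is short. A sketch, assuming the core @tensorflow/tfjs package plus the separate WASM backend package; as I understand the API, tf.setBackend resolves to false if a backend cannot initialize:

```javascript
import * as tf from '@tensorflow/tfjs';
import '@tensorflow/tfjs-backend-wasm';  // registers the 'wasm' backend

// Try the fastest backend first and fall back gracefully.
async function pickBackend() {
  for (const name of ['webgl', 'wasm', 'cpu']) {
    if (await tf.setBackend(name)) {  // false if the backend fails to init
      await tf.ready();
      return tf.getBackend();
    }
  }
}

pickBackend().then(name => console.log('Using backend:', name));
```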

WebGPU is actually, I think, almost ready for general availability. I don't know exactly when, but I think the Chrome team is planning on flipping the bit to enable it in Chrome sometime this year. I'm not completely sure, and that could of course change, but once that happens... we've been seeing some really good performance on this backend. I also want to give a shout-out to Intel for the really good work they've been doing here. They've contributed so many ops to this backend; we probably wouldn't be nearly as close to full coverage with it without their support.

So that's the web side. Let's take a look at the server side, because you can also use TFJS in Node. Note here that our Node.js environment supports the same TensorFlow CPU and GPU bindings as Python. These are actually just the native C bindings to TensorFlow, so all of the op implementations here can get the same or similar performance as what you'd get in Python, because it's using the same backend, essentially. Which means you can get CUDA and AVX

acceleration and everything you'd expect with Python, in terms of running inference in models. There are also some advantages you get from using Node.js instead of Python as a server, which is that Node tends to be a bit faster in some cases because it has a just-in-time compiler. If you're doing a lot of data pre-processing, you might find some performance benefits from Node.

(Sorry, I think I may have just said the entirety of this slide on the previous one.) So, some examples of that that we've seen. What's our performance like? As you can see here, executing this MobileNet V2 model, which is an image recognition model, on both Python and Node with CUDA GPU acceleration leads to about a millisecond of difference; that's very similar execution time. However, if you do a lot of pre-processing, like I said, this might be an advantage for Node.
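A minimal sketch of what those Node bindings look like in use, assuming the @tensorflow/tfjs-node package and a hypothetical SavedModel directory path:

```javascript
// Swap in @tensorflow/tfjs-node-gpu for CUDA acceleration.
const tf = require('@tensorflow/tfjs-node');

async function main() {
  // Load a TensorFlow SavedModel directly; no conversion to TFJS format.
  const model = await tf.node.loadSavedModel('./my_saved_model');
  const input = tf.randomNormal([1, 224, 224, 3]); // e.g. one 224x224 RGB image
  const output = model.predict(input);
  output.print();
}

main();
```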

For example, the company Hugging Face, who are well known for their natural language models, converted their DistilBERT model implementation to Node.js, and for them it resulted in a 2x performance boost over the Python equivalent. So if performance is top of mind, you may want to give Node a chance, especially if your goal is to expose a server-side web API for inference, which Node is well suited for and very easy to develop and deploy.

In summary, here are some of the benefits I've noted on the server side. First, you can use the TensorFlow SavedModel format without converting it to TensorFlow.js. Normally in TensorFlow.js, if you want to run a normal TensorFlow model, you have to run it through the TFJS converter first. Not having to do that, and also not having to ship the model to the user's device, lets you run much larger models. Next, you can code in just one language. If you were running TF in Python and trying to serve things in Node, this removes that requirement: you can do your whole server in Node.js. And finally, performance: the

JIT compiler is very good at getting high performance from pre-processing and post-processing code. Of course, there are ways to do that in Python too, if you keep everything in NumPy and use the C bindings, but it can be easy to do this in Node as well.

Finally, let's talk a bit more about the client side and what you can achieve there, just to reiterate everything I've been talking about so far. The first benefit here is really privacy. There's no information stored on the server; everything is on the client machine. This is particularly important for industries where it may be a requirement not to transfer data to third parties. Additionally, there are growing concerns about privacy these days, and here you get privacy for free with TFJS.

Like I mentioned at the beginning, there's lower latency. JS has direct access to device sensors such as the camera and microphone, and you don't have to do a round trip to the server. Latency can be around 100 milliseconds on a mobile connection over 3G or 4G, so that can really hinder model

performance, or just the quality of the experience, but with TFJS there's none of this. I mean, of course there are some other things you have to worry about, like battery life and things like that, but you don't have to worry about latency. There's also no data being sent to the server, so you don't have to pay extra networking costs, and you don't have to buy servers with GPUs.

And interactivity: the web is really designed for the display of content, and from day one it supported graphical content. That has evolved to handle richer and richer formats, like WebGL and things like that, and as such it has very mature libraries for graphics and data visualization, like three.js and d3.js. This allows you to code your ideas quickly, which is very important for developing new models and getting feedback on them.

Finally, reach and scale: anyone who can click on a link and load the webpage should be able to see how your model works, zero install required, and you can get more eyes on the cool projects and models you're working on.

So finally, I'd like to summarize some resources that you can use to go

further. For more hands-on, step-by-step instructions, check out our codelabs; you can simply search "tensorflow.js" on the codelabs site to find a whole bunch of useful step-by-step guides.

If there's one slide to take a screenshot of, this is probably the one. Here you can find our website for more details on the library, or check out some of our pre-trained models. You can also see our open source code on GitHub and even make a contribution if you want; I know some of you already have. You can check out the official TensorFlow Forum as well to ask more technical questions, and remember to tag your post as tfjs so we can find it. We'll also have some code examples that you can fork, and these can provide some boilerplate code to get you started. And of course, for inspiration, check out our short ten-minute interviews on YouTube via our Made with TensorFlow.js series.

For further reading, I recommend checking out these two books on the right. The first is by O'Reilly, and it's a really good, easy-to-follow

introduction to machine learning and TensorFlow.js. The second is by Manning and was written by folks on the TensorFlow team here at Google; it goes a little deeper, for those interested.

And come join our growing community! There are too many examples to show here, and there are more coming out every single week. You can check out the made-with-tfjs hashtag on social media like Twitter and LinkedIn to see what people are making around the world, and if you make something, be sure to tag it too; you might be featured in one of the events that Jason, our developer relations engineer, hosts.

So the only thing left to say now is: what will you make? My final example here is by a member of our community from Tokyo, Japan. He's a dancer by day, but he has managed to use TensorFlow.js to create an amazing hip-hop video with some pretty cool visual effects using body segmentation. Now, the reason I end with this is that machine learning really is for everyone now, no matter what your background is, and I hope TensorFlow.js can enable more people than ever before

to find it and start their journey with machine learning.

Feel free to connect with me and stay in touch. I'm Matthew_Soulanille on the TensorFlow Forum, and you can also check out the Discord server; I think I'm Matt S there, or something like that. You can also reach out to Jason Mayes, our developer relations engineer. He's done so many cool demos and some really cool YouTube videos and talks on TFJS; you can just search "Made with TensorFlow.js" on YouTube and you'll find a lot of the content he's made.

So again, thank you for listening, and if you want to further your knowledge, check out these links on the right. That's all from me, so I'll open it up to questions now. Thank you.

[Host] Wow, thank you very much, Matthew, that was awesome. Really appreciate everything you just shared there. What I'd like to do, as Matthew just said, is open it up to questions. If you don't mind raising your hand, we're going to do that in some kind of order, and we'll kick it off with

Forest. Please kick us off.

[Forest] Hi Matthew, thank you, very cool talk. It looks super fun to play with. I have so many questions, super interesting to discuss, but one of the questions I'd like to clear up: I've been thinking a lot lately about the comparison between Python and JavaScript, and you mentioned, when you were comparing TensorFlow speeds in JavaScript and Python, that it was JavaScript's just-in-time compiling that made the difference. What I was curious about was, I'm pretty sure that Python is also an interpreted language that does just-in-time compiling, so I guess, seeing the difference there, is it the V8 engine, and how it has those two steps, like the early interpreter and the following interpreter, that makes the difference, or something else?

[Matthew] Yeah, really great question. My experience with compilers is definitely not as deep as other people's at Google, but I can say that the V8 team is extremely well funded, in the sense that Google Chrome is one of Google's biggest products, and they just put a ton of engineering effort into

optimizing it as much as possible. On the Python side, I also want to mention that you can also use, I think, some other Python interpreters that do a bit better than the standard interpreter. I think there's Cython, which is based on C, or has some C bindings that will automatically activate or something; I don't know exactly how it works, but there are definitely some really impressive Python interpreters that get great performance.

Again, though, if you're doing pre-processing and you just want to quickly write it out in an easy way, Node can be very good at that. If you do it, I guess I don't really want to say "properly", but if you use NumPy and C bindings and things like that throughout your program, then yes, Python can definitely get the same or better performance, just because it's all running in those bindings. I don't want to be misleading at all there. So with that point, I really want to say: if you're just doing all the pre-processing in the language itself, then yeah, Node can in some cases be faster.

[Forest] Yes, thank you. [Host] Awesome, awesome. Adam, take it away.

[Adam] Yeah, so a lot of the examples you showed were of taking pre-trained models and running inference in the browser. As edge devices get better compute, and we get better APIs to their GPUs, like WebNN, do you anticipate that a lot more model training will be happening in the browser? Because you did mention all of the benefits you get from actually training models there and keeping your data there.

[Matthew] Great question. Yeah, that's definitely an open area of research these days, I think. There are a lot of difficult things about training models in the browser in a way that scales to the point where you can get something that's a useful model. Well, okay, I don't want to say that other models are not useful, but I guess what I mean is, if you just train something on Teachable Machine and then download it, that's essentially just your model, and no one else is really able to use it.

Now, Google has been working on, I don't know if you've all heard of, federated learning. I think it was in the news a while ago as the evil new thing that Google was going to use

to track you, but it also has real applications. Its whole thing is about training a model a few steps forward on one person's device, then saving that result in a way that's privacy-preserving, and then sending it to a new person's device to train it further. And I think (this is mostly me speaking from my own opinion; it's not really what Google's doing, because I don't really have any insight into these areas at Google, to be perfectly honest) the federated learning side of things, like you said, this approach might give you the ability to train much larger-scale models on users' devices in a privacy-preserving way. That would let us get some really cool models that we normally wouldn't be able to get, due to privacy concerns and not wanting to use users' data. I hope that answers your question.

[Adam] Yeah, thank you.

[Host] Awesome. Jared, go ahead.

[Jared] Thanks. Hey Matthew, great presentation, I really appreciate it. To get kind of an insight into

what your role looks like, I would love to know if there's a project or task that you recently completed, or one that you're about to complete, that you're especially excited about.

Oh yeah. So at Google I/O, I think last year, I created this demo, or this code lab, to show you how to run TF Lite in Node, and it uses Node bindings. That was actually kind of based off of our existing TF Lite on the web bindings, which run in WebAssembly. So that work has actually continued in a sense. One of the key features on the Node side of things was the ability to use TF Lite delegates. Now, a TF Lite delegate is essentially TF Lite's version of hardware acceleration, or in some cases software acceleration, where you can say: I have this TF Lite model, but I want to use the graphics card to run this part of it. If you have a delegate that can do that, you can cut the model graph up into pieces and send the pieces that are supported by the delegate to be run, accelerated, on the GPU. Now, that all works in Node, because TF Lite is written in C++ and C, and the Node bindings just bind to those. But that doesn't work in the browser yet.
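The graph-cutting idea Matthew describes can be sketched in plain JavaScript. This is an illustrative toy, not the real TF Lite delegate API (which lives in C/C++); the op names and the `partitionGraph` helper here are hypothetical:

```javascript
// Toy sketch of delegate-style graph partitioning: contiguous runs of
// ops the delegate supports are grouped so they can be dispatched to
// the accelerator; everything else falls back to the CPU.
function partitionGraph(ops, supportedOps) {
  const partitions = [];
  let current = null;
  for (const op of ops) {
    const target = supportedOps.has(op) ? "delegate" : "cpu";
    if (current === null || current.target !== target) {
      // Start a new partition whenever the execution target changes.
      current = { target, ops: [] };
      partitions.push(current);
    }
    current.ops.push(op);
  }
  return partitions;
}

// Hypothetical model graph: one op the delegate doesn't support splits
// the graph into three pieces (delegate, cpu, delegate).
const plan = partitionGraph(
  ["conv2d", "relu", "custom_postprocess", "conv2d", "softmax"],
  new Set(["conv2d", "relu", "softmax"])
);
```

A real delegate does this over a dataflow graph rather than a flat op list, but the shape of the problem, carving out accelerator-supported subgraphs, is the same.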

So the thing I'm really excited about is that I've been trying to get this working in the browser as well, and get delegates working there. I don't know if it's actually going to work, so if you don't ever see it, that's why. But that's one of the things I'm most excited about and one of the things I'd really like to get working in the future. This would let us do things like run TF Lite models on the web with GPU acceleration or WebNN acceleration, because WebNN is really good at running a graph of a model, and TF Lite delegates really want to give you a graph to run, so that's kind of a perfect fit. And with this project, I also want to mention that Intel has been doing a lot of the work on the actual delegate side of things, so definitely thanks to them for helping us push this project forward. That's pretty much what I've been working on.

That's really neat, thank you. Matthew, I would love to ask you kind of a basic question, going back a little bit to the work that Adam and David from Codesmith have done with you on the interoperability between the Python version of TensorFlow and the

JS version. What does that interoperability mean in a real-world sense? How does improving that interoperability show itself?

Yeah, great question. So one of our really big goals is to make the library as easy to use as possible, and the fewer steps people have to do in terms of pre-processing before their model and post-processing after it, the better. These new ops that have been implemented are really for those kinds of pre-processing and post-processing steps. If you can take a model in Python that uses Keras preprocessing and just convert it to TensorFlow.js, without having to manually rewrite those ops in JavaScript or something like that, and just call the native TensorFlow.js ops that you guys have written, that makes the process a lot easier. So thanks so much for the contributions you've made; it's always great to see contributions from people who are interested in the project.

Yeah, that's awesome. Actually, one of those contributors, Adam, you have a question?

Yeah, I actually had a question just about

TensorFlow Lite. What about TensorFlow Lite makes it smaller than tfjs? Is it just kept in that graph representation, or, yeah, what makes it smaller than just a normal tfjs model?

So, I'm not totally sure that that's the case. It might be; I'm actually just not sure. There are TensorFlow models, which are usually saved in a way that lets you retrain them or reload their weights and continue training. TensorFlow Lite models are usually saved for deployment only; there's a limited amount of training you can do in TF Lite, like transfer learning on the last layer of the model, but usually you're just saving a model for deployment. That is usually the cause of fewer weights between TensorFlow and TensorFlow Lite, as I understand it. I might be mistaken there as well; I'm not actually all that experienced with training models, I mostly work on the JavaScript side of things, so if I'm mistaken there, sorry. In terms of TensorFlow Lite versus TensorFlow.js, if you're seeing model size differences there, it may be due to how TensorFlow Lite stores its model. They, I think, store it as a FlatBuffer, and they also support lossy

quantization, so you can quantize your model down to, say, uint8, and that can save a ton of space versus float32 or something like that. TF Lite models are also kind of just a single FlatBuffer, and that does make it a little bit difficult to load larger ones on the web: if you fail to load the entire model during the page load, you won't get the caching that you might get with tfjs, where, if you've loaded some of the four-megabyte chunks and then the page fails to load, you can reload it and the chunks you've loaded so far should still be cached. But yeah, those are just some details that I know about the model format.

Cool, cool, thanks.

Awesome, awesome. Thank you again for being with us tonight. This was fantastic, having you here and sharing everything you did; we all appreciate it tremendously. Thank you to Codesmith, obviously, for hosting, and to everyone putting this together. Laura, obviously, thank you so much. But honestly, Matthew, we look forward to working with you again in the future. We're so happy you're a part of

the whole Codesmith world. Yeah, thank you for being here. Have a great night.
