Coding 101 53 (Transcript)
Father Robert Ballecer: On this episode of Coding 101: TypeScript. Is it for people who use JavaScript? Also, Steve Gibson will stop by to give us some more knowledge on CISC vs. RISC and the eternal computing battle.
Netcasts you love, from people you trust. This is TWIT.
Bandwidth for Coding 101 is provided by CacheFly, at cachefly.com. This episode of Coding 101 is brought to you by Lynda.com.
Invest in yourself for 2015. Lynda.com has thousands of courses to help you
learn new tech, business, and creative skills. For a free 10-day trial visit Lynda.com/c101. That's Lynda.com/c101.
Fr. Robert: Welcome to Coding 101. It’s the TWIT
show where we let you into the wonderful world of the code monkey. I’m Father
Robert Ballecer, the digital Jesuit and joining me today is our super special
guest host, Mr. Lou Maresca from Microsoft. He’s a senior development lead. Lou, thank you very much for coming back.
Lou Maresca: Thanks for
having me again padre.
Fr. Robert: Now Lou, we have a nice arrangement here where you come in and basically break down some of the big developments in computer programming. And right now we're talking a little something about TypeScript. What is TypeScript?
Lou: If you're a JavaScript programmer today, or even if you're looking to become one, sometimes it's a little daunting. One thing that's hard to overcome with JavaScript is that... well, you can build large applications with it, but it turns out to be pretty difficult. Padre, have you built large applications in JavaScript before?
Fr. Robert: I mean, forgive me if I'm wrong here, but I've always used JavaScript as more of a novelty. It does fun stuff on the client side, but I would never, ever consider using it for something that was large scale.
Lou: Yeah, I think one of the big challenges with JavaScript is really maintaining it, and the problem is not necessarily with the language itself, it's just how diligent some of the developers were when they built it. A lot of times, large applications today are built with what they call strongly typed languages. What that means is that the language lets you define the types, and then the compiler will check them to make sure you're right. For instance, you can't just add strings together... well, you can add strings together, but if you're intending to actually get a number out of it, the compiler will say no, you're passing a string where a number belongs, and that's wrong. So it kind of forces developers to be more diligent in what they're doing.
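The kind of compile-time check Lou describes can be sketched in a few lines of TypeScript (the function and names here are illustrative, not from the show):

```typescript
// A minimal sketch of compile-time type checking: the annotations tell the
// compiler what types are intended, and it rejects mismatches before the
// code ever runs.
function double(n: number): number {
  return n * 2;
}

const result = double(21);  // fine: 21 is a number
// double("21");            // compile error: string is not assignable to number
console.log(result);        // 42
```

The commented-out line is exactly the mistake Lou mentions: passing a string where a number belongs fails at compile time rather than misbehaving at runtime.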
Fr. Robert: Hold on, I want to clear something up right now, because there is an entire generation of programmers, myself included, for whom this conversation is completely unnecessary, because I don't want to use JavaScript. I've always been told JavaScript is insecure, it's horrible, it'll lead to exploits, it can't be used in a serious application. Why go through all the time and effort to create something that is easier to use for large-scale applications?
Lou: So I think that's where the merger of these strongly typed languages and JavaScript is coming in. The idea is that you build in a strongly typed language and then convert that into JavaScript, so you're really just maintaining a language that's actually capable of being maintained, one that can run through tools and IDEs and compilers and syntax checkers and so on. That can make it easier to maintain in the long term. For instance, Google has Google Web Toolkit, which allows you to program in Java, a strongly typed language, and compiles down to JavaScript. But sometimes that requires a whole new IDE, a new development environment, and sometimes new ways of thinking. But
then here's where TypeScript jumps in. TypeScript is what they call a superset of JavaScript, meaning that any valid JavaScript code is also TypeScript. What TypeScript does is improve the IDE experience, it does static analysis of types, and it makes the developer's intent clear, because you can specify optional types and inferred types, and you can build out classes. What this allows you to do is surface the kinds of bugs and issues that otherwise can't be found until you run JavaScript code in a real environment, a runtime environment. So that's where it comes in. And TypeScript integrates into IDEs like WebStorm and Eclipse and Sublime Text and Visual Studio and Emacs, all the ones developers out there like. What it really adds is support for classes and optional typing, which is verifying your type safety, integers and strings and so on, all at compile time. Basically what we used to call, in Coding 101, sanitizing your code. And it supports inheritance and generics, which are types to be specified later, so to speak. It's easily convertible between TypeScript and JavaScript, so JavaScript developers usually pick it up pretty easily. But again, it makes it a lot more maintainable from a JavaScript perspective, from a large web application perspective.
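The class support and optional typing Lou lists can be sketched briefly; the class and method names below are invented for illustration:

```typescript
// A small sketch of TypeScript classes: inheritance via `extends`, plus an
// optional, typed parameter (the "?" below), all checked at compile time.
class Animal {
  constructor(public name: string) {}
  greet(greeting?: string): string {  // optional parameter: callers may omit it
    return (greeting ?? "Hello") + ", " + this.name;
  }
}

class Dog extends Animal {
  constructor() {
    super("Rex");  // inherits name and greet() from Animal
  }
}

const d = new Dog();
console.log(d.greet());      // "Hello, Rex"
console.log(d.greet("Hi"));  // "Hi, Rex"
```

All of this compiles down to plain JavaScript, which is the superset relationship Lou describes.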
Fr. Robert: Lou, I understand that TypeScript can actually take in existing JavaScript code. But my question is, does the programmer have to do anything to make sure that his previous JavaScript code will work with the more strongly typed TypeScript compiler?
Lou: Potentially, you could just take a blob of your existing code, shove it into TypeScript, and it will work. But TypeScript has recommendations you can go through in the documentation, as well as what they call an online TypeScript playground, and what it'll do is show you ways to break apart and modularize your code. Modules are basically like namespaces, a way to group your classes and your types. It gives you ways to break up your code and make it a little more maintainable by large teams, like if you want one team to work on one part and another team to work on another part. So yeah, you could just copy your existing JavaScript directly in there, and boom, away you go. But the advantage of TypeScript is not that. The advantage is starting to slowly move your code into modules and break it down so that it can be easily maintained by larger groups.
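The namespace-style grouping Lou describes might look like this; the names are invented, and in current TypeScript this era's "internal modules" are written with the `namespace` keyword:

```typescript
// Each team owns its own namespace; only `export`ed members are visible
// outside, and consumers reach in with a qualified name.
namespace Billing {
  export class Invoice {
    constructor(public total: number) {}
  }
}

namespace Shipping {
  export class Label {
    constructor(public address: string) {}
  }
}

const invoice = new Billing.Invoice(42);
const label = new Shipping.Label("123 Main St");
console.log(invoice.total, label.address);
```

This is the grouping that lets one team work on `Billing` while another works on `Shipping` without stepping on each other's names.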
Fr. Robert: I know that TypeScript was created by the same person who is largely responsible for things like C#. So of course there are going to be elements from that language in TypeScript. But could you break it down for the real computer geeks out there? What are the features, functions, and options that I have with TypeScript that I didn't have with JavaScript?
Lou: There are some things that are just really tough to do in JavaScript. In TypeScript there might be one line you have to write to create, say, a module or a namespace, whereas it might be a lot harder to do that in JavaScript itself. It supports classes and object-oriented concepts like inheritance, and optional typing. It also supports generics, which aren't really supported in JavaScript: types that you specify later, basically instantiated when you need a specific type to be provided as a parameter. There are modules, like I said, and there's straight inline JavaScript. So there are very specific things that might be possible in JavaScript, but they're much easier to do in TypeScript. Another thing is support for the new ECMAScript-style classes, which are basically Java-like classes that compile down to normal JavaScript prototype chains. That's important because it means you can just start building a whole bunch of Java-like objects and classes and all this stuff, and it'll compile down to JavaScript later, and that makes you really feel like you're building a large application in a strongly typed language like C# or Java.
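The generics Lou mentions, "types you specify later," can be sketched in a few lines (the function name here is an invented example):

```typescript
// The element type T is left open until the function is used, at which
// point the compiler either infers it from the arguments or takes it
// explicitly. JavaScript has no equivalent compile-time mechanism.
function firstOrDefault<T>(items: T[], fallback: T): T {
  return items.length > 0 ? items[0] : fallback;
}

const n = firstOrDefault([10, 20, 30], 0);     // T inferred as number
const s = firstOrDefault<string>([], "none");  // T supplied explicitly
console.log(n, s);  // 10 "none"
```

Either way, the compiler now rejects calls like `firstOrDefault<number>([], "oops")` before the code runs.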
Fr. Robert: Lou, we've got a question, more of a statement, in the chatroom. They say, well look, 10 million Minecraft users can't be wrong, right? Minecraft runs with JavaScript. My question is this: now that Minecraft is owned by Microsoft, will they migrate the code over to TypeScript, and is there a reason for them to do that?
Lou: So whether there's a reason for them to do it depends. I don't even know how big the Minecraft development team was when they were building it. Microsoft usually has pretty large development orgs, say 5,000 developers, and at that scale it can be really hard to maintain a JavaScript-based application. There's a project called Mozilla Shumway, which is 170,000 lines of TypeScript. It's basically an HTML5 project that allows you to convert your Flash SWFs into HTML5 versions, and they built it in TypeScript because they didn't want to have to go and build all that JavaScript to do it. So I'm going to guess that if it's maintained by a large organization and they find an advantage to integrating it into their build system, they probably will start to slowly convert. A lot of products are doing that. Even CRM is slowly converting a lot of its components to TypeScript definitions, because again, we have a large development org and we need to be able to maintain all that code. So there is a large advantage to doing it. And there are a lot of examples online that say, hey, I used to build in JavaScript but now I've moved to TypeScript, or to Facebook's Flow, or Google's AtScript, which are very similar projects, and they're all trying to do the same thing: make it easier for people to develop large applications.
Fr. Robert: Okay, well, maybe some of our folks are sold. If they want to start messing around with TypeScript, where should they go? Where are all the resources they can find for TypeScript?
Lou: I think the best place to go is typescriptlang.org. They have what they call a playground out there, and you can basically start playing with it right in front of you. You type your TypeScript code in on the left and it converts to JavaScript. Plus there's a little dropdown box at the top, you can see it on the screen, where you can see all the different things TypeScript has that JavaScript makes really difficult. So you can compare the code on the left vs. the code on the right. And again, you can code right in here, see how it will run and convert over, and it basically shows you how easy it is. There are also free IDEs you can download, like Visual Studio Express for Web, that integrate with TypeScript, and you can immediately start writing TypeScript code and watch it convert to JavaScript right in your web application. So I encourage you to try that out. It's really, really helpful, and a lot of large projects are starting to move to it.
Fr. Robert: Why not? You might as well give it a go, especially if it's going to be so easy for you to use. Lou, unfortunately, when we come back we're going to have the last episode with Steve Gibson for a while. It's so much fun to have him on the show, but he's got some stuff to do. He's got to finish up SQRL and then he's got to work on the next version of SpinRite, so he's a busy man. He promises us that he's going to come back, and we're going to do a proper assembly module from start to finish, to show the folks at home how to put together their own assembly programs. But before that, do you mind if we take a break to talk about a sponsor for this episode? Let's do that. And it's Lynda. We're talking about knowledge, and Lynda is all about knowledge, both new knowledge and knowledge that you just need a refresher course on. Lynda.com is an easy and affordable way to help you learn. You can instantly
stream thousands of courses created by experts on software, web development,
graphic design, and more. Lynda.com works directly with industry experts and
software companies to provide timely training, often the same day new releases hit the street, so you'll find new courses on Lynda and you're always up to speed. All courses are produced at the highest
quality. Which means it’s not going to be like a YouTube video with shaky video
or bad lighting or bad audio. They take all that away because they don’t want
you to focus on the production, they want you to focus
on the knowledge. They include tools like searchable transcripts, playlists and
certificates of course completion, which you can publish to your LinkedIn
profile. Which is great if you’re a professional in the field
and you want your future employers to know what you’re doing. Whether
you’re a beginner or advanced, Lynda has courses for all experience levels,
which means they’re going to be able to give you that reference that place to
go back to when you get stumped by one of our assignments. You can learn while
you’re on the go with the Lynda.com apps for iOS and Android and they’ve got
classes for all experience levels. One low monthly price of $25 gives you
unlimited access to over 100,000 video tutorials, plus premium plan members can
download project files and practice along with the instructor. If you’ve got an
annual plan, you can download the courses to watch offline. Making
it the ultimate source of information. Whether you’re completely new to
coding or you want to learn a new programming language, or just sharpen your
development skills, Lynda.com is the perfect place to go. They’ve got you
covered. They’ve got new programming courses right now including the Programming
the Internet of Things with iOS, Building a Note taking app for iOS 8, and
Building Android and iOS Apps with Dreamweaver CC and PhoneGap. For any
software you rely on, Lynda.com can help you stay current with all software
updates and learn the ins and outs to be more efficient and productive. Right
now we’ve got a special offer for all of you to access the courses free for 10
days. Visit Lynda.com/c101 to try Lynda.com free for 10 days. That’s
Lynda.com/c101. Lynda.com/c101. And we thank Lynda for
their support of Coding 101. Here’s my favorite part of the show where we get
to bring in our code warrior who just happens to be Mister Steve Gibson. Steve, thank you very much for coming back onto Coding 101.
Steve Gibson: Great to be here, padre.
Fr. Robert: Now the last few episodes we’ve focused
on your philosophy of programming. And remember, it’s all about foundations. We
want foundational knowledge so that we can strip away all that high level stuff
and still understand what’s going on beneath. That’s really your philosophy of
learning right?
Steve: Right. I think for whatever reason I
really believe that you get the best quality if you code at the lowest level
that makes sense for you. But also, even if you’re coding at a higher level,
understanding what’s going on down below really makes you a better programmer.
Fr. Robert: Right. You know Steve, I love our
audience, and they’re active in our Google+ group, in the chatroom right now.
All of these people are here every single week and it’s interesting to see how
passionate they get about a language they like. We had a couple people who were
saying "look, it used to be true in the old days that assembly could be faster, but now, with today's compilers and IDEs, it's just not true anymore."
And I think it comes back to what you said last week which was, you find the
language that you can program in and hopefully you’ll be able to figure out
what goes on underneath the language.
Steve: Yeah, and again, as I made clear last week, I'm not here trying to wage a religious war against higher-level languages. I think they're fine for people who like them. But an hour ago I was writing an algorithm to quickly scan a buffer for unsafe HTML characters and convert them into the expanded safe versions. After counting the number of unsafe characters, because each expansion was going to be four characters larger, I needed to multiply that count by four and add it to the original buffer size in order to create a new buffer that would be large enough to hold all of the original characters plus the translations of the unsafe ones into safe ones. I did that in four instructions. The problem is that no compiler understands what it's coding. It's translating what the programmer has given it into equivalent instructions. That's the difference. I'm essentially the compiler, compiling assembly language.
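Steve does this sizing in a handful of assembly instructions; here is the same arithmetic sketched in TypeScript (the character set and function name are illustrative, not his actual code). Each unsafe character, he says, expands into a version four characters larger, so the output buffer needs the original length plus four extra per unsafe character:

```typescript
// Count the unsafe characters in the buffer, then size the output as
// original length + 4 extra characters per unsafe character found.
function escapedBufferSize(buffer: string, unsafe: Set<string>): number {
  let count = 0;
  for (const ch of buffer) {
    if (unsafe.has(ch)) count++;
  }
  return buffer.length + count * 4;
}

const size = escapedBufferSize("a<b>&c", new Set(["<", ">", "&"]));
console.log(size);  // 6 original characters + 3 unsafe * 4 = 18
```

Pre-sizing like this means the output buffer is allocated exactly once, which is the whole point of doing the count up front.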
Fr. Robert: So Steve, it’s that granular control
right? It’s that ability to really dive in and know exactly what instructions you’re
asking the environment to do, rather than counting on a high level compiler to
figure it out for you.
Steve: Right. Essentially, when I'm writing assembly language, I am the compiler. And I understand the problem I'm trying to solve, so I can express it in the tools available, meaning the instructions the hardware has. The liability, the problem, any compiler has is that it isn't sentient, at least not today. It doesn't actually understand the problem I'm trying to solve. It's just translating the programmer's translation of their problem into the high-level language, into a lower-level language. So essentially I'm cutting out the middleman. And again, I understand there's absolutely a place for higher-level languages. But it is absolutely the case that I can program circles around any compiler at the assembly language level, by orders of magnitude in performance.
Fr. Robert: I want to bring Lou back in here. Lou, Steve showed us his code yesterday, and I realized that in Microsoft's assembler I can actually use high-level constructs along with my assembly code. By the way, that completely blew me away; I did not know it had advanced that much. I was actually able to do Hello World, which took me forever when I was doing it in my undergraduate days, and which compares very favorably to TASM, Borland's assembler that I learned on. Do you still use assembly from time to time? I mean, it's included in your suite.
Lou: Yeah, somebody pointed out that one of the powers of assembly, even in MASM today, is really the macros that you can do. Like you were saying, it makes it real easy to inline macros into the code, and then it converts down easily; there's no overhead to it, like Steve was pointing out last week. So coding that way makes it a lot more efficient. And yeah, some of the algorithms we actually do in assembly, inline with the C++ code, because that's a lot more efficient than going through some of the libraries, APIs, and SDKs that do those types of things. That way we just guarantee that we have the raw performance that we need.
Fr. Robert: Two people who program for a living, who are both telling you that, yeah, we still use assembly. Now Steve, let's back away from this, because we're going to save a lot of that discussion for when we actually do the assembly module. Of course, that's after you get done doing the million things you have to finish for SQRL and the next version of SpinRite. But one of the questions we got a lot after the last episode, and actually in the chatroom quite a bit, is that they were hoping you would bring your knowledge to the debate between RISC and CISC architecture. I remember when I was an undergrad, this was huge, because of course there was a huge gap between RISC and CISC. CISC was the PC, it was the x86, and RISC was the Mac. We know that those have come together, but I think it's still valid to talk about the differences between those two types of architecture. First of all, what was CISC vs. RISC?
Steve: Well, okay, to put this in a contemporary context, I think what's interesting is that everybody has ARM-based, ARM-designed processors in their mobile devices, which are crucially power-sensitive applications. The exception is laptops, which until recently had a hard time keeping themselves alive for more than a few hours, and the batteries in a laptop are vastly bigger than in a phone, for example. So even today the ARM architecture, which is a RISC architecture, clearly has some advantages over the older CISC-style Intel architecture. The way to think about this is to understand the history of the way things were in the beginning. As I have mentioned a couple of times, once upon a time, memory
was excruciatingly expensive. If you were on a desert island and had to create
memory, like literally had no resources, how would you do it? And so for
example, the early pioneers back in the 40s and 50s, even before vacuum tubes
and transistors, they used relays. And you could wire up a relay so that when
the coil was energized, the armature would get pulled down, and it would close
a contact. That contact could keep the armature energized. So that means once
the relay closed, it stayed closed, kind of by itself. And then if you briefly
interrupted the current, it would let go and then even if then you closed that
interruption, it would stay off because it had already let go and the contact
had opened. So there's an example of a really kludgy one-bit
memory. But once upon a time, that’s all we had. And I tell you, when
you build a computer with those and fill a room full of it, you need to wear
earplugs when this thing is running, because all of these relays are clackity
clacking around in order to do the work. And that’s where we began. Then we
moved to tubes where you had a small glass bottle, essentially, with a low
voltage heater and it was heating up a cathode in the tube in order to boil
electrons off of the cathode and you had a high voltage plate which was
attracting these electrons and in-between some mesh grids. And that’s what a
tube was. In Britain they called them valves. From a tube, you could create an inverter, where if the input was high, in a little circuit, the output
was low. If you connect two inverters back to back, sort of in a circle, then
if the input of the first one is high, the output of it will be low, and if
that goes into the second one, its input is low, so its output is high, and if
you hook that back around to the first one, you’ve got these two little
inverters and they’re stable. That is, it’ll stay that way. Now if you were to
briefly yank the input of the first one high, then its output would go low and
the second one’s output would go high and that would come back around and
remember that. And that's called a bistable multivibrator, because
it’s stable in two different states. So now we have two tubes which can
remember one bit. And they're quiet, but they're burning out all the time. And because you have to boil electrons off of the cathode, they're like little heaters; in fact, the low-voltage winding in there is called a heater, and they have to warm up in order to function. And so we've replaced this
incredibly loud clackity process with something that is at least quiet and it’s
much faster. Because now we’re just switching electrons rather than moving
metal plates up and down. But we’re producing a huge amount of heat. And we
have the problem that these tubes burn out. And when you’ve got 10,000 of them,
every time you turn the computer on, some are just going to burn out. So then
you go around trying to find the ones that burn out and it takes a while to get
the computer going. We moved from there to transistors, thank goodness, and did the same thing: two transistors can be two inverters
and remember one bit of data. The problem with all of that is that these are
volatile, meaning if you turn the power off, you lose the state of that bistable multivibrator; it forgets. So early pioneers had to come up with a
way of creating a memory that wouldn’t get lost when you turned the power off,
or when someone tripped over the cord. Again, if you think, now what do we
have, what possibly do we have that won’t forget something? And these early
guys were very clever they figured out that they could use magnetism in order
to statically remember something. So what they did was use little doughnuts of ferrite material to create what has now become almost a thing of lore, because no one has it anymore: so-called core memory. These
little doughnuts were individual cores. And they could be magnetized in one
direction, clockwise, or in the other. And when you turned the power off, they
would stay that way. Which was just a blessing back then.
Fr. Robert: Hold on one second. I have to throw this out. Vacuum tubes were really my first exposure to something that would become
my fascination with integrated circuits. And I remember the vacuum tube testers
at Radio Shack. And I mention that because they’re now going away. But that’s
where I would take my tubes to see if they were still good for me to experiment
with. But also, I’m getting this from the chatroom, most of the chatroom,
hearing you talk about this, you’re doing some serious
face melting right now. And we love it.
Steve: So now, what I want to impress upon you is the expense of a bit of memory back in those early days. First it was impossible; then it was possible with earplugs; then it was possible with good air conditioning and patience. And finally we came up with a way of using magnetism to give us nonvolatile memory. But in order to
create a useable amount of that, we would take these little tiny cores and
thread them with wires in order to magnetize and demagnetize them and sense
when the polarity of magnetization reversed. So that was a plane of memory
core. But even then, 12 planes of 4,000 cores, for example, was a huge amount of work to create. And once you got it done, you had 4K words of memory. I mean, nothing by current standards.
But that’s the kind of memory they had to work with. So the point of all of
this is if memory is incredibly expensive, you have to arrange-
Fr. Robert: How much memory do you think is here
Steve, like one of these arrays?
Steve: Maybe 32K bits.
And the other thing that’s tricky about that is in order to know what’s in the
core, you have to destroy it. You have to write zeros to a specific core
through a set of planes. Only the cores that switch from a 1 to a 0 will generate an impulse on their sense wire, so that tells you which ones were 1s. But in the process of
determining that, notice that we just had to kill it. We had to write zeros to it. So that's called destructive readout, because it destroys the data in order
to get it. So that meant that every time you read something you had to write it
back in order to have it still be there again. But
there was actually something rather clever that the designers took advantage
of, the so-called read-modify-write. Because if you knew
that you might be changing the data that you just read, for example you were
adding a value to something in memory, you could read the contents, which would
destroy it, modify the contents, with the result of an addition, and then write
the new value back. So these guys were like really taking advantage of every
benefit they had. But the point of this is that you have essentially no memory, because the memory is too bulky: a single bit is a little magnetic core, and we just saw pictures of these big modules that may have 32K bits in them. What that means is that if you're going to use this memory to contain the instructions that drive a CPU, you need the value you're storing, the amount of information per instruction, to be as great as possible. In other words, you need complex instructions. You need to be able to have a small set of bits specify something significant for the CPU to
do. And if you’re able to do that, then you’re able to economize on the number
of instructions that you need. So that's one part of it. The other part is that back in those days, human programmers were not yet dinosaurs; they were newly created creatures. I'm a dinosaur, because I'm still programming in that language, but back then programmers were actually programming in that language. So they wanted the most complex instructions too. They wanted, with a few instructions, to be able to get a lot of work done. So a balance was struck: a little bit of memory and a lot of complexity in the processor. That was good, because memory was incredibly expensive and hard to come by, and so were programmers. You needed programmers who didn't have to write too much code in order to get a lot done. So that's complex instruction sets. I won't
go into it in any further detail; we can certainly do
that easily any time in the future. What happened over time is that that world
changed. Memory increasingly dropped in price. To the point where now it still
makes my head spin when I look at the gigabits you can get on a little strip
for $10. It's unbelievable compared to the memory back then. It's volatile memory, because the bits are actually stored in capacitors that tend to leak over time, which is why you have to refresh the contents continually. You have to go back and read it before the contents have had a chance to leak away to the point where you can't tell if you
used to have a 1 or a 0 in there. But that technology is extremely dense and
thus extremely inexpensive to manufacture. And we have nonvolatile memory in
SSDs or in hard disk drives so we sort of load the volatile memory on the fly
when we boot our computers up. So the way they operate has changed over time, but mostly the expense of memory has just dropped. And that's allowed us to create an intermediate layer, the compiler, that insulates the
programmer who wants to think in sort of more abstract, larger, broad brush
terms. That’s insulated the programmer from what the computer is doing
underneath. Programmers don’t need to know that. And so those two things
brought about an evolutionary change in the way computers were designed. One of
the things that happened when compilers started to be created for complex
instruction set computers is that when they examined the instructions that were
being used, the architects of the computers discovered that the compilers
weren’t using a lot of them. They had built in fancy instructions that did all
kinds of cool stuff. For example, the height of that was Digital Equipment Corporation's VAX architecture. There were instructions in there for doing things like managing linked lists in the machine language, which is something you normally do at the high level. The actual chip would manage linked lists and all kinds of advanced data structures, because that's the direction they went in. But if a programmer didn't have an exact
match for the way they wanted to link items together, then the instruction that
was sitting there begging to be used for that wouldn’t get used. It couldn’t be
used. But more importantly, the compiler turned out not to really be able to make
use of these complex instructions. What they found was that it was the simple instructions
that the compiler was using a lot more of rather than the complex instructions
that were there. But notice also, that those complex instructions cost money.
It costs money in terms of silicon area in order to create them and it was
money that was being wasted. Because they were not being
used. So all of these various pressures got people to
rethink the proper architecture moving forward. And it’s from that
rethinking that the concept of a reduced instruction set came about. The idea being that programmers who actually had to code in it would just shoot themselves, because it was painful to hand-code RISC instructions. But the point was, humans weren't supposed to. That was now
the proper domain of the compiler. Because what a reduced instruction set meant
was that you could have a vastly reduced silicon size. The chip itself could be
much smaller because the instructions it had could be much simpler. You didn’t
need all of the area, the land mass, required to support all of these complex
instructions. And notice the other expense of complex instructions
that are not being used: they still have all their transistors there
burning up power, and wasting it, because they're not being
used. With a reduced instruction set, you're heavily using the few
instructions you have, so they are far more power efficient. You're not wasting
power on stuff that you're not using. So that's essentially the tradeoff. And
it's the reason why the Intel architecture, which is classic CISC architecture,
which I'm still programming in assembly language, it's the reason it's had a
very difficult time surviving in the mobile world. Intel can't take
instructions out, because that would break code. So they're sort of jammed up. They're
stuck. Whereas the ARM guys were able to start from scratch and create a very elegant,
simple chip architecture that just glided right into the mobile world without
batting an eye. And they are far more power efficient than the equivalent Intel
chip.
Fr. Robert: Steve, I want to bring Lou in here
because there is a very important aspect of this CISC vs. RISC architecture
conversation, this philosophy, which you brought in at the end there:
Intel's legacy is x86. It's CISC. Which means it did fantastic, it dominated the desktop/laptop space. But as we've moved into this mobile
world, they just can't do it. They can't bring it over. Their Atom is kind of
there, but not nearly as power efficient or as performance efficient as an ARM
mobile processor. Lou, let me ask you about that, because
Microsoft has been trying to make strides in making server software that will
run on arm boxes for that very reason. You can put a lot of those in a very
small space. Not generate that much heat and not waste that much power. For a
programmer, right now, and some people in the chatroom are saying “wait a
minute, I don’t need any of this”, this is still an incredibly important
distinction to know, yes?
Lou: I think it’s important to know because
it all depends on what you're trying to apply your code to. So if you want to
apply your code to Internet of Things type things, you know, is it going to
be plugged in or running off a battery, is it going to be sitting there for a while?
Or are you building a large-scale application that, like you said, needs to run
on a server, where CPU power and
memory aren't really the constraint? Scalability, that
kind of thing. So it's really important to distinguish between the two,
but you're right. Nowadays there are a lot of wasted resources, especially
CPU resources, and that's kind of where some of these cloud services are coming
in, where they say we're going to share the resources of these large massive
machines and drive utilization up to 80-90%, so that we're using almost all
of the CPU and memory across many different users and the services other people are
hosting on it. That makes it more efficient and more scalable, even from a monetary
standpoint. But again, they're still not as power efficient as if you were to
run a wholesale, parallelized array of ARM processors. But ARM processors
are targeted at very specific things, so it'd be very difficult to move a
server to the ARM side of things. I'm surprised they're taking strides to
do that, but I've heard that they are starting to.
Fr. Robert: Steve Gibson, thank you so very much
for being on this episode of Coding 101. When you come back, will you be able
to do a full assembly module with us? From start to finish, show these young
whippersnappers how it’s done?
Steve: I think we should. It would be fun. I
found that project over on Google that I talked about last week. The Pep/8 project. I like it because it’s a synthetic
computer but it is cross platform. It’s free and it’s available for Windows,
Mac and Linux. And I think that’s crucial because then everyone is able to look
at the examples and play with the stuff that we create. And yeah, we can just
go through and start with the basics of bringing things from memory and adding
them and putting them back in memory. And messing with loops,
playing with Fibonacci numbers, finding primes and all that in assembly
language.
Fr. Robert: Fantastic. Of course people can always
find you on Security Now, Tuesdays at 1pm, moving to 1:30 in March. You can
find him there, and if you want to see what he's
developing you have to stop by grc.com. Which, by the way, is a discussion in
itself, how early you had to be on the internet to get a 3 letter domain. But
there you're going to find SpinRite, which is, bar none, if I could only have
one tool with me when I go out on a troubleshooting ticket, that's it. It
solves hardware problems, it solves storage problems. So SpinRite, and you're up to SpinRite 6.0 now.
Steve: We're at 6.0 now, we're going to be doing a 6.1, and probably a 6.2 afterwards. I want to take
responsibility for the fact that I haven't updated it in a decade and things
have changed in 10 years. So I'm going to make 6.1 and 6.2 free for everyone.
6.1 will no longer use the BIOS. It'll go directly to the hardware, and I've
seen something like half a terabyte per hour of performance. I've already got the bare metal new code running and we're
benchmarking it. So that allows you to do a 4 terabyte drive in 8 hours, which is a huge performance improvement. It'll run natively
on the Mac. The only reason it doesn't now is that the Mac has a USB-esque
keyboard, and I'll be able to work with that. And then 6.2 is going to add deeper USB support. And then my plan is, depending on what else comes up in the meantime, to do a version 7 that will do
a whole bunch of things beyond what SpinRite has ever done before.
Fr. Robert: And of course, you’re still working on
SQRL which, as we know, could potentially replace passwords. Change the way we
think about authentication.
Steve: It has the potential to do it. If the
industry picks it up, it can work. The demo is now online and working, we’re in
the process of polishing it now, so I don't have links to it yet, but we'll be
talking about it in the future. And it is a viable, feasible, complete replacement
for the whole user name and password mess.
Fr. Robert: Steve Gibson, I believe that people are
now saying that you are Breaking Bad Gibson. And I believe Bryan has something
for us to remember: you are the one who compiles. Thank you for being on Coding
101.
Steve: Thanks so much, my pleasure.
Fr. Robert: We also want to thank our super special
guest co-host, Lou Maresca, Senior Development Lead at Microsoft.
Lou, could you tell the people where they can find
you?
Lou: On twitter, @LouMM,
and about me, LouMM. And check out soon, LouisM.com. For some of my projects
that are coming out. Hopefully they’ll be gold nuggets coming out of my head
too. And then my work during the day is at crm.dynamics.com.
Fr. Robert: Don’t forget you can find the notes for
every episode. Links to the stories that we talk about, and when we do the
coding episodes, if you want to find our code examples and assets, just go to
our show page. You can find us at twit.tv/coding. There you'll find all of our
back episodes. We've officially done a year plus of episodes of Coding 101. And
it’s a good place for you to find entire modules. If you want to find a C Sharp
module, or a PHP module, or our upcoming modules on embedded programming,
that’s where you want to go. Also don’t forget we have a G+ group. If you go
into G+ and look for Coding 101, you’ll be able to find us. It’s a great place
to find out what’s been going on in our community. It’s filled with experts,
beginners and intermediate programmers. So if you’ve got a question or an answer,
go ahead and join up. We do this show live every Thursday at 1:30pm, soon to
be 2:30 on Mondays. If you are watching live, you can jump into our chatroom at
irc.twit.tv. I want to thank everyone who makes this show possible, to Lisa and
Leo for letting me do this show, and also a super special thanks to my TD,
Mr. Cranky Hippo himself, Bryan Burnett. Bryan, where can the folks find you
on the TWiT TV network?
Bryan: On Twitter, @Cranky_Hippo.
Fr. Robert: Until next time, I’m Father Robert
Ballecer, next week we're starting our embedded programming module: 4
episodes where we'll be showing you how to take an ATmega chipset from
the Arduino and turn it into something useful in the real world, something that requires
both hardware and software skills. After that I believe we have plans for a
Ruby module. So we just did our Steve Gibson super ivory tower, super
theoretical segments; we're now going to get right back into the trenches with
you to teach you some of the latest and greatest in programming. Until next
time, end of line!