---
title: JavaScript
date: Summer 2020
author: Thilan Tran
mainfont: Libertinus Serif
monofont: Iosevka
fontsize: 14pt
toc: true
documentclass: extarticle
header-includes: |
    \definecolor{Light}{HTML}{F4F4F4}
    \let\oldtexttt\texttt
    \renewcommand{\texttt}[1]{
        \colorbox{Light}{\oldtexttt{#1}}
    }
    \usepackage{fancyhdr}
    \pagestyle{fancy}
---
\newpage{}
- JavaScript (JS) is a programming language that is one of the core technologies on the Internet (alongside HTML and CSS):
    - JS is used for handling client-side behavior of web applications, and allows for interactive web pages
    - as a programming language:
        - JS is a multi-paradigm language that is imperative, functional, and event-driven:
            - JS features a C-style syntax, dynamic typing, first-class functions, and prototype-based object-orientation (rather than class-based)
            - JS uses just-in-time compilation (like Java)
        - JavaScript and Java are distinct languages, similar only in name and syntax
        - JS is specified by the ECMAScript (ES) specification
- note that there are key JavaScript features that are not JavaScript language features:
    - eg. JavaScript is written to run in and interact with browsers, primarily using the DOM API:
        - eg. in `var el = document.getElementById("foo")`, the DOM API is not controlled by the JS specification or provided by the JS engine
            - `getElementById` is a built-in method provided by the DOM from the browser, which may be implemented in JS or, traditionally, in C/C++
    - eg. input and output:
        - `alert` and `console.log` are again provided by the browser, not the JS engine itself
\newpage{}
- primitive builtin types:
    - `string` eg. `"hello world"`
        - single vs. double-quotes are a purely stylistic distinction
    - `number` eg. `42`
    - `boolean` eg. `true` and `false`
    - `null` and `undefined`
        - the `undefined` value is behaviorally no different from an uninitialized variable
    - `object` eg. in literal form `var obj = { foo: "bar" }`, or in constructed form `var obj = new Object()` and `obj.foo = "bar"`
        - literal and constructed form result in exactly the same sort of object
        - an object is a compound value with properties ie. named locations
        - properties are accessed through dot-notation `obj.foo` or bracket-notation `obj["foo"]`
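The two property access forms can be sketched as follows (names are illustrative):

```javascript
var obj = { foo: "bar" };

var viaDot = obj.foo;        // "bar"
var viaBracket = obj["foo"]; // "bar"

// bracket-notation also allows computed property names
var key = "foo";
var viaComputed = obj[key];  // "bar"
```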
- conversion between types is done through explicit and implicit coercion:
    - with explicit coercion, the type cast is explicitly specified in the code eg. `var a = Number("42")`
    - with implicit coercion, the type cast occurs as a non-obvious side effect of some other operation eg. `var a = "42" * 1` coerces a string to a number implicitly
        - note that arrays are default coerced to strings by joining the values with `,` in between
            - eg. `[1,2,3] == "1,2,3"` is true
        - while objects are default coerced to the string `"[object Object]"`
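A small runnable sketch of explicit vs. implicit coercion (variable names are illustrative):

```javascript
// explicit coercion: the cast is visible in the code
var n = Number("42");      // 42 (a number)

// implicit coercion: the cast is a side effect of the * operator
var m = "42" * 1;          // 42 (a number)

// arrays default-coerce to strings by joining values with commas
var s = String([1, 2, 3]); // "1,2,3"
```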
- wrapper objects:
    - wrapper objects ie. "natives" pair with their corresponding primitive type to define useful builtin functions
        - eg. the string in `"hello".toUpperCase()` is automatically wrapped or "boxed" into the `String` object that supports various useful string operations
        - other wrapper objects include `Number` and `Boolean`
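For instance, boxing in action (a minimal sketch):

```javascript
// the primitive string is temporarily boxed into a String object,
// giving access to String.prototype methods
var upper = "hello".toUpperCase(); // "HELLO"

// the same boxing applies to number primitives
var fixed = (3.14159).toFixed(2);  // "3.14"
```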
- arrays and functions are specialized object subtypes:
    - arrays are objects that hold values of any type in numerically indexed positions:
        - eg. for `var arr = ["hello world", 42, true]`, `arr[0]` gives `"hello world"` and `arr.length` gives 3
    - functions are also an object subtype:
        - note however that `typeof func` gives `"function"` not `"object"`
        - as objects, functions can also have properties
    - as first-class values, functions are values that can be assigned to variables:
        - JS has anonymous function expressions and named function expressions
            - eg. `var foo = function() {}` vs. `var foo = function bar() {}`
    - an immediately invoked function expression (IIFE) is another way to execute a function expression immediately:
        - eg. `var x = (function foo() { console.log(42); return 1; })()` immediately prints 42 and assigns 1 to `x`
        - the first outer `()` prevents the expression from being treated as a normal function declaration
        - the next `()` immediately executes the function
        - often used to declare variables that do not affect the surrounding code
            - the declared function is not accessible outside of the IIFE
- identifiers in JS are `[a-zA-Z$_][a-zA-Z$_0-9]*`:
    - nontraditional character sets such as Unicode are also supported
    - excepting reserved words such as `for, in, if, null, true, false`
- "truthy" and "falsy" values are automatically coerced to their corresponding boolean values by JS:
    - the complete list of JS falsy values is: `""`, `0`, `-0`, `NaN`, `null`, `undefined`, and `false`
    - anything that is not falsy is truthy:
        - note that empty arrays and objects coerce to true, as well as functions
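A quick check of these coercion rules using `Boolean` (a minimal sketch):

```javascript
// every falsy value coerces to false in a boolean context
var allFalsy = [false, 0, -0, NaN, null, undefined, ""].every(
    function (v) { return !Boolean(v); }
);

// empty arrays, empty objects, and functions are all truthy
var allTruthy = [[], {}, function () {}].every(Boolean);
```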
- there are four equality operators in JS: `==, ===, !=, !==`:
    - double equals checks for value equality with coercion allowed
        - eg. `"42" == 42` is true
        - note that `null` is a special case that is equal to `null` or `undefined` only
            - eg. `null == ""` is false
    - while triple equals or "strict equality" checks for value equality without allowing coercion
        - ie. checking both value and type equality
        - eg. `"42" === 42` is false
    - for non-primitive values like objects, `==` and `===` check if the references match, rather than the underlying values
        - eg. `[1,2,3] == [1,2,3]` is false
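These equality rules can be exercised directly (variable names are illustrative):

```javascript
// loose equality coerces the string to a number before comparing
var loose = ("42" == 42);            // true

// strict equality requires matching types
var strict = ("42" === 42);          // false

// two distinct arrays are different references, even with equal contents
var refs = ([1, 2, 3] == [1, 2, 3]); // false

// but two references to the *same* array are equal
var a = [1, 2, 3];
var b = a;
var same = (a == b);                 // true
```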
- there are four relational comparison operators in JS: `<, >, <=, >=`:
    - usually used with numbers, as well as strings
    - there are no strict comparison operators
    - like equality, coercion rules apply:
        - eg. `41 < "42"` is true
        - eg. `"42" < "43"` is true lexicographically
        - eg. `42 < "foo"` and `42 > "foo"` are both false since `"foo"` is coerced to `NaN`, which is neither greater nor less than any other value
            - note that `NaN` does not equal anything, even itself
        - eg. `42 == "foo"` is false
Example comparison coercions:

```javascript
true + false;       // 1 + 0 -> 1
[1] > null;         // "1" > 0 -> 1 > 0 -> true
"foo" + + "bar";    // "foo" + (+"bar") -> "foo" + NaN -> "fooNaN"
[] + null + 1;      // "" + null + 1 -> "null" + 1 -> "null1"
{} + [] + {} + [1]; // +[] + {} + [1] -> 0 + "[object Object]" + [1] ->
                    // "0[object Object]1"
! + [] + [] + ![];  // (!+[]) + [] + (![]) -> !0 + [] + false ->
                    // true + "" + false -> "truefalse"
```

- the `var` keyword declares a variable belonging to the current function scope, or the global scope if at the top level:
    - JS also uses nested scoping, where a declared variable is also available in any lower ie. inner scopes
        - inner scopes ie. nested functions
    - without `var`, the variable is implicitly declared as an auto-global
        - can use strict mode with the `"use strict";` declaration, which throws errors such as disallowing auto-global variables
    - in ES6, block scoping can be achieved instead of function scoping using the `let` declaration keyword
        - allows for a finer granularity of variable scoping
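A minimal sketch contrasting function-scoped `var` with block-scoped `let` (function names are illustrative):

```javascript
// var is function-scoped: the declaration leaks out of the block
function usingVar() {
    if (true) {
        var x = 1;
    }
    return x; // 1, x is visible throughout the function
}

// let is block-scoped: the declaration stays inside the block
function usingLet() {
    if (true) {
        let y = 1;
    }
    return typeof y; // "undefined", y is not visible here
}
```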
- in JavaScript, whenever `var` appears inside a scope, that declaration is automatically taken to belong to the entire scope:
    - this behavior is called hoisting ie. a variable declaration is conceptually moved to the top of its enclosing scope
    - variable hoisting is usually avoided, but function hoisting is a more commonly used practice

Illustrating hoisting:

```javascript
var a = 2;
foo(); // works, because the foo declaration is *hoisted*
function foo() {
    a = 3; // refers to the inner a, whose declaration is *hoisted*
    console.log(a); // 3
    var a;
}
console.log(a); // 2
```

- closures are a way to remember and continue accessing the variables in a function's scope even once the function has finished running:
    - an essential part of currying in functional programming languages
    - closures are also commonly used in the module pattern
        - allows for defining private implementation details, with a public API
Closure example:

```javascript
function makeAdder(x) {
    function add(y) {
        return y + x;
    }
    return add;
}
var plusOne = makeAdder(1);  // returns ref to inner add that has bound x to 1
var plusTen = makeAdder(10); // returns ref to inner add that has bound x to 10
plusOne(41); // gives 42
plusTen(41); // gives 51
```

Module example:

```javascript
function User() {
    var username, password;
    function doLogin(user, pw) {
        username = user;
        password = pw;
        // ...
    }
    var publicApi = { login: doLogin };
    return publicApi;
}
var bob = User(); // not new User(), User is just a function
bob.login("bob", "1234"); // binds variables from the *instantiation* of User
                          // even though User function itself has returned
```

- the `this` keyword in a function points to an object:
    - which object it points to depends on how the function was called
        - dynamically bound
    - `this` does not refer to the function itself
        - not exactly an object-oriented mechanism
`this` example:

```javascript
function foo() {
    console.log(this.bar);
}
var bar = "global";
var obj1 = {
    bar: "obj1",
    foo: foo
};
var obj2 = {
    bar: "obj2"
};
foo();          // "global", this set to global object in non-strict mode
obj1.foo();     // "obj1", this set to obj1
foo.call(obj2); // "obj2", this set to obj2
new foo();      // undefined, this set to brand new empty object
```

- the prototype mechanism in JavaScript allows JS to use an object's internal prototype reference to find another object to look for a missing property on:
    - ie. a fallback when an accessed property is missing
    - the internal prototype reference linkage occurs when the object is created
    - could be used to emulate a fake class mechanism with inheritance, but more naturally is used for the delegation design pattern
Prototype example:

```javascript
var foo = { a: 42 };
var bar = Object.create(foo); // creates bar and links it to foo
bar.b = "hello";
bar.b; // "hello"
bar.a; // 42, delegated to foo
```

- JavaScript as a language has been constantly evolving:
    - ECMAScript specifications change, currently on ES6
        - older browsers do not fully support ES6 JS
    - two methods to achieve backwards compatibility with older versions: polyfilling and transpiling
- a polyfill takes the definition of a newer feature and produces a piece of code that is equivalent behavior-wise, but is still able to run on older JS environments:
    - note that some features are not fully polyfillable
    - different polyfill libraries are available for ES6, eg. `ES6-Shim`

Example polyfill for Number.isNaN for ES6:

```javascript
if (!Number.isNaN) {
    Number.isNaN = function isNaN(x) {
        return x !== x; // NaN is not equal to itself
    };
}
```

- transpiling converts newer code into older code equivalents:
    - there is no way to polyfill new syntax added in new ES versions
        - source code with new syntax must instead be transpiled into an old syntax form
    - the transpiler is inserted into the build process, like the code linter or minifier
    - eg. Babel and Traceur transpile ES6+ into ES5
Transpiling ES6 default parameter values:

```javascript
// in ES6:
function foo(a = 2) { console.log(a); }

// transpiled:
function foo() {
    var a = arguments[0] !== (void 0) ? arguments[0] : 2;
    console.log(a);
}
```

\newpage{}
- there are seven builtin JavaScript types: `null, undefined, boolean, number, string, object, symbol`
- the `typeof` operator inspects the type of the given value:
    - eg. `typeof undefined === "undefined"`, `typeof 42 === "number"`
    - note that `typeof` gives `"function"` for functions, even though functions are a subtype of object
    - note the special case `typeof null === "object"`
        - to check for `null`, note that it is the only falsy value that `typeof` returns `"object"` for
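A small sketch of the `typeof` results described above:

```javascript
var results = [
    typeof undefined,      // "undefined"
    typeof 42,             // "number"
    typeof "abc",          // "string"
    typeof function () {}, // "function", the special function case
    typeof null            // "object", the long-standing quirk
];

// the reliable null check combines the falsy test with typeof
var value = null;
var isNull = !value && typeof value === "object"; // true
```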
- variables that have the value `undefined` have no value currently:
    - note that undefined is distinct from undeclared
        - an undefined variable has been declared, but at the moment has no value in it
    - interestingly, `typeof` on an undeclared and an undefined variable gives `"undefined"` for both
        - however, `typeof` does fail for temporal deadzone references
    - thus to perform a global variable check:
        - `if (typeof DEBUG !== "undefined")` works, while `if (DEBUG)` throws an error if undeclared
        - alternatively, `if (window.DEBUG)`, since an object property check does not throw an error
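A minimal sketch of the undefined vs. undeclared distinction (`NEVER_DECLARED` is an illustrative name that is deliberately never declared):

```javascript
var declared;

// declared but unassigned: typeof gives "undefined"
var a = typeof declared;       // "undefined"

// never declared at all: typeof still gives "undefined" instead of throwing
var b = typeof NEVER_DECLARED; // "undefined"

// but reading an undeclared variable directly throws a ReferenceError
var threw = false;
try {
    NEVER_DECLARED;
} catch (e) {
    threw = e instanceof ReferenceError;
}
```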
- JavaScript arrays are containers for any type of value:
    - no need to presize arrays; arrays start at length 0 and can resize as values are added
    - arrays are numerically indexed, but as objects, they can still have string keys and properties added:
        - these properties do not count towards the `length` property of the array
        - unless the string value can be coerced to a number, in which case it will be treated as a numeric index
    - array gotchas:
        - sparse arrays have empty or missing slots
        - setting the length of an array without setting explicit values implies that the slots exist:
            - ie. implicitly creates empty slots, can also be done with `new Array(len)`
            - has issues with serialization in browsers, as well as certain array operations failing
                - eg. `map` will fail because there are no slots to iterate over, while `join` works because it only loops up to the `length`
        - using `delete` on an array value will remove that slot from the array, but the `length` property is not updated
    - to create an array from array-like objects (such as DOM queries, etc.):
        - borrow `slice` on the value eg. `Array.prototype.slice.call(arrLikeObj)`

Illustrating array nuances:

```javascript
var a = [];
a.length;     // gives 0
a[0] = 1;
a["2"] = [3];
a["foo"] = 2;
a[1];         // gives undefined
a[2];         // gives [3]
a.foo;        // gives 2
a.length;     // gives 3
```
- JavaScript strings are very similar to arrays of characters:
    - both are array-like, have a `length` property, and have `indexOf` and `concat` methods
    - however, strings are immutable:
        - individual characters can be accessed but not set using array indexing or `charAt`
        - thus string methods create and return new strings, while array methods perform changes in-place
    - nonmutation array methods can be borrowed on strings:
        - eg. `Array.prototype.join.call` or `Array.prototype.map.call`
        - however, borrowing mutator methods such as `Array.prototype.reverse.call` will fail since strings are immutable
    - hack for reversing strings: `str.split("").reverse().join("")`
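For instance, method borrowing and the reversal hack (a minimal sketch):

```javascript
// borrowing a nonmutating array method on a string works
var joined = Array.prototype.join.call("abc", "-"); // "a-b-c"

// the split/reverse/join hack for reversing a (simple) string
var reversed = "hello".split("").reverse().join(""); // "olleh"
```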
- string methods (from the wrapper `String.prototype`):
    - none of these methods modify the string value in place
    - `indexOf, charAt`
    - `substr, substring, slice, trim`
    - `toUpperCase, toLowerCase`
- JavaScript has just one numeric type that includes both integer and decimal numbers:
    - in JS, there are no true integers, as in other languages
        - all numbers are stored in IEEE floating point
    - supports exponential form, as well as `0x 0b 0o` forms for hex, binary, and octal, respectively
    - the automatic boxing of primitive numbers into the `Number` wrapper gives access to methods:
        - `toFixed` specifies how many decimal places to represent the value
        - `toPrecision` specifies how many significant digits to represent the value
    - `Number.isInteger` tests if a value is an integer, and `Number.isSafeInteger` tests if a value is a safe integer
    - note that `.` will be interpreted as a numeric character before a property accessor:
        - `42.toFixed(3)` is a syntax error, while `42..toFixed(3)` is not
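A small sketch of these `Number` methods (values are illustrative):

```javascript
var fixed = (42).toFixed(3);          // "42.000", three decimal places
var precise = (42.59).toPrecision(3); // "42.6", three significant digits

// 42..toFixed(3) also parses: the first . is part of the number literal
var doubleDot = 42..toFixed(3);       // "42.000"

var ints = Number.isInteger(42) && !Number.isInteger(42.3); // true
```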
- the infamous side effect of floating point representation is rounding error:
    - `0.1 + 0.2 === 0.3` is false
    - small decimal values should be compared with respect to a tolerance value for rounding error:
        - this tolerance value is `Number.EPSILON` or $2^{-52}$ for JavaScript specifically
- number ranges:
    - `Number.MIN_VALUE` and `Number.MAX_VALUE` for floating point values
    - `Number.MIN_SAFE_INTEGER` and `Number.MAX_SAFE_INTEGER` for integers
- note that some numeric operations such as bitwise operators are only defined for 32-bit numbers:
    - to force a number value `a` to a 32-bit signed integer value, use `a | 0` as a bitwise no-op
- nonvalue values `null` and `undefined`:
    - different ways to distinguish between `null` and `undefined`:
        - `null` is an empty value, while `undefined` is a missing value
        - `null` had a value and doesn't anymore, while `undefined` hasn't had a value yet
    - note that `undefined` is a valid identifier, while `null` is not
    - the `void` operator voids out any value so that the result of the expression is always `undefined`:
        - eg. `void 0`, `void true`, `undefined` are all identical
        - can be used to ensure an expression has no result value, even if it has side effects
- `NaN` or "not a number" represents invalid or failed numbers:
    - note that `typeof NaN === "number"`
    - `NaN` is never equal to itself
        - can check for `NaN` with `Number.isNaN`
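A minimal sketch of these `NaN` behaviors:

```javascript
var bad = 2 / "foo";              // NaN, a failed numeric operation

var stillNumber = typeof bad;     // "number"
var selfEqual = (bad === bad);    // false, NaN never equals itself
var detected = Number.isNaN(bad); // true
```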
- infinities:
    - `1 / 0 === Infinity` ie. `Number.POSITIVE_INFINITY`
    - `-1 / 0 === -Infinity` ie. `Number.NEGATIVE_INFINITY`
- zeros:
    - JavaScript has positive and negative zeros
        - eg. `0 / -3 === -0` and `0 * -3 === -0`
    - note that stringifying a negative zero value always gives `"0"`
        - but the reverse operations result in `-0`, eg. `+"-0" === -0`
    - in addition, note that `0 === -0`
    - can check for `NaN` and `-0` using `Object.is(a, b)`
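For instance, distinguishing `-0` (a minimal sketch):

```javascript
var negZero = 0 / -3;

var looksEqual = (negZero === 0);           // true, === cannot tell them apart
var stringified = String(negZero);          // "0"
var distinguished = Object.is(negZero, -0); // true
var notPlain = Object.is(negZero, 0);       // false
```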
- in JavaScript, there are no pointers, so references work differently from other languages:
    - the type of a value alone controls whether that value is assigned by value-copy or reference-copy:
        - primitives always assign by value-copy, including `null, undefined, symbol`
        - compound values like `object, array, function` and wrappers always assign by reference-copy
            - thus changes are reflected in the shared value when using either reference

Illustrating reference-copy nuances:

```javascript
function foo(x) {
    x.push(4);
    x = [4, 5, 6]; // this assignment does *not* affect
                   // where the initial reference points
    x.push(7);
}
function bar(x) {
    x.push(4);
    x.length = 0;       // empty array in-place
    x.push(4, 5, 6, 7); // mutate array
}
var a = [1, 2, 3];
foo(a);
a; // gives [1, 2, 3, 4] and not [4, 5, 6, 7]
var b = [1, 2, 3];
bar(b);
b; // gives [4, 5, 6, 7] and not [1, 2, 3, 4]
```
- in order to pass a compound value by value-copy:
    - must manually make a copy of it, so the passed reference no longer points to the original
    - eg. `foo(a.slice())`
- in order to pass a primitive value in a way that its value updates can be seen, like a reference:
    - must wrap the value in another compound value that can be passed by reference-copy
    - note that we cannot simply use the primitive's wrapper class:
        - the underlying scalar primitive in a wrapper is immutable
- natives are builtins that, when construct-called, create an object wrapper around the primitive value:
    - eg. `String Number Boolean Array Object Function Symbol`
        - as well as `RegExp Date Error`
    - because primitive values don't have properties or methods, natives are needed to wrap the value
        - JS will automatically box primitive values to fulfill property accesses
        - eg. `new String("abc")` creates a wrapper object for the primitive string
    - note that boxing a boolean false creates a truthy value, since objects are truthy
    - unboxing can be done with the `valueOf` method, or can happen implicitly when the native becomes coerced
    - while primitives become wrapped, arrays, objects, functions, and RegEx values are the same, whether created literally or with the constructor form:
        - ie. there is no unwrapped value
- although most of the native prototypes are plain objects:
    - `Function.prototype` is an empty function
    - `RegExp.prototype` specifies an empty RegEx
    - `Array.prototype` is an empty array
- the `[[Class]]` property is a classification for values that are `typeof` object:
    - the property can be accessed by borrowing `Object.prototype.toString` on the value
    - eg. `Object.prototype.toString.call([1, 2, 3])` gives `"[object Array]"`
    - primitive values are boxed, eg. `Object.prototype.toString.call(42)` gives `"[object Number]"`
    - note that the `[[Class]]` values for `null` and `undefined` are `"[object Null]"` and `"[object Undefined]"`, even though no such native wrappers exist
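A small sketch of borrowing `Object.prototype.toString`:

```javascript
var classes = [
    Object.prototype.toString.call([1, 2, 3]), // "[object Array]"
    Object.prototype.toString.call(42),        // "[object Number]", boxed first
    Object.prototype.toString.call(null),      // "[object Null]"
    Object.prototype.toString.call(undefined)  // "[object Undefined]"
];
```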
\newpage{}
- converting a value between types is called type casting when done explicitly, and coercion when done implicitly:
    - alternatively, type casting occurs at compile time, and type coercion occurs at runtime
    - note that JavaScript coercions always result in one of the scalar primitive values `string, number, boolean`
- abstract value operations specify the internal conversion rules used by JavaScript:
    - eg. `ToString, ToNumber, ToBoolean, ToPrimitive`
    - note that `ToString` is distinct from the `toString` method
- when any non-string is coerced to a string representation, `ToString` is used:
    - builtin primitive values have natural stringification, eg. `null` becomes `"null"`
        - note that very small or large numbers may be represented in exponent form
    - for regular objects, `ToString` uses the default `toString` which returns the internal `[[Class]]`
        - unless an object has its own `toString` method
        - eg. arrays have an overridden default `toString` that stringifies as the concatenation of its values, with a comma between each value
            - eg. `[1, 2, 3].toString()` gives `"1,2,3"`
- similarly, `JSON.stringify` is used to serialize a value to a JSON-compatible string value:
    - an optional second argument acts as a replacer:
        - an array or function that handles filtering certain object properties in the JSON
    - an optional third argument called space:
        - a number of spaces to use for indentation, or a string to replace spaces for indentation
    - values that are not JSON-safe have special cases:
        - `JSON.stringify` will automatically omit `undefined`, function, and symbol values:
            - in an array, the value is replaced by null
            - if a property of an object, that property is excluded
        - attempting to JSON stringify an object with circular references throws an error
        - if an object value has a `toJSON` method defined, this method is called first to get a custom JSON-safe value for serialization
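A minimal sketch of these serialization rules (the object shape is illustrative):

```javascript
var obj = { a: 42, b: undefined, c: function () {}, d: [1, undefined] };

// undefined and function properties are excluded from objects,
// but replaced by null inside arrays
var plain = JSON.stringify(obj); // '{"a":42,"d":[1,null]}'

// a replacer array filters which properties are serialized
var filtered = JSON.stringify(obj, ["a"]); // '{"a":42}'
```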
- when any non-number is coerced to a number, `ToNumber` is used:
    - eg. `true` becomes 1, `undefined` becomes `NaN`, `null` becomes 0
    - for string values, `ToNumber` emulates the rules for numeric literals, except if it fails, the result is `NaN` instead of an error
    - objects and arrays are first converted to their primitive value equivalent using `ToPrimitive`:
        - `ToPrimitive` checks if the value has a `valueOf` method, and if that method returns a primitive value, that is used for the coercion
        - otherwise, `toString` will provide the value for the coercion, if present
        - if neither operation can provide a primitive, then an error is thrown
- when any non-boolean is coerced to a boolean, `ToBoolean` is used:
    - note that unlike other languages, `1` is not identical to `true`, and `0` is not identical to `false`
    - `ToBoolean` coerces all falsy values to `false` and all other values to `true`
        - the falsy values are: `undefined, null, false, +0, -0, NaN, ""`
    - all other values are truthy:
        - eg. all objects, even wrappers of falsy primitives
        - eg. `"false", "0", "''", [], {}, function(){}` are all truthy
    - note there are some falsy objects that come from outside of JavaScript:
        - eg. `document.all` is a falsy object
- to cast between strings and numbers, the builtin `String` and `Number` functions can be used, without the `new` keyword:
    - they use the abstract `ToString` and `ToNumber` operations defined earlier
    - other ways of explicit conversion:
        - calling `toString` (which wraps primitive values in a native first)
        - using the unary operators `+` and `-`:
            - special parsing rules prevent confusion with increment and decrement operators
            - unary `+` can also be used to coerce a `Date` object into a number
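A small sketch of these explicit conversions:

```javascript
var s = String(42);              // "42"
var n = Number("42");            // 42
var viaMethod = (42).toString(); // "42", the primitive is boxed first
var viaUnary = +"42";            // 42

// unary + coerces a Date to its timestamp
var ts = +new Date(0);           // 0
```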
- similarly to coercing between strings and numbers, JS supports parsing a number out of a string's contents:
    - using `parseInt` and `parseFloat`
        - the second argument takes the base to parse the number in
    - unlike coercion, parsing is tolerant of non-numeric characters and stops parsing when one is encountered, instead of giving `NaN`
    - these parse methods are designed to work on strings, so a non-string value passed as an argument is automatically coerced to a string first

Parsing gotchas:

```javascript
parseInt(1/0, 19);      // 18, coerced to "Infinity", I in base-19 is 18
parseInt(0.0000008);    // 8, coerced to "8e-7"
parseInt(parseInt, 16); // 15, coerced to "function ...", f in base-16 is 15
```

- to coerce from non-booleans to booleans, `Boolean` without `new` can be used:
    - the unary `!` negate operator also explicitly coerces to boolean, while flipping the value
        - thus the double-negate `!!` can also be used to coerce to booleans
    - note that implicit boolean coercion would occur in a boolean context such as an `if` or ternary statement
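For instance (a minimal sketch):

```javascript
var viaBoolean = Boolean("hello"); // true
var negated = !"hello";            // false, coerce then flip
var doubled = !!"hello";           // true

var emptyString = !!"";            // false, "" is falsy
```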
- generally, the `+` binary operator performs string concatenation if either operand is a string, and otherwise numeric addition:
    - however, when an object is an operand, the `ToPrimitive` operation is used on the object:
        - which calls `valueOf` and then `toString` in an attempt to stringify the operand

Implicit coercion with +:

```javascript
[1, 2] + [3, 4]; // "1,23,4"
42 + "";         // "42"
"" + 42;         // "42"
var a = {
    valueOf: function() { return 42; },
    toString: function() { return 4; }
};
a + "";    // "42", using ToPrimitive
String(a); // "4"
[] + {};   // "[object Object]"
{} + [];   // 0, {} is treated as an empty block
```

- on the other hand, `-` is only defined for numeric subtraction:
    - same with `*` and `/`

Implicit coercion with -:

```javascript
"3.14" - 0; // 3.14
[3] - [1];  // 2, coerced to strings and then numbers
[1, 2] - 0; // NaN
```
- for implicit coercion of ES6 symbols:
    - explicit coercion of a symbol to a string is allowed
        - but implicit coercion of a symbol is disallowed and throws an error
    - symbol values cannot coerce to numbers either
    - symbols do explicitly and implicitly coerce to boolean true
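A minimal sketch of these symbol coercion rules:

```javascript
var sym = Symbol("hi");

// explicit coercion to a string is allowed
var explicit = String(sym); // "Symbol(hi)"

// implicit coercion to a string throws a TypeError
var threw = false;
try {
    var implicit = sym + "";
} catch (e) {
    threw = e instanceof TypeError;
}

// symbols coerce to boolean true
var asBool = !!sym; // true
```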
- implicit coercion to boolean values is the most common form:
    - the following expression operations force a boolean coercion:
        - the test in an `if` statement
        - the test in a `for` header
        - the test in `while` and `do..while` loops
        - the test in ternary expressions
        - the lefthand operand in `||` and `&&` operations
- unlike the logical operators in other languages, `||` and `&&` in JS work more like selector operators:
    - rather than returning booleans, these result in the value of one of their operands
    - both perform a boolean test (with `ToBoolean` if necessary) on the first operand:
        - for `||`, if the test is true, the expression results in the value of the first operand, and otherwise the second
        - for `&&`, if the test is true, the expression results in the value of the second operand, and otherwise the first
        - both still perform short-circuiting
    - eg. `42 && "abc"` gives `"abc"`, and `null && "abc"` gives `null`
    - similar to a kind of selecting ternary

Default assignment idiom with ||:

```javascript
function foo(a, b) {
    a = a || "hello";
    b = b || "world";
    console.log(a, b);
}
foo();         // prints "hello world"
foo("a", "b"); // prints "a b"
foo("c", "");  // prints "c world"
```

Guarding idiom with &&:

```javascript
function foo() { console.log(a); }
var a = 42;
a && foo(); // prints 42
```
- JavaScript has two equality operators:
    - `==` AKA loose equality allows coercion in the equality comparison, while `===` AKA strict equality disallows coercion
    - `!=` is the same as the `==` comparison, but negated, and similarly for `!==` with respect to `===`
- both equality operators follow the same initial algorithmic comparison steps:
    - if the two values are of the same type, they are simply and naturally compared via identity
        - exceptions are `NaN` never being equal to itself, and `+0` and `-0` being equal to each other
    - for all objects, two values are equal only if they are both references to the exact same value
        - thus `==` and `===` act the same for two objects
        - no coercion occurs here
    - the remainder of the algorithm is different for loose equality:
        - if the values are of different types, one or both of the values need to be implicitly coerced, so that they end up as the same type
- coercion cases for loose equality:
    - when comparing strings to numbers:
        - the string is implicitly coerced to a number using the `ToNumber` operation
    - when comparing anything to booleans:
        - the boolean is implicitly coerced to a number
        - eg. both `"42" == true` and `"42" == false` are false!
        - to test for truthy values, simply use `if (val)` to implicitly coerce to boolean
    - when comparing `null` and `undefined`:
        - `null` and `undefined` are always equal and coerce to each other
    - when comparing objects to nonobjects:
        - the object is implicitly coerced to a primitive using the `ToPrimitive` operation
        - eg. `42 == [42]` and `new String("abc") == "abc"` are true
Edge cases from modifying native prototypes:

```javascript
var i = 2;
Number.prototype.valueOf = function() {
    return i++;
};
var a = new Number(42);
if (a == 2 && a == 3) {
    console.log("gotcha"); // prints gotcha
}
```

Edge cases in falsy comparisons:

```javascript
// all true comparisons:
"0" == false;
0 == false;
"" == false;
[] == false;
"" == 0;
"" == [];
0 == [];
[] == ![]; // same as [] == false due to unary !
2 == [2];
"" == [null]; // [null] coerces to ""
0 == "\n";    // whitespace strings coerce to 0
```

- coercion cases for relational comparison:
    - `ToPrimitive` coercion is done on both values
    - if either result is not a string, both values are coerced to numbers and compared numerically
    - otherwise, they are compared lexicographically
    - note that for `a <= b`, `b < a` is evaluated and negated instead
        - similarly for `a >= b`, `a < b` is evaluated and negated
Relational comparisons:

```javascript
[42] < ["43"];      // true
["42"] < ["043"];   // false, lexicographically compared
[4, 2] < [0, 4, 3]; // false, "4,2" > "0,4,3"
Number([42]) < Number("043"); // true, numerically compared
{b: 42} < {b: 43};  // false, both are "[object Object]"
{b: 42} > {b: 43};  // false, both are "[object Object]"
{b: 42} == {b: 43}; // also false, object comparison
{b: 42} <= {b: 43}; // true!
{b: 42} >= {b: 43}; // true!
```

\newpage{}
- JavaScript operators all have well-defined rules for precedence and associativity:
    - eg. `&&` has precedence over `||`, and both have precedence over `=`
    - note that the statement-series `,` operator has the lowest precedence
    - eg. assignment and ternaries are right-associative, while most other operators are left-associative
- JavaScript has a feature called automatic semicolon insertion (ASI):
    - JS assumes a semicolon in certain places even if it is omitted
        - only in certain places where the JS parser can reasonably insert one
    - useful for `do..while` loops that require a semicolon after
    - may cause unintended behavior with `return, continue, break, yield`
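The classic `return` hazard can be sketched as follows (function names are illustrative):

```javascript
// ASI inserts a semicolon right after the bare return,
// so the object literal below is never reached
function broken() {
    return
    { value: 42 };
}

function working() {
    return { value: 42 };
}

var a = broken();        // undefined
var b = working().value; // 42
```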
- function argument nuances:
    - there is a TDZ (temporal deadzone) for ES6 default parameter values as well:
        - eg. `function foo(a = 42, b = a + b + 1)` is invalid, while `function foo(a = 42, b = a + 5)` is OK
    - omitting an argument is similar to passing an `undefined` value, except:
        - the builtin `arguments` array will not have entries for omitted arguments
- `try..finally` nuances:
    - the `finally` clause always runs, right after the other clauses finish:
        - but if there is a `return` in a `try` clause, the `finally` clause runs immediately before exiting from the function
        - similarly for throwing errors, `continue`, and `break`
    - a `return` inside a `finally` can also override a previous `return`
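A minimal sketch of these `finally` behaviors (function names are illustrative):

```javascript
// a return inside finally overrides the try's return
function overridden() {
    try {
        return 42;        // evaluated first...
    } finally {
        return "instead"; // ...but this return wins
    }
}

// finally runs before the function actually exits
var order = [];
function traced() {
    try {
        order.push("try");
        return "done";
    } finally {
        order.push("finally");
    }
}

var r = overridden(); // "instead"
var d = traced();     // "done", order is ["try", "finally"]
```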
- `switch` statement nuances:
    - the `default` clause is optional
    - the matching between the cases and main switch expression is identical to the `===` algorithm
    - however, it is possible to still use loose equality with a `true` switch expression
        - note that we are still strictly matching `true`, so truthy values will fail to match

Using coercive equality:

```javascript
var a = "42";
switch (true) {
    case a == 10:
        // ...
        break;
    case a == 42:
        // ...
        break;
}
```

\newpage{}
- scope is the set of rules for storing variables in a location and finding those variables later:
    - scoping has some other uses beyond just determining how to look up variables:
        - scoping can be used for information hiding ie. hiding variables and functions
        - hiding names also avoids collisions between variables with the same names
        - collisions also avoided through use of global namespaces or modules
    - for JS, when and how the scoping rules are set depends on its compilation process
    - traditional compilation process:
        - tokenizing / lexing the source code
        - parsing it into a syntax tree
        - generating machine code from the syntax tree
    - unlike traditional compiled languages, JS is not compiled in advance, it is compiled as the program runs, microseconds before code is executed:
        - less time for optimization
        - must use tricks such as lazy compilation and hot recompilation to be efficient
- the JavaScript engine is responsible for start-to-finish compilation and execution:
    - calls upon the compiler to parse and generate code
    - uses scope in order to retrieve a look-up list of variables and their accessibility rules
        - due to nested scope, if a variable is not found in the immediate scope, the engine consults the next outer containing scope, until the global scope has been reached
        - any variable declared within a scope is attached to that scope
    - eg. for the statement `var a = 2`:
        - compiler will declare a variable (if not previously declared) in the current scope
        - compiler generates code that will be run by the engine that actually looks up the variable in the scope and assigns to it, if found
    - note that the lookup that occurs can be for a LHS variable or a RHS variable:
        - LHS ie. target variable to assign to, eg. `a = 2`
        - RHS ie. source of the assignment, eg. `console.log(a)`
    - note that scope-related assignments will implicitly occur when assigning to function parameters
- LHS and RHS lookups are distinct in behavior when the variable has not been declared:
    - when a RHS lookup fails to find a variable, anywhere in the nested scope, a `ReferenceError` is thrown by the engine
    - when a LHS lookup arrives at global scope without finding a variable:
        - if the program is not in strict mode, the global scope will create a new variable of that name in the global scope and hand it back to the engine
        - in strict mode, implicit global variable creation is disallowed, so a `ReferenceError` is again thrown by the engine
    - note that a `ReferenceError` indicates a scope resolution failure, while other errors at this time indicate scope resolution was successful, but an illegal action was attempted
-
there are two models of scoping, lexical or static scoping and dynamic scoping:
- with lexical scoping, the scoping rules are defined at lexing time ie. compile time:
- based on where variables and blocks of scopes are authored, using nested scoping rules
- no matter where or how a function is invoked, its lexical scope is only defined by where it was declared
- most programming languages, including JavaScript, use lexical scoping rules
- with dynamic scoping, lookup happens dynamically at runtime:
- eg.
thisin JS is dynamically scoped, since its value depends on how its function was called - eg. Bash scripting, some Perl modes
- eg.
- scope lookup stops once the first match is found, and the same identifier name can be shadowed by inner scopes
- with lexical scoping, the scoping rules are defined at lexing time ie. compile time:
-
JavaScript does provide some ways to dynamically modify its lexical scoping rules:
- can lead to dangerous side effects
- eg. using
eval, or the now deprecatedwithexpression- both of these methods are restricted by strict mode
- both of these methods force the compiler to limit or avoid optimizations, so code will run slower
Changing lexical scope with eval:
function foo(str, a) {
eval(str);
console.log(a, b);
}
var b = 2;
foo("var b = 3;", 1); // prints 1, 3

Example of the now deprecated with keyword:
var obj = { a: 1, b: 2, c: 3 };
// tedious reassignment
obj.a = 2;
obj.b = 3;
obj.c = 4;
// with shorthand
with (obj) {
a = 3;
b = 4;
c = 5;
};

Changing lexical scope using with:
function foo(obj) {
with (obj) { a = 2; }
}
var o1 = { a: 3 };
var o2 = { b: 3 };
foo(o1);
console.log(o1.a); // prints 2
foo(o2);
console.log(o2.a); // prints undefined
console.log(a); // prints 2, global has been *leaked*
// with keyword creates a new lexical scope, but a is missing,
// so lookup goes to the global level and creates a new declaration (non-strict)

- ES6 introduced a new syntactic form of function declaration called the arrow function:
    - pros:
        - the "fat arrow" is a shorthand for the `function` keyword
        - performs a lexical binding for `this`, rather than following the normal `this` binding rules
    - cons:
        - arrow functions are all anonymous
Illustrating the problem of lexical scope with this:
var obj = {
id: "foo",
identify: function idFn() {
console.log(this.id);
}
}
var id = "bar";
obj.identify(); // prints foo
setTimeout(obj.identify, 100); // prints bar, this binding is lost
// since this is bound dynamically
// explicit fix:
var obj = {
id: "foo",
identify: function idFn() {
var self = this;
setTimeout(function log() { // have to move setTimeout inside
console.log(self.id);
}, 100);
}
}
// bind fix:
var obj = {
id: "foo",
identify: function idFn() {
setTimeout(function log() { // have to move setTimeout inside
console.log(this.id);
}.bind(this), 100);
}
}
// fat-arrow fix:
setTimeout(() => { obj.identify(); }, 100);

- JavaScript `var` declarations follow function scope, where the declarations within a function are effectively hidden from the outside:
    - ie. follow a scope unit of functions
    - there are several considerations for functions as scope
- function expressions can be anonymous (omitting the name) or named:
    - function declarations cannot omit the name
    - drawbacks to anonymous functions:
        - anonymous functions have no name to display in stack traces
        - without a name, the function can only refer to itself through the deprecated `arguments.callee`
        - without a name, code may be less readable or understandable
    - note that inline functions can still be named, they are not forced to be anonymous
Anonymous vs. named inline functions:
setTimeout(function() {
console.log("1 sec passed");
}, 1000);
setTimeout(function timeoutHandler() {
console.log("1 sec passed");
}, 1000);

- by wrapping a function in parentheses, function expressions and immediately invoked function expressions (IIFE) can be created:
    - useful for avoiding polluting the enclosing scope, since the identifier of the function (if named) is found only in the scope within the IIFE, and is inaccessible outside the IIFE
Variations on IIFEs:
(function() {...})(); // anonymous IIFE
(function() {...}()); // equivalent IIFE
(function IIFE() {...})(); // named IIFE
(function IIFE(global) {...})(window); // passing in arguments to IIFE
(function IIFE(def){ // alternative inverted IIFE definition used in
def(window); // the Universal Module Definition (UMD) project
})(function def(global) {...});

- although functions are the most common unit of scope used in JS, block scoping is another popular scoping unit:
    - used by many languages, eg. C/C++, Java, C#
- pros:
- allows for even more information hiding, at a finer granularity within functions
- allows for more efficient garbage collection and faster reclamation of memory
- easier to add additional, explicit scoped blocks (rather than creating new functions)
    - JavaScript does provide some facilities for achieving block scope: `with`, `try/catch`, `let`, `const`
- the `with` statement is an example of block scope since the created scope is only within the statement, not the enclosing function
- the variable declaration in the `catch` clause of a `try/catch` is block scoped to the `catch` block
- the `let` keyword, introduced by ES6, attaches the variable declaration to the scope of the containing block, specified by brackets:
    - `let` declarations will also not hoist to the entire scope of the block
    - when used in loops, a `let` declaration in the loop header will actually rebind the variable on each iteration of the loop, which is useful for handling closures
    - ES6 also added the `const` keyword, which also creates a block-scoped variable whose value is fixed
Using let loops:
for (let i = 0; i < 10; i++) {
console.log(i);
}
console.log(i); // ReferenceError with let instead of var
// let loop rebinding: (equivalent code to let loop)
{
let j;
for (j = 0; j < 10; j++) {
let i = j;
console.log(i);
}
}

Polyfilling block scope:
{ // ES6
let a = 2;
console.log(a);
}
console.log(a);
// is polyfilled to:
try { throw 2; } catch (a) {
// ES3 catch has block scope!
// alternatively, use an IIFE? isn't an IIFE faster than try/catch?
// IIFE performs faster, but wrapping a function around arbitrary code changes
// the meaning of the code, eg. this, return, break, and continue change meanings
console.log(a);
}
console.log(a);

- generally, a JavaScript program is interpreted line-by-line:
    - this is mostly true, except for the case of declarations
    - the engine will have the compiler compile the code in its entirety (usually) before it interprets ie. runs it:
        - part of the compilation phase is to find and associate declarations with their appropriate scopes
        - thus all declarations are processed first, before any part of the code is executed
- ie. declarations are hoisted or moved from where they appear in the flow of the code to the top of the code
- note that only the declarations themselves are hoisted, not any assignments or other executable logic
- thus function expressions are not hoisted
        - thus `var a = 2` is seen as two distinct statements, `var a;` and `a = 2;`, and only the declaration is hoisted
        - note that functions are always hoisted first, then variables
    - note that declarations appearing inside normal blocks (such as `if-else` blocks) are hoisted to the enclosing scope, instead of being conditional
Illustrating hoisting:
a = 2;
var a;
console.log(a); // prints 2
// declaration is hoisted as:
var a;
a = 2;
console.log(a);
console.log(a); // prints undefined
var a = 2;
// declaration is hoisted as:
var a;
console.log(a);
a = 2;Hoisting in function declarations:
foo(); // prints undefined
function foo() {
console.log(a);
var a = 2;
}
// declarations are hoisted as:
function foo() {
var a;
console.log(a);
a = 2;
}
foo();

Hoisting in function expressions:
foo(); // TypeError
bar(); // ReferenceError
var foo = function bar() {...};
// declarations are hoisted as:
var foo;
foo(); // TypeError since foo has no value yet, due to using function expression
bar(); // ReferenceError since name of named function expression is not accessible
// in the *enclosing* scope
foo = function() { var bar = ...self... };

Hoisting functions first:
foo(); // prints 3, not 1 or 2
var foo;
function foo() { console.log(1); }
foo = function() { console.log(2); };
function foo() { console.log(3); }
// declaration is hoisted as:
function foo() { console.log(1); }
function foo() { console.log(3); } // subsequent declaration overrides previous one
// var foo is a *duplicate* and thus ignored declaration
foo();
foo = function() { console.log(2); };

- note that `let` and `const` are actually still hoisted:
    - all JavaScript declarations are hoisted
- the difference is that there is a temporal dead zone between the hoisted declaration and the actual declaration of the variable for ES6 block scoped variables
- accessing a
letorconstbefore they are declared thus throws aReferenceErrorsince they are accessed in this dead zone
- accessing a
Illustrating hoisting of let:
let x = "outside";
(function() {
// x declaration hoisted here, start of TDZ for x
console.log(x); // throws a ReferenceError instead of printing "outside",
// so x *is* hoisted
// TDZ ends
let x = "inner";
})();

\newpage{}
- closure is when a function is able to remember and access its lexical scope even when that function is executing outside its lexical scope:
    - ordinarily, we would expect the entirety of the scope of a function to go away after execution, when the garbage collector runs:
        - however, with closures, this is not the case, and the scope of a returned function can still be accessed
        - implemented using nesting links, and placing certain call frames on the heap instead of the stack
    - closures happen naturally in JavaScript as a result of writing code that relies on lexical scope:
        - whenever an inner function is transported outside of its lexical scope ie. treated as a first-class value, it maintains a closure reference to its original lexical scope
        - eg. timers, event handlers, AJAX requests, callback functions, etc.
Illustrating closures:
function foo() {
var a = 2;
function bar() {
console.log(a);
} // bar has a *closure* over the scope of foo and rest of its accessible scopes
// ie. bar *closes* over the scope of foo, because bar is nested inside foo
return bar;
}
var baz = foo();
baz(); // prints 2, closure in action here, since baz is executed
// *outside* of its declared lexical scope

Concrete closure examples:
function wait(msg) {
setTimeout(function timer() { console.log(msg); }, 1000);
}
wait("Hello!"); // uses closures
function debugButton(name, selector) {
$(selector).click(function activator() {
console.log("activating " + name);
});
}
// uses closures
debugButton("Continue", "#continue");
debugButton("Quit", "#quit");

Closure and loops:
for (var i = 1; i <= 5; i++) {
setTimeout(function timer() {
// each timer function is closed over same shared *global* scope,
// due to the declaration of i using var
console.log(i); // when each timer runs after setTimeout triggers, i is 6
// the *desired* functionality is to capture a different copy
// of i at each iteration, ie. a per-iteration block scope
}, i*1000);
} // prints 6 6 6 6 6, one 6 each second
// solving with IIFE:
for (var i = 1; i <= 5; i++) {
(function(j) { // use an IIFE to *create* a new lexical scope within global scope
setTimeout(function timer() {
console.log(j);
}, j*1000);
})(i);
} // prints 1 2 3 4 5
// solving using let:
for (let i = 1; i <= 5; i++) {
// let has per-iteration rebinding
setTimeout(function timer() {
console.log(i);
}, i*1000);
} // prints 1 2 3 4 5

- the module code pattern leverages closures in order to reveal a certain public API while hiding implementation details, and requires:
- an outer enclosing function, that must be invoked at least once to create a new module instance
- the enclosing function must return back at least one inner function
- the inner function thus has closure over the private scope
Example module pattern:
function Module() {
var foo = "bar";
var qaz = [1, 2, 3];
function doFoo() { console.log(foo); }
function doQaz() { console.log(qaz.join("!")); }
return {
doFoo: doFoo,
doQaz: doQaz
};
}
var mod = Module();
mod.doFoo(); // prints bar
mod.doQaz(); // prints 1!2!3

- ES6 added first-class syntax support for modules:
    - each file is treated as a separate module
    - modules can import other modules or specific API members, and export their own public API members
    - ES6 module APIs are static, so import errors can be checked at compile time rather than at runtime
- a module can:
exportan identifier to the public API for the current moduleimportone or more members from a module's API into the current scope
\newpage{}
- JavaScript's `this` mechanism allows functions to be reused against multiple different context objects:
    - a more elegant mechanism than explicitly passing along an object or context reference as a parameter
- a common misconception of `this` is that it refers to the function itself or to a function's lexical scope:
    - however, although `this` may point to a calling function, it does not always do so
    - there is also no way to use a `this` reference to look something up in a lexical scope
        - ie. there is no bridge between lexical scopes
- `this` is a dynamic, runtime binding that is contextual based on the conditions of the function's invocation:
    - the `this` reference is a property on the activation record of the function in the call stack
    - ie. based on the function's call-site
Utility of this:
function identify() { return this.name; }
function speak() {
var greeting = "Hello, I'm " + identify.call(this);
console.log(greeting);
}
var me = { name: "Bob" };
var you = { name: "Blob" };
identify.call(me); // prints Bob
speak.call(you); // prints Hello, I'm Blob

Allowing a function to get a reference to itself:
function foo(num) {
console.log(num);
this.count++; // not bound to foo!
}
foo.count = 0;
for (let i = 0; i < 10; i++) {
if (i > 5) {
foo(i);
}
}
console.log(foo.count); // prints 0?
// forcing the binding on this to point to foo:
for (let i = 0; i < 10; i++) {
if (i > 5) {
foo.call(foo, i);
}
}
console.log(foo.count); // prints 4

- the first binding rule, default binding, applies for standalone function invocation:
    - the default, catch-all rule when none of the others apply
    - the default binding points `this` at the global object
        - variables declared in the global scope are synonymous with the global object properties of the same name
    - note that in strict mode, the global object is not eligible for the default binding, so `this` is instead set to `undefined`
Default binding:
function foo() {
console.log(this.a);
}
var a = 2;
foo(); // prints 2

- in implicit binding, the call-site may have a context ie. owning object:
    - the function call is preceded by an object reference
        - implicit binding points `this` to that object
    - note that only the top or last level of an object property reference chain matters to the call-site
    - a problem with implicit binding occurs when an implicitly bound function loses its binding, and falls back to the default binding:
        - occurs commonly with function callbacks
        - some frameworks will also forcefully modify `this` during the callback
        - need a way to fix the `this` binding
Implicit binding:
var obj2 = {
a: 2,
foo: foo // doesn't matter whether foo is defined here or added as a reference
};
var obj1 = {
a: 42,
obj2: obj2
};
obj1.obj2.foo(); // prints 2

Implicit binding loss:
function doFoo(fn) {
fn(); // call-site is what matters, fn becomes another reference to foo
}
var obj = {
a: 2,
foo: foo
};
var a = "global";
doFoo(obj.foo); // prints global, not 2

- explicit binding forces a function call to use a particular object for the `this` binding, without putting a property function reference on the object:
    - uses `call` or `apply`, which both take in an object to use for `this` as the first argument
    - hard binding is a form of explicit binding that fixes the issue of binding loss
        - provided by `bind` in ES5, which returns a new function that is hardcoded to call the original function with the specified `this` context
    - some APIs will provide an optional context parameter that uses a form of explicit binding to use that context
    - `apply` also helps to spread out an array as parameters (replaced by the ES6 spread operator)
    - `bind` is also useful for currying functions
Explicit binding:
var obj = { a: 2 };
foo.call(obj); // prints 2

Hard binding:
var obj = { a: 2 };
var bar = function() {
foo.call(obj); // actual call-site
}
bar(); // prints 2
setTimeout(bar, 100); // also prints 2
bar.call(window); // still prints 2
// simple example hard binding helper:
function bind(fn, obj) {
return function() {
return fn.apply(obj, arguments);
};
}
var bar = bind(foo, obj);
// same functionality provided by ES5 bind function:
var bar = foo.bind(obj);

API calls with context:
function foo(el) {
console.log(el, this.id);
}
var obj = { id: "bar" };
[1, 2, 3].forEach(foo, obj); // prints 1 bar 2 bar 3 bar

- the `new` binding is a special binding rule that is used with the `new` operator:
    - note that the `new` operator in JS has no connection to object-oriented functionality
    - in JS, constructors are just functions that happen to be called with the `new` operator:
        - not attached to classes, nor are they instantiating a class
        - not a special type of function either, more like a construction call of a function
    - in a construction call:
        - a brand new object is constructed
        - the new object is `[[Prototype]]` linked
        - the new object is set as the `this` binding for that function call
        - unless the function returns its own alternate object, the function call will automatically return the new object
new binding:
function foo(a) {
this.a = a;
}
var bar = new foo(2);
console.log(bar.a); // prints 2

- default binding is the lowest priority rule of the four:
    - next, implicit binding has the next lowest priority
    - followed by explicit binding, and then `new` binding with the highest priority:
        - note that `call` and `apply` override a `bind` hard binding
    - in addition, if `null` or `undefined` is passed as a binding parameter, default binding applies instead
        - a safer alternative may be to pass in a "ghost" object instead that is guaranteed to be totally empty
        - eg. `Object.create(null)` is "more empty" than `{}`
    - this may be surprising since the previous hard binding helper does not have a way to override the hard binding, but `new` binding still supersedes it:
        - this is because the builtin ES5 `bind` is more sophisticated, and actually checks if the hard-bound function has been called with `new` or not
        - overriding hard binding is useful because it allows for a function that can construct objects with some of its arguments preset from a `bind`, while ignoring the previously hard-bound `this`
            - ie. helps with partial application and currying
    - note that indirect references to a function can be created, eg. the result value of an assignment expression
        - these obey default binding, rather than another type of binding expected from the assignment expression
    - an alternative binding rule is soft binding, where a function can still be manually rebound via implicit or explicit binding, but has an alternative if the default binding would otherwise apply
        - unlike hard binding, which cannot be manually overridden with implicit binding or explicit binding
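A sketch of `new` superseding a hard binding while keeping the curried argument (names are hypothetical):

```javascript
function foo(p1, p2) {
  this.val = p1 + p2;
}

// the `null` here is ignored in practice, since we only care about presetting p1
var bar = foo.bind(null, "p1");

var baz = new bar("p2"); // `new` supersedes the hard binding...
console.log(baz.val);    // "p1p2" ...but the preset argument is kept
```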
- finally, ES6 introduced a new kind of function that has its own binding rules:
    - instead of following standard `this` binding rules, arrow functions adopt the `this` binding from their enclosing scope
    - this lexical binding cannot be overridden, even with `new`
    - commonly used with callbacks
    - similar in spirit to using `var self = this` to lexically capture `this`, vs. using `this`-style binding with `bind`
Arrow function bindings:
function foo() {
return () => {
console.log(this.a);
};
}
var obj1 = { a: 2 };
var obj2 = { a: 3 };
var bar = foo.call(obj1);
bar.call(obj2); // prints 2, not 3, not explicitly rebound

\newpage{}

- `object` in JavaScript is one of its primary types:
    - a function is a subtype of object, technically a callable object
    - arrays are also a structured form of object
    - can be created using a literal form, or constructed form
- objects have properties that can be set and accessed:
    - through the `.` or `[]` operator
    - note that property names are always strings, so other property name types will be coerced to strings
    - ES6 adds computed property names, where an expression surrounded by `[]` can be used in the key position of an object literal declaration
        - useful with ES6 `Symbol`s
- note that although functions can be a property of an object, these are not exactly methods that are bound to the object like in other languages:
    - the function property is simply another reference to the function, even if it was declared and defined within the object
    - the only distinction between the references would occur if the function had a `this` reference and an implicit binding was used
- arrays are objects that are numerically indexed:
    - as objects, arrays can have additional named properties, without changing the `arr.length` property
    - note however that property names that coerce to numbers will be treated as numeric indices
- duplicating objects has the issue of shallow vs. deep copies:
- in some situations, deep copies may create an infinite circular duplication, since extra duplications must occur
- while shallow copies will only create new references, instead of additional concrete duplications
    - one copying solution is to duplicate JSON-safe objects:
        - `var newObj = JSON.parse(JSON.stringify(obj))`
        - not always sufficient for objects that are not JSON-safe
    - ES6 provides a shallow copy function:
        - `var newObj = Object.assign({}, obj)`
        - takes a target object, and one or more source objects
        - copies enumerable, owned keys to the target via assignment only, and returns the target
        - the ES6 spread operator does the same, while being slightly more performant
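The shallow nature of `Object.assign` can be sketched as follows (object names are hypothetical):

```javascript
var obj = { a: 1, nested: { b: 2 } };

// shallow copy: top-level values are copied, nested objects are shared by reference
var copy = Object.assign({}, obj);

copy.a = 10;        // does not affect obj
copy.nested.b = 20; // *does* affect obj, since `nested` is the same object

console.log(obj.a);        // 1
console.log(obj.nested.b); // 20
```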
- ES5 provides property descriptors that allow properties to be described with extra characteristics:
    - `Object.getOwnPropertyDescriptor(obj, name)` gets the property descriptor for `obj.name`
    - `Object.defineProperty(obj, name, descriptor)` creates or modifies an existing property with the characteristics in descriptor
    - the descriptor is an object that specifies the `{ value, writable, enumerable, configurable }` characteristics:
        - writing to a non-writable property fails and causes an error in strict mode
        - a configurable property can be updated by `Object.defineProperty`
            - a non-configurable property also cannot be removed with `delete`
        - enumerable controls whether the property will show up in object-property enumerations such as the `for..in` loop or `Object.keys`
            - order of iteration over an object's properties is not guaranteed
            - thus note that `for..in` applied on arrays gives the numeric indices as well as any enumerable properties
            - `Object.getOwnPropertyNames` gives all properties, enumerable or not
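A sketch of how a non-enumerable property behaves (object and property names are hypothetical):

```javascript
var obj = { a: 2 };

Object.defineProperty(obj, "b", {
  value: 3,
  enumerable: false // hidden from enumerations, but still accessible
});

console.log(obj.b);                           // 3
console.log(Object.keys(obj));                // ["a"]
console.log(Object.getOwnPropertyNames(obj)); // ["a", "b"]
console.log("b" in obj);                      // true
```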
- there are different ways to achieve shallow immutability using ES5:
    - combining `writable: false` and `configurable: false` essentially creates a constant that cannot be changed, redefined, or deleted
    - `Object.preventExtensions` prevents an object from having new properties added to it
    - `Object.seal` creates a sealed object, which essentially calls `Object.preventExtensions` and also marks existing properties as `configurable: false`
        - cannot add or delete properties (though existing properties can be modified)
    - `Object.freeze` creates a frozen object, which essentially calls `Object.seal` and also marks existing properties as `writable: false`
        - prevents any changes to the object
- in terms of property accesses, the access doesn't just look in the object for a matching property:
    - instead, according to the spec, the code performs a `[[Get]]` operation that:
        - inspects the object for a property of the requested name
        - if found, returns the value accordingly
        - otherwise, `undefined` is returned instead
            - note that this is different from referencing variables, where a variable that cannot be resolved from lexical scope lookup will give a `ReferenceError`
    - to set a property, the code performs a `[[Put]]` operation that:
        - if the property is an accessor descriptor, calls the setter
        - if the property is not writable, either fails or throws an error
        - otherwise, sets the value to the existing property
        - if the property is not yet present, the operation is even more complex
    - ES5 introduced a way to override part of these default operations on a per-property level:
        - using getters and setters
        - when a property has a getter or setter, its definition becomes an accessor descriptor:
            - an accessor descriptor does not have `value` and `writable` fields
            - has the additional `set` and `get` characteristics
            - in contrast to a normal data descriptor property
        - if only a getter is defined, setting the property later will silently fail
ES5 getters and setters:
var myObj = {
get a() { return 2; }
};
Object.defineProperty(myObj, "b", {
get: function() { return this.a * 2; },
enumerable: true
});
myObj.a; // gives 2
myObj.b; // gives 4

- the `in` operator checks if a property is in an object, or if it exists at a higher level of the `[[Prototype]]` chain object traversal:
    - eg. `("a" in myObj)`
    - note that the `in` operator does not check for values inside a container, just properties
- on the other hand, `myObj.hasOwnProperty` checks if only `myObj` has the property or not, ignoring the prototype chain:
    - however, it is possible for an object to not link to `Object.prototype`, in which case the test will fail:
        - in this case, a more robust check is: `Object.prototype.hasOwnProperty.call(myObj, "a")`
- the `for..of` loop added by ES6 allows for iterating over the values of objects directly:
    - however, it requires an iterator object created by a default `@@iterator` function:
        - the loop then iterates over return values using the iterator object's `next` method
        - iterators act similar to generator functions
    - arrays have this function built in, but it can be manually defined for objects
Defining an iterator for an object:
var myObj = {
a: 2,
b: 3,
[Symbol.iterator]: function() {
var self = this;
var idx = 0;
var ks = Object.keys(self);
return {
next: function() {
return {
value: self[ks[idx++]],
done: (idx > ks.length)
};
}
};
}
};
for (var v in myObj) {
console.log(v, myObj[v]);
} // prints a 2 b 3
for (var v of myObj) {
console.log(v);
} // prints 2 3

- the `[[Prototype]]` property is an internal property of all objects, which is a reference to another object:
    - at creation, almost all objects are given a non-`null` value for this property
    - different operations use a `[[Prototype]]` chain lookup process to find properties:
        - the default `[[Get]]` operation follows the `[[Prototype]]` link of an object if it cannot find the requested property on the object directly
            - if no matching property is ever found by the end of the chain, the return result is `undefined`
        - a `for..in` loop also looks up all enumerable properties that can be reached via an object's chain
            - similarly, the `in` operator will check the entire chain of the object for existence of a property, regardless of enumerability
    - the top of the `[[Prototype]]` chain is usually the builtin `Object.prototype`:
        - this object includes various common utilities, such as `toString`, `valueOf`, and `hasOwnProperty`
Illustrating object chain lookups:
var foo = { a: 2 };
var bar = Object.create(foo); // create object linked to foo
bar.a; // gives 2
for (var k in bar) {
console.log(k);
} // prints a
("a" in bar); // gives true

- shadowing occurs when a property name ends up both on an object and a higher level of the prototype chain starting at that object:
    - the property directly on the object shadows the other property
    - thus there are three scenarios for an assignment `obj.foo = "bar"` when `foo` is at a higher level of the prototype chain:
        - if a normal data accessor property is higher in the chain, and it is not read-only, then a new `foo` property is added directly to `obj`, resulting in a shadowed property
        - if `foo` is higher in the chain but it is read-only, then the setting of the existing property as well as the creation of the shadowed property on `obj` are disallowed, and the assignment silently fails
        - if `foo` is higher in the chain and it is a setter, the setter will always be called
            - no new property is shadowed on `obj`, and the setter is not redefined
        - however, in cases 2 and 3, `Object.defineProperty` can still be used to shadow a property
\newpage{}
-
although JavaScript has some class-like syntactic elements such as `new` and `instanceof`, JS does not actually have classes:
    - however, since classes and object-oriented design are design patterns, it is possible to implement approximations of classical class functionality
- under the surface, these class approximations are not the same as the classes in other languages
- in traditional classes, inheritance and polymorphism are both achieved using some sort of copy behavior:
    - ie. a child class really contains a copy of its parent class, rather than having some sort of referential link to its parent
-
JavaScript's object mechanism does not automatically perform copying behavior when you inherit or instantiate:
- since there are no classes in JavaScript to instantiate or inherit from, only objects
- this missing behavior is emulated using explicit and implicit mixins
- mixins are one way to achieve class-like behavior
-
since JS does not provide a way to copy behavior ie. properties from another object:
    - we can create and use a utility that manually copies these properties, usually called `extend` or `mixin` by libraries and frameworks
        - this mixin approach mixes in the nonoverlapping contents of two objects
- ie. explicit mixin
- pros:
- achieves an approximation of inheritance and polymorphism
- can partially emulate multiple inheritance by mixing in multiple objects
- cons:
- the objects still operate separately due to the nature of copying
- eg. adding properties to one of the objects does not affect the other after the mixin
- JS functions cannot really be duplicated, so a duplicated reference is created instead
- if one of the shared function objects is modified, both objects would be affected via the shared reference
- the objects still operate separately due to the nature of copying
- in the similar parasitic mixin pattern, we initially make a copy of the definition from the parent class ie. object, and then mix in the child class
- we can create and use a utility that manually copies these properties, usually called
-
JS did not support a facility for relative polymorphism (prior to ES6):
    - thus explicit pseudopolymorphism is used in the mixin, in the statement `Vehicle.drive.call(this)`
        - ie. absolutely rather than relatively referencing the `Vehicle` object
    - cons:
        - this pseudopolymorphism creates brittle, manual linking which is very difficult to maintain when compared to relative polymorphism
Mixin utility:
function mixin(src, target) {
for (var key in src) {
if (!(key in target)) {
target[key] = src[key];
}
}
return target;
}
var Vehicle = {
engines: 1,
ignition: function() {...},
drive: function() {...}
};
var Car = mixin(Vehicle, {
wheels: 4,
drive: function() {
Vehicle.drive.call(this);
...
}
});

- implicit mixins are also closely related to explicit pseudopolymorphism:
- essentially borrows functionality from another object's function and calls it in the context of another object
- once again, mixes in behavior from two objects
    - exploiting `this` binding rules
    - still an explicit, brittle call that cannot be made into a more flexible relative reference
Example of implicit mixins:
var Foo = {
qaz: function() {
this.count = this.count ? this.count+1 : 1;
}
}
Foo.qaz();
Foo.count; // gives 1
var Bar = {
qaz: function() {
Foo.qaz.call(this);
}
}
Bar.qaz();
Bar.count; // gives 1, not shared state with Foo

- all functions in JavaScript by default get a public, nonenumerable property called `prototype`, which points at an arbitrary object:
    - each object created from calling `new Obj()` will end up prototype-linked to the `Obj.prototype` object
    - this behavior is similar to the copying of behavior that occurs when instantiating traditional classes:
        - but there is no copying in JS, instead links are created between objects
        - this mechanism is prototypal inheritance, the dynamic version of classical inheritance
        - not quite inheritance, since inheritance implies copying, but rather delegation, where an object can delegate property and function access to another object
        - ie. delegating behavior to another object upwards the prototype chain
    - the `prototype` object of each function also has a `.constructor` property that points to the function:
        - although this `.constructor` property will appear on newly created objects due to the chain lookup, this property does not necessarily indicate which function constructed the object
        - ie. constructor does not mean constructed by
Common misunderstandings of using "classes" in JS:
function Foo(name) {
this.name = name;
}
Foo.prototype.myName = function() {
return this.name;
};
Foo.prototype.constructor === Foo; // true, builtin property of prototype
// the myName property on the Foo.prototype is *not* being copied over
// but *linking* occurs, and Object.getPrototypeOf(a) === Foo.prototype
var a = new Foo("a");
var b = new Foo("b");
// this lookup follows the prototype chain to Foo.prototype
a.myName(); // gives a
b.myName(); // gives b
a.constructor === Foo; // true, but *only* due to following the prototype chain
b.constructor === Foo; // true, again only via the prototype chain

Illustrating .constructor nuances:
function Foo() {...}
Foo.prototype = {...} // creating a new prototype object, missing .constructor
var a = new Foo();
a.constructor === Foo; // false
a.constructor === Object; // true, delegated all the way to Object.prototype

- prototypal inheritance uses `Object.create` to create a new prototype object that is linked to another prototype object:
    - if simple assignment was used, eg. `Bar.prototype = Foo.prototype`:
        - this copies the reference to the prototype object, so that modifying it changes the now shared prototype object
        - defeats the goal of inheritance
    - if `Bar.prototype = new Foo()` was used instead:
        - this does in fact create a new object that is linked to `Foo.prototype`
        - however, this may have side effects from using the constructor call
            - the `Foo` constructor call should instead be made later, when `Bar` descendants are created
    - in ES6, `Object.setPrototypeOf(Bar.prototype, Foo.prototype)` can be used to modify the existing prototype object in place
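A minimal sketch of repairing the missing `.constructor` after replacing a prototype object (the nonenumerable descriptor mirrors the builtin behavior):

```javascript
function Foo() {}
function Bar() {}
Bar.prototype = Object.create(Foo.prototype); // .constructor now delegates to Foo

// manually restore a Bar.prototype.constructor that points at Bar
Object.defineProperty(Bar.prototype, "constructor", {
  value: Bar,
  enumerable: false, // match the builtin nonenumerable behavior
  writable: true,
  configurable: true
});

var b = new Bar();
b.constructor === Bar; // true again, via the chain lookup
```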
Using prototypes to create delegation links that emulate inheritance:
function Foo(name) {
this.name = name;
}
Foo.prototype.myName = function() {
return this.name;
};
function Bar(name, label) {
Foo.call(this, name);
this.label = label;
}
// new Bar.prototype linked to Foo.prototype,
// Bar.prototype.construtor is gone!
Bar.prototype = Object.create(Foo.prototype);
Bar.prototype.myLabel = function() {
return this.label;
};
var a = new Bar("a", "obj a");
a.myName(); // gives a
a.myLabel(); // gives obj a

- object relationships can be tested using:
    - the `instanceof` operator, which takes a plain object and a function:
        - eg. `a instanceof B` answers whether, in the entire prototype chain of `a`, the object pointed to by `B.prototype` ever appears
    - the `obj.isPrototypeOf` function:
        - eg. `a.isPrototypeOf(b)` answers whether `a` appears anywhere in the prototype chain of `b`
    - `Object.getPrototypeOf`, which directly retrieves the `[[Prototype]]` of an object
    - `obj.__proto__`, an alternate way to access the internal `[[Prototype]]`
        - standardized in ES6, actually a getter and setter
- `Object.create` creates a new object linked to the specified object:
    - gives the power of delegation of the `[[Prototype]]` mechanism
    - without the unnecessary complications of `.prototype` and `.constructor` references, etc.
    - `Object.create(null)` creates an object that has an empty prototype link:
        - thus the object cannot delegate anywhere
        - no prototype chain, so `instanceof` always returns false
        - these objects are dictionaries that can be used purely for storing data
    - `Object.create` supports additional functionality in its second argument:
        - the second argument specifies property names to add to the newly created object via their property descriptors
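Illustrating empty prototype links and the descriptor-based second argument (a minimal sketch; names are illustrative):

```javascript
var dict = Object.create(null); // no prototype link at all
dict.x = 1;
"toString" in dict;     // false, nothing delegated from Object.prototype
dict instanceof Object; // false, no chain for instanceof to walk

var obj = Object.create({ a: 1 }, {
  b: { value: 2, enumerable: true } // property added via its descriptor
});
obj.a; // 1, found via delegation
obj.b; // 2, own property
```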
Polyfilling basic Object.create functionality:
if (!Object.create) {
Object.create = function(o) {
function F(){}
F.prototype = o;
return new F();
};
}

- because JavaScript does not use traditional copy-based inheritance, it may be more appropriate to use a delegation-oriented design rather than an object (prototypal)-oriented design:
    - in traditional OOP, child classes inherit from a parent class, and then add or override functionality, creating specialized behavior
- in delegated design, rather than composing related objects together through inheritance, related objects are kept as separated objects, and instead one object will delegate to the other when needed
- ie. objects are peers of each other and delegate among themselves, rather than having parent and child relationships
- note that JS disallows creating a cycle where two or more objects are mutually delegated to each other
Class-based vs. delegation-based design:
// class-based approach (in another language):
class Task {
id;
Task(ID) { id = ID; }
output() { print(id); }
}
class LabeledTask inherits Task {
label;
LabeledTask(ID, Label) { super(ID); label = Label; }
output() { super.output(); print(label); }
}
// vs. delegation in JS:
Task = {
setID: function(ID) { this.id = ID; },
output: function() { console.log(this.id); }
};
LabeledTask = Object.create(Task);
LabeledTask.prepareTask = function(ID, Label) {
this.setID(ID);
this.label = Label;
};
LabeledTask.outputTaskDetails = function() {
this.output();
console.log(this.label);
};
// note that both data members are data properties on the delegator (LabeledTask),
// not on the delegate (Task), due to the this-binding

Another prototypal vs. delegation example:
// prototypal approach in JS:
function Foo(who) {
this.me = who;
}
Foo.prototype.identify = function() {
return "I am " + this.me;
};
function Bar(who) {
Foo.call(this, who);
}
Bar.prototype = Object.create(Foo.prototype);
Bar.prototype.speak = function() {
return "Hello " + this.identify();
};
var b1 = new Bar("b1");
var b2 = new Bar("b2");
b1.speak(); // gives Hello I am b1
b2.identify(); // gives I am b2
// vs. delegation in JS:
Foo = {
init: function(who) {
this.me = who;
},
identify: function() {
return "I am " + this.me;
}
};
Bar = Object.create(Foo);
Bar.speak = function() {
return "Hello " + this.identify();
};
var b1 = Object.create(Bar);
b1.init("b1");
var b2 = Object.create(Bar);
b2.init("b2");
b1.speak(); // gives Hello I am b1
b2.identify(); // gives I am b2

- type introspection has to do with inspecting an instance to find out what kind of object it is:
    - in JS, introspection differs depending on whether a prototypal or delegation-based approach is taken
Prototypal vs. delegation introspection:
// with prototypal design:
function Foo() {...}
Foo.prototype...
function Bar() {...}
Bar.prototype = Object.create(Foo.prototype);
var b1 = new Bar();
// all true tests:
Bar.prototype instanceof Foo;
Foo.prototype.isPrototypeOf(Bar.prototype);
b1 instanceof Foo;
b1 instanceof Bar;
Foo.prototype.isPrototypeOf(b1);
Bar.prototype.isPrototypeOf(b1);
// with delegation design:
var Foo = {...};
var Bar = Object.create(Foo);
Bar...
var b = Object.create(Bar);
//all true tests:
Foo.isPrototypeOf(Bar);
Foo.isPrototypeOf(b);
Bar.isPrototypeOf(b);

- another common introspection method is to use duck typing:
- simply check that an object has a capability, instead of testing for its type
    - can be a more brittle and risky test, eg. ES6 promises assume unconditionally that an object with a `then` method is a promise
Duck typing:
if (a.duckWalk && a.duckTalk) {
a.duckWalk();
a.duckTalk();
}

- ES6 introduced new syntax to make class-based inheritance in JavaScript cleaner:
- note that this class mechanism is still using the existing JS delegation mechanism
- not traditional copy-based inheritance
    - pros:
        - fewer references to `.prototype`
        - new, more natural `extends` keyword
            - can extend natives, such as arrays or error objects
        - provides `super` for relative polymorphism
        - `constructor` method
    - cons:
        - no way to declare class member properties (only methods)
            - requires `.prototype` syntax
        - accidental shadowing can occur
        - some issues with dynamic `super` bindings
ES6 class example:
class Widget {
constructor(width, height) {
this.width = width || 50;
this.height = height || 50;
this.$elem = null;
}
render($where) {
if (this.$elem) {
this.$elem.css({
width: this.width + "px",
height: this.height + "px"
}).appendTo($where);
}
}
}
class Button extends Widget {
constructor(width, height, label) {
super(width, height);
this.label = label || "Default";
this.$elem = $("<button>").text(this.label);
}
render($where) {
super.render($where);
this.$elem.click(this.onClick.bind(this));
}
onClick(evt) {
...
}
}

\newpage{}
-
asynchronous programming is an important part of JavaScript:
- programs are written in chunks, some of which will execute now and some of which will execute later:
- code that should be executed later introduces asynchrony into the program
        - eg. making an AJAX request, or even I/O like `console.log`, may be deferred and completed asynchronously
- there are different ways to specify what JS code should run later, eg. on completion of another event:
- callbacks, promises, generators, etc.
- programs are written in chunks, some of which will execute now and some of which will execute later:
-
a key JavaScript feature is the event loop:
- JS itself does not actually have a direct notion of asynchrony
- the event loop handles executing different chunks of the program over time
- different handlers can be registered for certain events, so that these handlers run when the events occur:
- unlike normal synchronous code, these events can occur asynchronously ie. at any time
        - eg. when a `setTimeout` timer fires, it places the callback into the event loop
            - thus `setTimeout` timers may not fire with perfect accuracy, depending on the current queue of events on the loop
- different language structures used for asynchronous functions include callbacks, promises, async/await, and generators
- eg. a common functionality that is handled using asynchronous functions are AJAX requests:
        - Asynchronous JavaScript and XML (AJAX) requests communicate with a server using an HTTP request, without having to reload the current page
- ie. retrieving XML (or more recently, JSON) data asynchronously using JS
- note that JavaScript (and the event loop) runs on a single thread:
- so functions are executed atomically, ie. run-to-completion behavior
- however there is still nondeterminism in the ordering of asynchronous events:
- eg. two AJAX requests may each complete and call their callbacks at arbitrary times with respect to the other
- conditional completion checks or latches can be used to make such behavior deterministic
- this single threaded event loop still offers concurrency:
- although only one event can be handled at a time, sequentially, on the event loop, multiple tasks or "processes" may simultaneously be pushing events onto the event loop
- these events may become interleaved with one another
- this allows for concurrency ie. task-level parallelism, as opposed to operation-level parallelism through multithreading
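Illustrating run-to-completion scheduling (a minimal sketch; the labels are illustrative):

```javascript
var eventOrder = [];
eventOrder.push("sync 1");
setTimeout(function() {
  eventOrder.push("timer"); // queued; runs only after the current chunk completes
}, 0);
eventOrder.push("sync 2");
// at this point eventOrder is ["sync 1", "sync 2"]; "timer" is appended later
```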
-
ES6 added a new concept layered on top of the event loop queue called the Job queue:
- this queue is an additional event queue, but has higher priority than the event loop queue
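A minimal sketch of the Job queue's higher priority: a promise job runs before an already queued timer callback.

```javascript
var queueOrder = [];
setTimeout(function() { queueOrder.push("timeout"); }, 0);      // event loop queue
Promise.resolve().then(function() { queueOrder.push("job"); }); // Job queue
// once both have run, queueOrder is ["job", "timeout"]
```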
- using callbacks is the most basic method of writing asynchronous event handlers:
- pros:
- simple, making use of continuation passing style (CPS)
        - used in other JS language structures, eg. synchronous functional callbacks such as `forEach`, `map`, `filter`, etc.
- cons:
- can quickly lead to "callback hell" or the "pyramid of doom", where callbacks that should be executed in succession become deeply nested and cluttered
- in addition to the cluttered nesting, callback hell has the issue of hardcoded brittle behavior due to the difficulty of tracing the possible paths of execution
        - another issue is inversion of control, since we are delegating control (usually to a third-party library) while only specifying a callback
- leads to many special cases to handle, eg. callback may be called too early, too late, or multiple times, or callback may swallow errors, etc.
- ie. callbacks express asynchronous flow in a nonlinear, nonsequential way
- can quickly lead to "callback hell" or the "pyramid of doom", where callbacks that should be executed in succession become deeply nested and cluttered
- some possible extensions on callbacks that help with some issues:
- split callbacks for success and error
- error-first callback style where the callback accepts an error argument as the first argument
- always make sure callbacks are predictably asynchronous
Using callbacks in vanilla JS and jQuery:
// vanilla JS request:
var http = new XMLHttpRequest();
http.onreadystatechange = function() { // callback
// 4 different ready states while request is loading
if (http.readyState == 4 && http.status == 200) {
console.log(JSON.parse(http.response));
}
};
http.open("GET", "data/tweets", true);
http.send();
// jQuery alt:
$.get("data/tweets", function(tweets) { // callback
console.log(tweets);
});

Illustrating callback hell:
$.get("data/topTweets", function(topData) { // callback
$.get("data/tweets/" + topData[0].id, function(tweet) {
$.get("data/users/" + tweet.userId, function(userData) {
console.log(userData);
})
})
});

-
promises are an alternative to callbacks for asynchronous programming, and an easily repeatable mechanism for encapsulating future values:
- promises are objects that represent actions that haven't yet finished
    - promises are then chained using the `.then` property in order to specify how data should be handled after it has finished retrieving
        - the `.catch` property is used to handle errors at any point in the promise chain, even in callbacks
        - alternatively, `.then` also takes a second argument to handle rejection from the chained promise
            - ie. `.then` takes `fulfilled` and `rejected` callbacks as arguments
        - control is uninverted from the callback pattern, since the async function is unaware of other code subscribing to its events
            - instead, the control goes back to the calling code when the event handlers are run
- once a promise has been resolved, it becomes immutable:
- this makes it safe to pass the value around, ie. if multiple parties are observing the resolution of a promise, one party cannot affect another party's ability to observe the resolution
- important aspect of promise design
- pros:
- sequential callbacks are no longer deeply nested
- promises can be easily chained together asynchronously
- elegant error catching
        - easy to create and use multiple promises
- uninverts the inversion of control since we are not handing off the continuation of the program to a third party
- cons:
- syntax is still a little unnatural, is there a way to make async code look more similar to synchronous code?
- still some issues with error handling, ie. no external way to guarantee to observe all errors
- eg. simply catching the end of a promise chain may not catch all errors since any step in the chain may perform error handling already
        - promises only have a single fulfillment value
            - usually solved with a value wrapper, or splitting values into different promises
- promises can only be resolved once, eg. what about events or streams of data?
- promises are uncancelable
- note that the ES6 promise implementation uses duck typing to identify promises:
        - a thenable is any object with a `.then` method
        - thenables will be treated with special promise rules, even if they were not intended to be treated as a promise
-
promise patterns:
    - `Promise.all` is used to initialize multiple asynchronous requests at once:
        - order doesn't matter, just wait on all the async tasks to finish
        - takes an array of promises, and returns a promise that fulfills to an array of each fulfillment message of the passed promises, in order
        - the main returned promise is fulfilled only if all the constituent promises are also fulfilled
    - `Promise.race` acts as a latch pattern for promises:
        - takes an array of promises, but only resolves with the single value of the first resolved promise
            - an empty array will never resolve
        - also rejects if any promise resolution is a rejection
    - also `Promise.none`, `Promise.any`, `Promise.first`, `Promise.last`
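A minimal sketch of the latch behavior of `Promise.race`, used here as a timeout; the `timeout` helper and the delays are illustrative:

```javascript
function timeout(ms) {
  return new Promise(function(resolve, reject) {
    setTimeout(function() { reject(new Error("timed out")); }, ms);
  });
}
var task = new Promise(function(resolve) {
  setTimeout(resolve, 50, "data"); // a stand-in for a real async request
});
// whichever promise settles first wins the race
Promise.race([task, timeout(3000)])
  .then(function(data) { console.log(data); })         // "data", task settled first
  .catch(function(err) { console.log(err.message); }); // "timed out" if task was too slow
```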
Implementing a promise over a vanilla JS callback:
function get(url) {
return new Promise(function(resolve, reject){
// resolve applies to the .then function,
// while reject should fall to the .catch function (passing the error code)
var xhttp = new XMLHttpRequest();
xhttp.open("GET", url, true);
xhttp.onload = function() {
if (xhttp.status == 200) {
resolve(JSON.parse(xhttp.response));
} else {
reject(xhttp.statusText);
}
};
xhttp.onerror = function() {
reject(xhttp.statusText);
};
xhttp.send();
});
}

Using promises:
get("data/topTweets")
.then(function(topData) {
return get("data/tweets/" + topData[0].id);
}).then(function(tweet) { // chaining promises
return get("data/users/" + tweet.userId);
}).then(function(userData) {
console.log(userData);
}).catch(function(error) {
console.log(error);
});

Using Promise.all to wait for multiple promises concurrently:
const p1 = Promise.resolve("hello");
const p2 = 10;
const p3 = new Promise((resolve, reject) => {
setTimeout(resolve, 1000, true);
});
const p4 = new Promise((resolve, reject) => {
setTimeout(resolve, 3000, 'goodbye');
});
Promise.all([p1, p2, p3, p4]).then(values =>
  console.log(values)
); // runs all the promises, values is ["hello", 10, true, "goodbye"] after 3 sec

-
addressing the previous issues of trust from callbacks:
- callback called too early:
        - ie. a task sometimes finishes synchronously and sometimes asynchronously, leading to race conditions
- promises by definition are not susceptible to this, since even immediately resolved promises cannot be observed synchronously
            - ie. the callback provided to `.then` is always called asynchronously, even if the promise is already resolved
- callback called too late:
- when a promise is resolved, all registered callbacks on it will be called in order
- nothing happening inside those callbacks can delay the calling of the other callbacks
- callback never called:
- nothing can prevent a promise from notifying its resolution
        - even if a promise never gets resolved, the provided `race` mechanism can prevent the program from waiting indefinitely
- callback called multiple times:
- promises can only be resolved once, and become immutable
- failing to pass along parameters:
        - promises still resolve even when called with no explicit value
            - the value is resolved as `undefined`
- errors and exceptions becoming swallowed:
- if an exception occurs while a promise is being resolved, the exception is caught and forces the promise to become rejected
        - ie. promises turn even JS exceptions into asynchronous behavior, whereas they previously caused a synchronous reaction
- however, note that if there is an exception in the registered callback, it is not caught by the rejection handler, since promises are immutable once resolved
-
it is important to note that promises do not replace callbacks, instead we pass a callback onto a promise:
- but how can we guarantee trust and that the promise itself is really a genuine promise?
    - ES6 promises also provide `Promise.resolve`:
        - passing an immediate, non-promise, non-thenable value to `Promise.resolve` returns a promise that fulfills to that value
        - passing a genuine promise to `Promise.resolve` simply returns the same promise
        - passing a non-promise, thenable value to `Promise.resolve` will unwrap the value until a concrete, final, non-promise value is extracted
        - thus the return value from `Promise.resolve` is always a real promise, a way to generate trust
    - `Promise.reject` creates an already rejected promise, without unwrapping
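Illustrating `Promise.resolve` normalization (a minimal sketch; the thenable's shape is illustrative):

```javascript
var p1 = Promise.resolve(42); // immediate value, wrapped in a fulfilled promise
var p2 = Promise.resolve(p1); // genuine promise, returned unchanged
p1 === p2; // true

var thenable = { // a non-promise thenable
  then: function(onFulfilled) { onFulfilled(42); }
};
Promise.resolve(thenable).then(function(v) {
  console.log(v); // 42, unwrapped into a real promise value
});
```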
-
generators are functions that can be paused and resumed:
- a newer ES6 feature
- generators are originally from Python
- typically used for lazy evaluation
- breaks from the ordinary JS run-to-completion behavior
    - the `yield` keyword can be used for bidirectional message passing
        - can also be used for obtaining synchronous-like return values from async function calls
        - as well as synchronously catching errors from those async calls
    - can also `throw` errors into as well as out of generators
    - using generators is another method for expressing asynchronous flow control
-
note that to run a generator, an iterator is first created:
- thus multiple instances of the same generator can run at the same time
    - iterator aside:
        - an iterator is an interface for stepping through a series of values from a producer
        - call `next` each time you want the next value
            - `next` returns `{ done, value }`
        - the `for..of` loop can be used to consume a standard iterator
        - an iterable is an object that contains an iterator
            - `iterable[Symbol.iterator]()` creates the iterator
            - arrays have default iterators that go over their values
        - note that a generator is not technically an iterable, executing a generator returns an iterator
            - its iterator is also an iterable
            - thus we can use a `for..of` loop with a generator as `for (var v of generator()) ...`
        - can exit from a generator early using `it.return(val)`
-
transpilation of ES6 generators to pre-ES6 code can be done with a closure-based solution that keeps track of state of the "generator":
    - each state represents a different generator state between `yield` calls
Generator example:
function* gen(index){
while (index < 2)
yield index++;
return 42;
}
var it = gen(0); // construct an iterator
var x = it.next(); // x is { value: 0, done: false }
var y = it.next(); // y has value 1 and done false
var z = it.next(); // z has value 42 and done true

Using yield for message passing with generators:
function* foo(x) {
var y = x * (yield "Hello"); // two-way message passing!
return y;
}
var it = foo(6);
var res = it.next();
res.value; // gives Hello
res = it.next(7); // pass 7 in to yield
res.value; // gives 42

Hardwiring iterator control for generators with promises:
function* main() {
try {
var text = yield get(url);
console.log(text);
} catch (err) {
console.log(err);
}
}
var it = main();
var promise = it.next().value;
promise.then(
function(text) {
it.next(text);
},
function(err) {
it.throw(err);
}
);

Generators with promises using a generator runner:
function genWrap(generator){
var gen = generator();
function handle(yielded){
if(!yielded.done){
yielded.value.then(
function(data){
return handle(gen.next(data));
},
function(err) {
gen.throw(err);
}
);
}
}
return handle(gen.next());
}
genWrap(function*(){
var top = yield get("data/topTweets");
var tweet = yield get("data/tweets/" + top[0].id);
var user = yield get("data/users/" + tweet.userId);
console.log(user);
});

Concurrency with generators:
genWrap(function*(){
var p1 = get(url1);
var p2 = get(url2);
// p1 and p2 are made in parallel
var r1 = yield p1;
var r2 = yield p2;
// more parallel requests
var rest = yield Promise.all([...]);
// p3 gated until after all previous promises complete
var r3 = yield get(...);
console.log(r3);
});

- the keyword `yield*` performs yield-delegation:
    - this allows generators to call another generator, and integrate into each other
        - allows for cleaner, more modularized generator code
        - delegation also allows for more complex message passing and even recursive behavior with generators
    - ie. transfers or delegates the iterator control over to another iterable (not necessarily just another generator)
Yield-delegation example:
function* foo() {
yield 2;
yield 3;
}
function* bar() {
yield 1;
yield* foo(); // yield-delegation here
yield 4;
}
var it = bar();
it.next().value; // 1
it.next().value; // 2
it.next().value; // 3
it.next().value; // 4

- async/await is modern syntactic sugar for promises:
    - ie. a syntactic extension on promises, still using promises under the surface
    - essentially using generators, with even less clutter
    - adopted in other languages, such as Python's `asyncio` library
    - the `await` keyword awaits the resolution of a promise
        - can only be used within an `async` function
    - pros:
        - cleaner code than promises, async code that looks synchronous
    - cons:
        - a `try-catch` block is the only way to catch errors
Using async/await:
async function getTopUser() {
try {
const topData = await get("data/topTweets"); // alternative to .then syntax
const topTweet = await get("data/tweets/" + topData[0].id);
const userData = await get("data/users/" + topTweet.userId);
console.log(userData);
} catch (error) {
console.log(error);
}
}

\newpage{}
- the document object model (DOM) is the data representation of the objects that comprise the structure and content of a document on the web:
- ie. a programming interface for HTML documents that represents the page as nodes and objects
    - whenever a script is created, the API for the `document` or `window` objects can be used to manipulate the document
-
the `Document` type corresponds to the root document object itself:
    - properties: `body`, `fonts`, `images`, `cookie`, `location`
    - methods: `createElement`, `getElements...`, `querySelector`, `addEventListener`
-
every object within a document is a `Node` of some kind:
    - eg. an element, text, or attribute node
    - properties: `nodeType`, `nodeValue`, `textContent`
    - linking properties:
        - `firstChild`, `nextSibling`, `childNodes`
        - `parentNode`, `parentElement`
        - note that `childNodes` will contain all node types, including text or attribute nodes
    - methods: `appendChild`, `removeChild`, `replaceChild`, `hasChildNodes`
-
the `ParentNode` type contains methods and properties common to all node objects that can have children:
    - eg. for element or document objects, returned by `node.parentNode`
    - properties: `childElementCount`, `children`, `firstElementChild`
        - note that the `children` property returns an `HTMLCollection` with only element children, rather than the `NodeList` returned by `childNodes`
            - in addition, the `HTMLCollection` is a live object that is automatically updated when the underlying object is changed
    - methods: `append`, `querySelector`, `replaceChildren`
Recursing through child nodes:
function eachNode(root, cb) {
if (!cb) { // just return node list
const nodes = [];
eachNode(root, function(node) {
nodes.push(node);
});
return nodes;
}
if (!cb(root)) {
return false;
}
if (root.hasChildNodes()) {
const nodes = root.childNodes;
for (let i = 0; i < nodes.length; i++) {
if (!eachNode(nodes[i], cb)) {
return false;
}
}
}
}

-
the `Element` type is based on nodes, and refers to a node of type element returned by the DOM API:
    - inherits from its node interface as well as implementing a more advanced element interface
    - properties: `attributes`, `classList`, `className`, `innerHTML`, `style`, and many more
    - methods: `addEventListener`, `getElements...`, `scroll`, and many more
    - in an HTML document, the `HTMLElement` type further extends this type:
        - offers methods such as `blur`, `click`, `focus`
-
a
NodeListis an array of elements:- eg. array returned by
querySelectorAll- note that
getElements...returns anHTMLCollectioninstead of a node list
- note that
- items can be accessed via
list[idx]orlist.item(idx)
- eg. array returned by
-
a
NamedNodeMapis like an array of nodes, but items are accesed by name or index -
Attributenodes are object references that expose a special interface for attributes:- nodes just like elements, but more rarely used
\newpage{}
- block scoping:
    - introduced `let` and `const` for block scoping
        - note the unique redeclaration of `let` variables in each loop iteration, useful for closures
        - note that `const` freezes the assignment of a value, not the value itself
    - as well as the temporal dead zone for accessing them early
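Both points can be checked directly in a small sketch (the `fns` and `nums` names are illustrative):

```js
// each loop iteration gets a fresh `let i` binding, so the closures
// capture 0, 1, 2 instead of sharing one final value
const fns = [];
for (let i = 0; i < 3; i++) {
  fns.push(function() { return i; });
}
console.log(fns.map(function(f) { return f(); })); // prints [ 0, 1, 2 ]

// `const` only freezes the binding, not the value it points to
const nums = [1, 2];
nums.push(3);      // fine: the array itself is still mutable
console.log(nums); // prints [ 1, 2, 3 ]
// nums = [];      // TypeError: Assignment to constant variable.
```

With `var` instead of `let`, every closure would have returned the shared final value `3`.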
- the spread or rest operator `...`:
    - when used in front of any iterable, it spreads it out into individual values
    - can also be used to gather a set of values into an array, usually in function arguments
    - used in different contexts such as function arguments, inside another array declaration, etc.
    - eg. `foo(...[1,2,3])` is a replacement for `foo.apply(null, [1,2,3])`
    - eg. `function foo(...args)` gathers all the arguments into the array `args`
default parameter values for functions:
- eg.
function foo(x = 11, y = 31) - default values can be more than simple values:
- can be any valid expression, even a function call or IIFE
- eg.
-
destructuring or structured assignment:
- destructuring or structured assignment:
    - new dedicated syntax for array and object destructuring
    - eg. `var [a,b,c] = foo()`, `var {x:a, y:b, z:c} = bar()`, or also `var {x,y,z} = bar()`
        - note that when destructuring, the object literal follows the `<source>: <target>` pattern, the inverse of the `<target>: <source>` pattern used in declarations
    - extensions on destructuring:
        - destructuring returns the right-hand value, so destructuring assignments can be chained together
        - values can be discarded, eg. `var [,b] = [1,2]`:
            - destructuring missing values will become undefined
        - the spread operator can be used to gather together elements, eg. `var [a, ...rest] = [1,2,3]`
        - can also use `=` to set default value assignment
    - destructuring can also be used with parameter assignment in functions
Using expressions in destructuring:

```js
var foo = [1,2,3];
var bar = {x:4, y:5, z:6};
var key = "x", o = {}, a = [];

({[key]: o[key]} = bar); // parens needed to prevent parsing {} as a block
console.log(o.x); // prints 4
({x: a[0], y: a[1], z: a[2]} = bar);
console.log(a); // prints [4,5,6]
[o.a, o.b, o.c] = foo;
console.log(o.a, o.b, o.c); // prints 1 2 3

var x = 10, y = 20;
[y, x] = [x, y];
console.log(x, y); // prints 20 10
```

Chaining destructuring assignments:

```js
var a, b, c, x, y, z;
[a, b] = [c] = foo;
({x} = {y, z} = bar);
console.log(a, b, c); // prints 1 2 1
console.log(x, y, z); // prints 4 5 6
```

Default value assignment:

```js
var [a=3, b=4, c=5, d=6] = [1,2,3];
console.log(a, b, c, d); // prints 1 2 3 6
var {x, y, z, w: WW = 20} = {x:4, y:5, z:6};
console.log(x, y, z, WW); // prints 4 5 6 20
```

Destructuring parameter gotcha:

```js
function foo({x = 10} = {}, {y} = {y:10}) {
  console.log(x, y);
}
foo(); // prints 10 10
foo({}, undefined); // prints 10 10
foo({}, {}); // prints 10 undefined
foo(undefined, {}); // prints 10 undefined
foo({x:2}, {y:3}); // prints 2 3
```
- object literal extensions:
    - concise properties:
        - to define a property in an object with the same name as an identifier, can shorten from `x: x` to just `x`
        - similarly for methods (and generators) in objects, can shorten from `x: function() {...}` to just `x() {...}`
            - note however that this makes the function expression anonymous, which may have issues with recursion
            - in such cases, it is safer to write out the full expression `x: function x() {...}`
    - computed property names:
        - an object literal definition can use an expression to compute the assigned property name
- objects and prototypes:
    - new `Object.setPrototypeOf`, and new `super`
        - note that `super` can only be used in concise methods
- template literals ie. interpolated strings:
    - similar to f-strings in Python
    - interpolated strings are still type string, except they act like IIFEs in that they are automatically evaluated inline
    - note that any valid expression can appear in an interpolated expression
    - eg. `` `hello ${name}!` ``
- tagged template literals:
    - a special function call without parentheses
    - the function receives:
        - a first argument of all the plain strings (between interpolated expressions)
        - the remaining arguments that are the results of the evaluated interpolated expressions
Example tagged template literals:

```js
function foo(strings, ...values) {
  console.log(strings, values);
}
var desc = "awesome";
foo`Everything is ${desc}!`; // prints [ 'Everything is ', '!' ] [ 'awesome' ]

// example function to collapse a template literal
function tag(strings, ...values) {
  return strings.reduce(function(s, v, idx) {
    return s + (idx > 0 ? values[idx-1] : "") + v;
  }, "");
}
tag`Everything is ${desc}!`; // gives "Everything is awesome!"
```
- arrow functions:
    - new more concise syntax for function expressions using the fat arrow `=>`
    - lexically binds `this`:
        - replaces the `var self = this` and `.bind(this)` fixes to bind `this`
    - note that all arrow functions are anonymous function expressions
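The lexical `this` binding can be seen in a constructor-style sketch (`Counter` is an illustrative name):

```js
function Counter() {
  this.total = 0;
  // the arrow function has no `this` of its own; it closes over the
  // Counter instance, where a plain function callback would not
  [1, 2, 3].forEach(n => { this.total += n; });
}
var c = new Counter();
console.log(c.total); // prints 6
```

With `function(n) { this.total += n; }` as the callback, `this` would not refer to the instance, which is exactly the case `var self = this` used to patch over.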
- `for..of` loops that loop over the values produced by an iterator:
    - standard builtin types that provide iterables are arrays, strings, generators, and collections
- also, extended Unicode support, more tricks for regular expressions, and a new primitive `symbol` type
- iterators are structured patterns for producing information from a source, one-at-a-time:
    - the `Iterator` interface requires the `next` method that returns an `IteratorResult`:
        - as well as optional `return` and `throw` methods to end production of values by the iterator
    - `IteratorResult` has two required properties, `value` (`undefined` if missing) and boolean `done`:
        - typically the last value still has `done: false`, and `done: true` signals completion after all relevant values are returned
        - calling `next` on an exhausted iterator is not an error, it will simply return the same completed `IteratorResult`
    - an `Iterable` has the `@@iterator` method that produces an iterator
    - consuming iterables:
        - the `for..of` loop
        - the spread operator
        - array destructuring
    - ES6 also introduced generators, as seen previously
Custom fibonacci iterator:

```js
var Fib = {
  [Symbol.iterator]() {
    var n1 = 1, n2 = 1;
    return {
      // this makes the iterator an iterable as well
      [Symbol.iterator]() { return this; },
      next() {
        var current = n2;
        n2 = n1;
        n1 = n1 + current;
        return { value: current, done: false };
      },
      return(v) {
        console.log("fib sequence stopped");
        return { value: v, done: true };
      }
    };
  }
};

for (var v of Fib) {
  console.log(v);
  if (v > 20) break;
} // prints 1 1 2 3 5 8 13 21
// fib sequence stopped
```

- ES6 modules:
    - uses `import` and `export`:
        - `export` exports the name bindings of variables:
            - that is, if a value is changed inside a module after its export, the imported binding will resolve to the current value
        - default and named exports
        - anything not exported stays private within the scope of the module
        - `import` imports from another module:
            - all imported bindings are immutable and read-only
            - note that declarations as a result of importing are also hoisted
        - ES6 can solve circular `import` dependencies
    - file-based, ie. one module per file
    - statically defined API for each module
    - singletons, eg. importing a module gets a reference to one centralized instance
    - aims to replace traditional module patterns eg. AMD, UMD, and CommonJS
Traditional module patterns:

```js
// asynchronous module definition (AMD), eg. RequireJS:
// define(dependencies, callback), RequireJS handles loading dependencies
define(['jquery', 'underscore'], function($, _) {
  function a() {...}; // private method, not exposed
  function b() {...};
  function c() {...};
  // exposed API
  return { b: b, c: c };
});

// CommonJS, similar to NodeJS modules:
var $ = require('jquery');
var _ = require('underscore');
function a() {...};
function b() {...};
function c() {...};
module.exports = { b: b, c: c };

// universal module definition (UMD), both AMD and CommonJS compatible:
(function(root, factory) {
  if (typeof define === 'function' && define.amd) {
    define(['jquery', 'underscore'], factory);
  } else if (typeof exports === 'object') {
    module.exports = factory(require('jquery'), require('underscore'));
  } else {
    // browser globals
    root.returnExports = factory(root.jQuery, root._);
  }
}(this, function($, _) {
  function a() {...};
  function b() {...};
  function c() {...};
  return { b: b, c: c };
}));
```

ES6 exporting:

```js
function foo() {...}
var bar = 42;
export var baz = [1, 2, 3];
export { foo as qaz, bar };
export { foo as FOO, bar as BAR } from "qux"; // re-export
```

ES6 default export nuances:

```js
function foo() {...}
export default foo; // exports binding to a function *expression*
// so if foo is rebound, import reveals the *original* function
// vs.
export { foo as default }; // exports binding to foo *identifier*
// import is updated if foo is rebound
```

ES6 importing:

```js
import foo, { bar, baz as BAZ } from "foo";
import * as qux from "qux"; // import all, default is qux.default
```
- JavaScript typed arrays provide structured access to binary data using array-like semantics:
    - the type refers to the view layered on top of an `ArrayBuffer` ie. a buffer of bits
    - different views eg. `Uint8Array`, `Int16Array`, `Float32Array`
    - a single buffer can have multiple views, and a view can also be set at a certain offset or length
    - eg. `var buf = new ArrayBuffer(32)` creates a buffer
    - eg. `var arr = new Uint16Array(buf)` creates a view over that buffer
- ES6 maps can use a non-string value as a key, unlike normal objects:
    - use `get`, `set`, and `delete` for mutating
    - supports the `size` property and the `has` method
        - and the `values`, `keys`, `entries` iterator methods
    - `WeakMap` is a map variation that only takes objects as keys:
        - when the object that is a key is garbage collected, the entry is also removed
- ES6 sets are collections of unique values:
    - duplicates are ignored
    - similar API to maps:
        - with `add` instead of `set`
        - no `get`, only `has`
    - a `WeakSet` holds its values (only objects) weakly
- arrays:
    - `Array.of` is an alternative constructor that avoids the default `Array` constructor gotcha of creating an empty-slots array when passed a single number
        - eg. `Array.of(3)` creates an array with element 3, while `Array(3)` creates an array with `length` 3, but empty slots
    - `Array.from` replaces `Array.prototype.slice.call` for duplicating arrays or transforming array-likes into arrays
        - `Array.from` also avoids empty slots
        - also takes a callback to transform each value
    - `copyWithin` copies a portion of an array to another location in the same array
    - `fill` fills an existing array entirely or partially with a specified value
    - `find` and `findIndex` give more flexibility and control over the matching logic offered by `indexOf`
- objects:
    - `Object.is` is similar to `===`, except it correctly distinguishes `NaN`, `-0`, `+0`
    - `Object.getOwnPropertySymbols`, `Object.setPrototypeOf`, `Object.assign`
- numbers:
    - many new mathematical utilities, eg. `cosh`, `hypot`, `trunc`
    - `Number.EPSILON`, `Number.MAX_SAFE_INTEGER`, `Number.MIN_SAFE_INTEGER`
    - `Number.isNaN`, `Number.isFinite`, and `Number.isInteger`
- strings:
    - unicode-aware string operators, eg. `String.fromCodePoint`, `codePointAt`, `normalize`
    - `String.raw` tag function to get raw strings without escape sequence processing
    - `repeat` to repeat strings
    - `startsWith`, `endsWith`, `includes`