"... the network must be encoding something about the semantics of the sentence rather than simply memorizing phrase-to-phrase translations. We interpret this as a sign of existence of an interlingua in the network..."
Within 6 to 7 months ... moved more than 200,000 (!?) servers and 150 applications, and had more than $2 million in yearly compute spend moved over to AWS. ... Azure had a crucial government security certification known as Criminal Justice Information Services (CJIS) and Amazon didn't... In a few months AWS was certified... so Motorola opted for AWS.
... There is one big downside to choosing Amazon: it's harder to keep track of costs.... The AWS pricing model is very complicated, which is a drawback... Motorola hired a cloud consultant, 2nd Watch, to help it move to AWS, size everything properly, train the IT staff, and set up its own internal systems for watching usage ...
Disclosure: Jeff Bezos is an investor in Business Insider through his personal investment company Bezos Expeditions."
Apparently AWS has a very efficient core business (VM hosting), and many other services are added by an ecosystem of third-party partners, in a similar way to what Microsoft does in the enterprise. Even some other cloud providers are moving to focus on services. Is it a "winner(s) take all" market?
"Amazon Web Services (AWS) pricing on basic services declined 10% to 20% annually since 2014
So price cuts are stabilizing while the big three cloud vendors keep reducing costs from their infrastructure. That means that profit margins for the cloud providers should improve, which is good news for investors. And, the report’s authors conclude that the total market for public cloud is so huge, and that cloud adoption rates are still so low, that there is room for all of the big cloud providers to grow—and profit—going forward."
“Nearly all of us buy into what I call the myths of happiness—beliefs that certain adult achievements (marriage, kids, jobs, wealth) will make us forever happy and that certain adult failures or adversities (health problems, not having a life partner, having little money) will make us forever unhappy. This reductive understanding of happiness is culturally reinforced and continues to endure, despite overwhelming evidence that our well-being does not operate according to such black-and-white principles ...” Hedonic adaptation: we get used to the good (and bad) in our lives very quickly. "Human beings have the remarkable capacity to grow habituated or inured to most life changes; we are prone to take for granted pretty much everything positive that happens to us."
Appreciation: see the best in others
respond "with interest and delight" (active-constructive); don't point out all the things that could go wrong (active-destructive)
"The most robust strategy to boost optimism is keeping a journal regularly for ten to twenty minutes per day, in which we write down our hopes and dreams for the future"
circadian rhythms: daily cycles
90 to 120 minute cycles:
first hour and a half: high energy
vigorous and focused
then a 20 minute "dip": fatigue, lethargy, and difficulty concentrating
our energy oscillates: we’re focused and on. Then we need to relax and turn off (for 15-20 minutes)
Money does not make you happy:
A mountain of research has shown that materialism depletes happiness, threatens satisfaction with our relationships...
As philosophers, religious figures, and humanistic psychologists have long contended, the pursuit of money and reputation redirects our energies and passions away from deeper and more meaningful social connections and growth experiences, and prevents us from achieving our full potential
First, don’t spend money on “stuff”—you’ll hedonically adapt to it. Rather, spend money on experiences, on developing yourself, and on connections.
Spend money on others, not yourself (research shows that will make you happier).
Spend money to buy yourself time.
Spend money now but wait to enjoy it ("anticipation" makes you happy)
The key to happiness and health is not how intensely happy we feel, but how often we feel positive or happy.
One of the surest ways to focus on the future without dwelling on a seemingly idyllic past is by working toward significant life goals. ‘There is no happiness without action’; there is no happiness without goal pursuit... choose goals wisely:
Goals must be intrinsically (internally) rather than extrinsically (externally) motivated.
Goals must satisfy innate human needs (such as the need to be an expert at something, to connect with others, and to contribute to our communities, rather than simply desiring to be rich, powerful, beautiful, or famous)
Goals must be aligned with our own authentic values; they must be reachable and flexible; and, ideally, they should focus on attaining something rather than evading or running away from something.
The pursuit of all of these types of goals has been found to be associated with greater happiness, fulfillment, and perseverance.
Although we can (and should) reach for our loftiest dreams, we need only to begin by breaking the goals down into sub-goals and daily aims.
“the entire ‘follow your dreams’ oeuvre places a heavy emphasis on goal achievement rather than goal pursuit”... which is wrong, since we hedonically adapt to the new state quickly.
"ZeroMQ defines a number of socket types in order to support very distributed and fault-tolerant applications. The ones we are interested in are as follows:
REP: The only thing this socket does is receive requests and then reply to them.
REQ: This socket is the opposite of REP - it sends requests and reads replies to them.
PUB: This socket broadcasts (publishes) information to anyone who is listening.
SUB: This socket subscribes to a PUB socket and listens to all its broadcasts.
ROUTER: This socket can be used as a multi-user REP socket. It can receive requests from many other sockets and reply to all of them. ROUTER sockets store the identity of the source of the message before sending the message to the application, and the application receives messages from all origins. When replying to a message, the ROUTER socket will send the reply to the origin of the request.
DEALER: This socket allows round-robin communication between sets of sockets. Messages sent on a DEALER are distributed among its connected peers in round-robin fashion, and incoming messages are fair-queued from all peers. This allows sets of sockets to communicate without explicit knowledge of all the sockets in the set."
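The REQ/REP pair above can be sketched in a few lines of Python, assuming the pyzmq binding is installed (pip install pyzmq). The inproc transport lets both sockets live in one process purely for illustration; real services would use tcp endpoints.

```python
# Minimal REQ/REP round-trip using pyzmq (assumed installed).
import zmq

ctx = zmq.Context()

rep = ctx.socket(zmq.REP)
rep.bind("inproc://demo")      # REP: receives requests, then replies to them

req = ctx.socket(zmq.REQ)
req.connect("inproc://demo")   # REQ: sends requests, reads replies

req.send(b"ping")              # the request is queued by the inproc transport
assert rep.recv() == b"ping"   # REP receives the request...
rep.send(b"pong")              # ...and must reply before it can recv again
assert req.recv() == b"pong"   # REQ reads the reply

ctx.destroy()
```

Note the strict lockstep: a REQ socket must alternate send/recv, and a REP socket must alternate recv/send, which is what makes the pair simple to reason about.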
"nteract is a desktop application that allows you to develop rich documents that contain prose, executable code (in almost any language!), and images. Whether you're a developer, data scientist, researcher, or journalist, nteract helps you write your next code-driven story.
Azure App Service (used to be called Azure Web Sites) is a "PaaS" (Platform as a Service)
that is based on Windows VMs, at least it was until recently.
Now, there is also a Linux based option, and that is based on containers!
So not only can web apps run on the Linux platform, users can also deploy (Linux-based) containers in a very simple way! Since .NET Core can also run on Linux (containers), the whole deployment can be done directly from Visual Studio (maybe even from a Mac).
Azure App Service is the most convenient way to deploy web applications on Azure
and since even a single instance comes with a 99.95% SLA, it is also the most cost-effective option for small to medium loads.
What is Azure Key Vault? | Microsoft Docs "Azure Key Vault helps safeguard cryptographic keys and secrets used by cloud applications and services. By using Key Vault, you can encrypt keys and secrets (such as authentication keys, storage account keys, data encryption keys, .PFX files, and passwords) by using keys that are protected by hardware security modules (HSMs). For added assurance, you can import or generate keys in HSMs. If you choose to do this, Microsoft processes your keys in FIPS 140-2 Level 2 validated HSMs (hardware and firmware)."
"Azure Container Registry (ACR) is a private registry for hosting container images. Using the Azure Container Registry, customers can store Docker-formatted images for all types of container deployments. Azure Container Registry integrates well with orchestrators hosted in Azure Container Service, including Docker Swarm, DC/OS and Kubernetes. Users can benefit from using familiar tooling capable of working with the open source Docker Registry v2."
"The columnstore index is the standard for storing and querying large data warehousing fact tables. It uses column-based data storage and query processing to achieve up to 10x query performance gains in your data warehouse over traditional row-oriented storage, and up to 10x data compression over the uncompressed data size. Beginning with SQL Server 2016, columnstore indexes enable operational analytics, the ability to run performant real-time analytics on a transactional workload."
A columnstore is data that is logically organized as a table with rows and columns, and physically stored in a column-wise data format.
A rowstore is data that is logically organized as a table with rows and columns, and then physically stored in a row-wise data format. This has been the traditional way to store relational table data. In SQL Server, rowstore refers to a table where the underlying data storage format is a heap, a clustered index, or a memory-optimized table. ... For high performance and high compression rates, the columnstore index slices the table into groups of rows, called rowgroups, and then compresses each rowgroup in a column-wise manner. The number of rows in the rowgroup must be large enough to improve compression rates, and small enough to benefit from in-memory operations.
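Why column-wise storage compresses better can be shown with a toy sketch (this is plain Python with invented sample data, not SQL Server): when all values of one column sit next to each other, repetitive columns collapse almost to nothing.

```python
# Toy illustration: the same table compressed row-wise vs column-wise.
import zlib

colors = ["red", "green", "blue"]
rows = [(colors[i % 3], str(i)) for i in range(10_000)]

# Row-wise layout: the values of one row stay adjacent.
row_wise = "\n".join(",".join(r) for r in rows).encode()

# Column-wise layout: all values of one column stay adjacent,
# so the highly repetitive color column compresses extremely well.
col_wise = ("\n".join(r[0] for r in rows) + "\n" +
            "\n".join(r[1] for r in rows)).encode()

print("row-wise compressed:", len(zlib.compress(row_wise)))
print("col-wise compressed:", len(zlib.compress(col_wise)))
```

On this data the column-wise layout compresses noticeably better, which is the intuition behind rowgroups being compressed column by column.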
Subjects - OpenStax "Open source. Peer-reviewed. 100% free. And backed by additional learning resources. Review our OpenStax textbooks and decide if they are right for your course. Simple to adopt, free to use. We make it easy to improve student access to higher education."
Docker is a tool for managing OS containers.
Those containers used to be based on Linux only,
but now containers are also available on Windows 10 and Windows Server 2016.
Containers for Linux and for Windows are not the same, since the underlying OS is different. Still, Linux containers can run on Windows by running a Linux VM in Hyper-V.
So one can use Linux containers even on older Windows hosts (7, Server 2012, etc., via a Linux VM), while Windows containers themselves require Windows 10 or Windows Server 2016.
Complicated? Wait, this is just the start. But in fact it is not too complicated, and it is useful.
Support for containers is based on virtualization technology included in the OS:
Linux or Windows, or even Solaris, which was the first to have them.
Essentially this is a set of system-level APIs.
Docker is "just" a convenient tool that helps with managing OS containers. Apparently this was a very important innovation, since the APIs were present in the Linux kernel for years and not used much before Docker. In fact, a variant of "containers" technology was included in Windows 8 for Windows Store apps, and enhanced in Windows 10 and Server 2016.
Sun "invented" containers (Solaris Zones), which made Solaris very efficient at hosting many web sites.
Containers are created as "images" (essentially archive files) that include the "difference" between a "clean base OS"
and the desired content of the OS instance, with additional files and, in the case of Windows, also Registry configuration etc. Those "instance" packages are usually much smaller than VM images since they only include files that are not present in the base OS.
A container image can be also "built" on top of another container image, almost like object oriented class inheritance or snapshots of virtual machines. This way it is easy to create updates and variations of containers. Very powerful and compact.
The key benefit of using containers is efficiency: an OS instance running in a container leverages the storage and memory of the host OS, avoiding duplicated resource usage, and can start in milliseconds, compared with the seconds or even minutes required for a full OS to start. A container instance actually runs as a standard process in the host OS, nothing too complicated.
Thanks to those special virtualization APIs in the OS kernel, such a process can control which resources are visible to the applications running inside the container. That is the key "magic".
So containers are an efficient and convenient way to package an application with its required resources,
and Docker is a very popular tool for managing them. But this is not all, it is just the beginning!
Since containers are lightweight and convenient, they are a perfect match for "cloud" applications,
i.e. those using "micro-services architectures" that can have a very large number of instances.
Docker is good for manual and interactive management of a few containers in one host OS. For managing hundreds or thousands of containers, more powerful and more complex tools are needed, and available. After all, Docker is a startup business and they also need an "exit strategy" for making some money. Enter Docker Swarm, Docker's own platform for managing containers across many VM hosts. There are alternative solutions for the same or similar purpose, like Mesos DC/OS, Kubernetes by Google, etc. Microsoft Azure has its own Service Fabric platform, but it conveniently supports all major container management tools, making it easy to deploy and configure the required clusters of resources for creating and managing containers of various kinds.
Containers are efficient, but due to the implementation, security is relatively limited, relying on the process isolation available in the host OS. So when implementing containers in Windows Server 2016, Microsoft decided to create not one but two variants of containers: "Windows Containers" and "Hyper-V Containers". As the name suggests, the first kind is standard OS-based, and the second is based on Hyper-V virtual machine technology, giving a running container instance the same security isolation as a VM, at the cost of slightly more overhead compared to Windows Containers.
"Microsoft’s Azure Bot Service, now in preview, builds on the company’s much touted Bot Framework, which aims to support all the popular chat and texting applications in the universe.
“This is a full application, a managed platform that does all you need to host a bot,” Microsoft executive vice president Scott Guthrie told Fortune."
The service relies on Azure Functions, which, like the Lambda service from Microsoft rival Amazon Web Services, lets developers quickly build capabilities that are triggered by a user action or some sort of software trigger or event."
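The trigger-driven model can be sketched in plain Python (this is an illustrative toy dispatcher, not the actual Azure Functions or Lambda API; all names here are made up): functions are registered against named triggers and fire when a matching event arrives.

```python
# Toy event-trigger dispatcher: register functions, fire them on events.
handlers = {}

def on(trigger):
    """Decorator: bind a function to a named trigger."""
    def register(fn):
        handlers.setdefault(trigger, []).append(fn)
        return fn
    return register

def fire(trigger, payload):
    """Deliver an event payload to every function bound to its trigger."""
    return [fn(payload) for fn in handlers.get(trigger, [])]

@on("http_request")
def greet(payload):
    return f"Hello, {payload['name']}!"

print(fire("http_request", {"name": "Azure"}))  # -> ['Hello, Azure!']
```

The real services add scaling, billing per invocation, and many built-in trigger sources (HTTP, queues, timers), but the core idea is this mapping from events to functions.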
"Microsoft’s artificial intelligence research division has announced a partnership with the Elon Musk-backed nonprofit OpenAI. As part of the deal, OpenAI will get access to Microsoft’s latest virtual machine technology for running large-scale AI training and simulation exercises, while Microsoft will have cutting-edge research conducted on its Azure cloud platform. OpenAI, founded last December by Musk and Y Combinator president Sam Altman, is focused on developing AI that has long-term positive impacts on society, instead of software that could potentially be used for harm or is solely profit-motivated in its creation."
"In this episode of the O’Reilly Data Show, Ben Lorica spoke with Christopher Nguyen, CEO and co-founder of Arimo. Nguyen and Arimo were among the first adopters and proponents of Apache Spark, Alluxio, and other open source technologies. Most recently, Arimo’s suite of analytic products has relied on deep learning to address a range of business problems."
Near the end of the interview, Nguyen suggested that general-purpose processors are less efficient (spend more energy) simulating neural networks, and that hardware improvements lead to more specialized hardware for ML and AI, eventually producing a specialized alternative to the classic transistor that would be more similar to neurons and could represent not only the states 0 and 1 but also values in between. He didn't mention quantum computing, where values are not deterministic, just a more efficient basic hardware unit.
CPUs (Central Processing Units) are general-purpose, but slow at emulating specialized processing like that used for AI.
xPUs (like the GPU, Graphics Processing Unit) are much more efficient at parallel operations, so they are now used for AI processing.
FPGAs (Field-Programmable Gate Arrays) are customizable for specific tasks, and Microsoft is using them on Azure to speed up specialized operations like database and web workloads.
ASICs (Application-Specific Integrated Circuits) are custom-designed for maximum performance on specialized tasks, and are not programmable.
"Scientists at IBM Research have created by far the most advanced neuromorphic (brain-like) computer chip to date. The chip, called TrueNorth, consists of 1 million programmable neurons and 256 million programmable synapses across 4096 individual neurosynaptic cores. Built on Samsung’s 28nm process and with a monstrous transistor count of 5.4 billion, this is one of the largest and most advanced computer chips ever made. Perhaps most importantly, though, TrueNorth is incredibly efficient: The chip consumes just 72 milliwatts at max load, which equates to around 400 billion synaptic operations per second per watt — or about 176,000 times more efficient than a modern CPU running the same brain-like workload, or 769 times more efficient than other state-of-the-art neuromorphic approaches. Yes, IBM is now a big step closer to building a brain on a chip."
"The animal brain (which includes the human brain, of course), as you may have heard before, is by far the most efficient computer in the known universe. As you can see in the graph below, the human brain has a “clock speed” (neuron firing speed) measured in tens of hertz, and a total power consumption of around 20 watts. A modern silicon chip, despite having features that are almost on the same tiny scale as biological neurons and synapses, can consume thousands or even millions of times more energy to perform the same task as a human brain."
"In this 30-minute, hands-on virtual lab, you will be guided through the basics of installing, deploying, and managing a Docker container, as well as fundamentals for incorporating Docker on Hyper-V into your current development plans."
# open PowerShell as an admin
PS> Get-WindowsOptionalFeature -Online -FeatureName containers
PS> Get-WindowsOptionalFeature -Online -FeatureName *hyper*
# if the features show as Disabled, enable them (a reboot is then required):
PS> Enable-WindowsOptionalFeature -Online -FeatureName containers -All
PS> Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All
PS> Restart-Computer -Force
Microsoft PowerApps and Flow are generally available starting tomorrow - The Official Microsoft Blog "Both PowerApps and Flow will be included with Dynamics 365 and in the subscriptions of millions of Office 365 Enterprise and Business Premium and Essentials users. PowerApps and Flow join Microsoft Power BI to create what we on the team refer to as the power trio. Collectively they allow “power users” (read: non-developers) to get done what would have in the past required programming skills, with each playing a specific role:
Microsoft reimagines open source cloud hardware | Blog | Microsoft Azure "The building blocks that Project Olympus will contribute consist of a new universal motherboard, high-availability power supply with included batteries, 1U/2U server chassis, high-density storage expansion, a new universal rack power distribution unit (PDU) for global datacenter interoperability, and a standards compliant rack management card. To enable customer choice and flexibility, these modular building blocks can be used independently to meet specific customer datacenter configurations. We believe Project Olympus is the most modular and flexible cloud hardware design in the datacenter industry. We intend for it to become the foundation for a broad ecosystem of compliant hardware products developed by the OCP community."
msgpack/spec.md at master · msgpack/msgpack · GitHub "MessagePack is an object serialization specification like JSON. MessagePack has two concepts: type system and formats. Serialization is conversion from application objects into MessagePack formats via MessagePack type system. Deserialization is conversion from MessagePack formats into application objects via MessagePack type system."
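The "formats" side of the spec is easy to make concrete. Below is a toy encoder for just three MessagePack formats (positive fixint, fixstr, and fixmap); it is a sketch for illustration, not the real msgpack library, which covers the full type system.

```python
# Toy encoder for a tiny subset of the MessagePack spec:
# positive fixint (0x00-0x7f), fixstr (0xa0 | len, len <= 31),
# and fixmap (0x80 | size, size <= 15).
def pack(obj):
    if isinstance(obj, int) and 0 <= obj <= 0x7F:
        return bytes([obj])                       # positive fixint
    if isinstance(obj, str) and len(obj.encode()) <= 31:
        data = obj.encode()
        return bytes([0xA0 | len(data)]) + data   # fixstr: tag byte + payload
    if isinstance(obj, dict) and len(obj) <= 15:
        out = bytes([0x80 | len(obj)])            # fixmap header
        for k, v in obj.items():
            out += pack(k) + pack(v)              # key/value pairs in order
        return out
    raise ValueError("type/size outside this toy subset")

print(pack({"a": 1}).hex())  # -> 81a16101
```

Reading the output byte by byte: 0x81 is a fixmap of size 1, 0xa1 0x61 is the one-character fixstr "a", and 0x01 is the positive fixint 1 — serialization is exactly this mapping from objects to format bytes.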
Netflix "Chaos Monkey" and related tools are invaluable for ensuring that cloud based applications still continue working in cases of failed services.
Microsoft is also using similar techniques, both for Windows and Azure.
"Chaos Apes" are chaos tools that are not only random but a bit "intelligent", learning over time.
"Chaos Monkey randomly terminates virtual machine instances and containers that run inside of your production environment. Exposing engineers to failures more frequently incentivizes them to build resilient services."
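The principle fits in a few lines (a toy sketch in plain Python with invented instance names, not the real Chaos Monkey): randomly "terminate" one instance per round and verify the service still answers, which is exactly what a resilient multi-instance design must survive.

```python
# Toy chaos loop: kill a random instance, check the service still responds.
import random

instances = {"web-1", "web-2", "web-3"}   # hypothetical service instances

def handle_request(live):
    """A request succeeds as long as at least one instance is alive."""
    return "ok" if live else "outage"

random.seed(42)                            # deterministic for the example
for _ in range(2):                         # each round, the monkey strikes once
    victim = random.choice(sorted(instances))
    instances.discard(victim)
    print(f"terminated {victim}: service {handle_request(instances)}")
```

With three instances and two kills the service keeps answering; a single-instance design would fail the very first round, which is the lesson the chaos tools teach.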
Chaos Test Service | Microsoft Azure "Learn how to write a service that uses Microsoft Azure Service Fabric's built-in Chaos Test to exercise your code's fault tolerance by putting a little chaos in your cluster. Service Fabric includes a suite of tools specifically designed to test running services. You can easily induce meaningful faults and run test scenarios to exercise and validate the numerous different states and transitions a service will experience throughout its lifetime, all in a controlled and safe manner. The chaos test induces random faults - everything from moving replicas to restarting entire nodes."
Standalone Network Emulator Tool – Knock, knock! Who's there? "NEWT (Network Emulator for Windows Toolkit) is a software-based emulator which can emulate the behavior of both wired and wireless networks using a reliable physical link, such as an Ethernet. A variety of network attributes are incorporated into the NEWT emulation model, including round-trip time across the network (latency), the amount of available bandwidth, queuing behavior, packet loss, reordering of packets, and error propagations. NEWT also provides flexibility in filtering network packets based on IP addresses or protocols such as TCP, UDP, and ICMP."
"This Bolt versus Model S 60 comparison is the exhibition before a more appropriate title fight—Bolt versus Elon Musk’s $35,000 Tesla Model 3. Set to arrive in late 2017 with a price much closer to the Bolt’s"
Bolt: 238 miles of EPA-rated range, $41,780 before rebates, 0 to 60 mph in 6.3 seconds
State votes are not proportional to state surface area on the map.
A better data visualization uses bars proportional to votes.
Here is my "data science experiment",
with colors and counts changing interactively when you click on the state bars.
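A rough text-only sketch of the bar idea, using a handful of states with their actual 2016 electoral-vote counts (the state selection is just an example):

```python
# Bars proportional to electoral votes (2016 apportionment), unlike map
# area, which over-weights large but sparsely populated states.
votes = {"California": 55, "Texas": 38, "Florida": 29, "New York": 29}

for state, v in sorted(votes.items(), key=lambda kv: -kv[1]):
    print(f"{state:<12} {'#' * v} {v}")
```

Even this crude version makes California's weight obvious at a glance, which the geographic map hides.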