Massively Multiuser Online Gaming (MMOG)

As mentioned on the home page, Elite is not a bad place to begin thinking about the conceptual nature of gaming and its societal impact. While admittedly a mere minnow compared with what has since followed (Dark Orbit, Battlestar Galactica, Ogame), it highlighted the thought process, the storyline ambition, and the conscious construction of an alternate, galaxy-sized reality into which people could step to leave their current reality behind. Pandora’s box was irreversibly opened!

Once the engagement power of such alternate realities was understood, it wasn't long before other genres joined the fray. MMOGs now deliver new realities in genres such as:

Note: These categories are probably interchangeable. More information on Videogames & MMOG

The point is that from this early 1970s genesis emerged a global industry feeding the needs of consumers worldwide. As we saw, that industry is now worth $100Bn worldwide (Gartner), with consumers increasingly discerning about what makes for a good game. So, aside from the pure pleasure of playing these amazing games, my interest in MMOGs stems from their ceaseless, fundamental and obligatory need to be bleeding edge across all of the following (note: non-exhaustive) domains of human knowledge:

  • Art & Graphic Design - without astounding graphics for the infinity of input variables (characters, worlds, ships, cars, weapons, spacecraft, etc.), players would become bored and migrate to another provider or platform;
  • Economics - of course, in a virtual economy the “central bank” can keep printing money and adding infinite resources, but this distorts the reality of the situation and often removes the underlying Maslovian impetuses and behavioural motivations that people need to remain interested in playing, learning, and building. Managing a virtual economy exhibits the same laws of micro- and macro-economics as the real world. More than this, the economics involved in MMOGs is now pushing the boundaries of our knowledge of the subject;
  • Psychology & Behavioural Science - an MMOG is designed specifically to enable collaboration, alliances, and conflicts. But these interactions are by nature random and uncontrolled, and once the game is “live” they can take unexpected turns. From a design perspective, goals, challenges and rewards can be baked in, but the law of unforeseen consequences dictates that the behaviours of players will group into psychologically aligned patterns based on rational and irrational human motivations. This leads designers to a need to understand the behavioural psychology of swarms, crowds, leadership, conditioning, reinforcement, etc.;
  • Mathematics - almost the founding basis for MMOG designers and writers. Early in the development from single-player games to Multiuser Domains (MUDs), the need arose to understand the dynamics of player interaction in a finite space. Germinating from these early algorithms, an entire universe of mathematical understanding was needed to feed the appetites of players. Game theory in particular is needed to statistically estimate the in-game evolution of players, NPCs, etc.;
  • Literature - MMOGs require a well-evolved and cohesive storyline. The base premise of the world, domain, or universe the designer creates must hold water. NPC identities, guilds, alliances, fiefdoms, etc. must all be thought through in minute detail, together with characteristics such as morality, ethics, and politics, so that the underlying construct exhibits a sense of realism;
  • Education - in some part people play games to learn, to evolve, and to grow. This could be as rudimentary as the motivation of “one-upmanship”, but game designers need to think through the ultimate goals of any MMOG far beyond simplistic notions. For instance, even the most obvious linear construct of going from Level 1 to Level 100 needs careful evaluation of the learning curve, the ability to fall backwards, the ability to skip ahead, the expected time taken at each increment, and the embedding of a “Goldilocks” approach to each successive step. Games today are far beyond this embryonic example, with n-scale non-linearities needing very deep modelling as part of the design phase;
  • Sociology - we all understand that societies develop over time, but the question is how. How are we so different in 2013 from when I grew up in the 1970s? An inordinate number of factors impact our daily lives without us even noticing, and those factors need to be mirrored by MMOG designers within the game. Thought and (ultimately) programming need to attend to the incremental changes that propel forward (or regress) the society or societies in the game. Cultural factors, familial units, political faction development, gender considerations, generational and economic divides, religious implications, and many more must be considered and planned;
  • Anthropology - sociocultural anthropology in particular provides input for game designers into the logic of societal development through ethnography. Designers can learn from the development cycles of human history and try to extrapolate from them the expected progression of societies in their games. This is especially important when designing large-scale, computer-generated societies which will interact with players either peripherally or directly. These societies will need to evolve in the game as much as the human players do, and the impact and type of interactions across thousands, even millions, of players needs to be understood;
  • Political Science - closely linked to many of the above areas. Depending on the type or nature of the MMOG, there is a critical need to develop political ideologies for groups within the game. An obvious example is a war situation where your tribe, clan, horde, guild, nation or country is from the outset pitted against another. The political allegiance story needs to make clear where players fit within the political structure of the war effort (their locus standi, if you like, within their community), whether they can change this, how the war will progress, and whether there are Machiavellian factions within your own side. This can build intrigue and interesting alliances that coalesce to make the game more enjoyable, but thought needs to be given during design to the political processes, systems, and ideologies involved;
  • Monetization & Commercial Modeling - in the early days of Elite and the like, the monetization model was prima facie simple: a single upfront payment delivering access to the game software. Over time, and with the rise of the Internet, online games monetized using a simple subscription mechanism, denoted Pay-to-Play (P2P). With the advent of smartphones, tablets and casual games, this P2P model metamorphosed into a Free-to-Play (F2P) model, with new monetization apparatus devised to elicit revenues for game designers and publishers;
  • Law & Intellectual Property Rights - finally for this diatribe is the critical area of legal rights, rights management, ownership, patents and copyrights. Collectively, the area of law most affecting MMOGs is that of Intellectual Property Rights (IPR). On many fronts, MMOGs are forcing legislatures to break new ground. In the early stages, the distribution of games fitted clearly within the extant strictures of common-practice international trade. However, with globalisation and the Internet, new contrivances were unveiled to keep pace with cross-border players. A simple example would be the tax-law implications of casino-style gaming where a foreign national plays and wins: is there tax? Other significant developments are in the area of virtual building profits (whether character enhancements or virtual-environment enhancements), where a player builds, through many hours of play, a set of features or capabilities and sells them on the secondary market for “real” money: is this taxable?

All the World's a Triangle

The purpose of this site is to attempt to highlight the criticality of videogames as a component of the “convergence” of technologies (Cloud, Gaming/MMOG, Gamification and BigData) that is clear to many of us inside the IT world. A key component of understanding this convergence is the emergence of the humble Graphics Processing Unit (GPU) as part of the bedrock for our future exploitation of computer processing as a whole.

The PC and home gaming console revolutions required affordable microprocessors, but given that around half of our brain's neurons are associated with vision, we needed something even more important: displays and graphics. We needed to be able to display our outputs. Neither business nor home users would have invested as heavily as they did if we couldn't “see” into our devices, if we couldn't input characters and visualize the results. This race had been going on for a couple of decades beforehand, with businesses and scientists also needing to view inputs and outputs. The breakthrough came in the seminal work of Ivan Sutherland, whose Ph.D. thesis introduced the world to “Sketchpad” and interactive computer graphics.

To understand computer display graphics we need to return briefly to the late 1800s and the works of the famous artists Georges Seurat, Paul Signac and the other pointillists of the period. The pointillist approach was to form a picture from single points or dots of colour in such a manner that, viewed at a distance, the dots form a cohesive image. Whether or not computer graphics was ever deliberately conceived as a form of pointillism, the parallels are obvious.

An image or visualization presented to the human eye can be formed from many very small points of colour. This approach is now the cornerstone of all computer graphics, with pixels (single, indivisible points of illumination) projected onto a display to form characters, lines and ultimately images. For computer processing it means we can think of the display as a grid of single points and manipulate the existence and colour of each point. (Now I know I should have listened better in Mrs. McKenna’s Cartesian coordinate classes! Be aware, too, that a computer screen is not precisely a Cartesian system, and a mapping is required to convert to a display format, but for the purposes of description this simplified view should suffice.) If we divide up a display from the top-left corner to the bottom-right corner, we can create a grid thus:
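The mapping mentioned in the aside can be made concrete. Below is a minimal Python sketch (my own illustration, not from the article, with made-up dimensions) converting a Cartesian (x, y) coordinate, origin at the bottom-left, into the (row, column) address a display actually uses, origin at the top-left:

```python
# A hypothetical 8-wide, 6-high grid of pixels.
WIDTH, HEIGHT = 8, 6

def to_screen(x, y):
    """Map Cartesian (x, y), origin bottom-left, to screen (row, col), origin top-left."""
    assert 0 <= x < WIDTH and 0 <= y < HEIGHT
    return (HEIGHT - 1 - y, x)

print(to_screen(0, 0))  # bottom-left corner -> last row, first column: (5, 0)
print(to_screen(7, 5))  # top-right corner -> first row, last column: (0, 7)
```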

Now we can tell the computer to “turn on” pixels (we will deal with colours later in Section 2.0) at certain points in the coordinate system to create lines and shapes. So, in a simplistic sense, a sequence of binary “turn ons and turn offs” could be built to display little figures. When the sequence on the left below is pushed to the display processor, with 1 meaning on and 0 meaning off, it would be displayed as on the right:
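The left/right figure is not reproduced here, but the same idea can be sketched in a few lines of Python (my own illustration, not from the article), with a grid of 1s and 0s rendered as on/off characters:

```python
# A 5x5 "bitmap": 1 = pixel on, 0 = pixel off. This one draws a little ring.
bitmap = [
    [0, 1, 1, 1, 0],
    [1, 0, 0, 0, 1],
    [1, 0, 1, 0, 1],
    [1, 0, 0, 0, 1],
    [0, 1, 1, 1, 0],
]

def render(frame):
    """Turn a grid of 1s and 0s into lines of text, one character per pixel."""
    return "\n".join("".join("#" if px else "." for px in row) for row in frame)

print(render(bitmap))
```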

Once a programmer knows how to place the 0s and 1s onto a display screen, they can introduce an appropriate time delay and mimic movement of lines and shapes by changing the placement of the 0s and 1s over a time cycle. This gives us animation, in the same way you could draw a stick figure on the edges of a book and flick the pages to create a sense of movement. The last piece of this jigsaw returns us to “Sketchpad”, which in paradigm terms created the link between an external human movement and the computer display visualization. This innovation meant that a human could interact with and control the movement of the pixels without needing to understand precisely how this was done; it abstracted all of the complexity involved from the user. It didn’t take long for other innovators to grasp the significance of interactivity and how it could be used to build an entire industry.
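The flick-book effect can be sketched directly (again my own minimal Python illustration, with a text terminal standing in for the display): the same row of pixels is redrawn with the “on” pixel shifted one column per tick, which the eye reads as movement.

```python
import time

WIDTH = 20

def frame(x):
    """One row of pixels with a single pixel 'on' at column x."""
    return "".join("#" if col == x else "." for col in range(WIDTH))

# Redraw the row over a time cycle, shifting the on-pixel right each tick.
for x in range(WIDTH):
    print(frame(x))
    time.sleep(0.05)  # the "appropriate time delay" between frames
```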

Oh no mathematics - OH YES MATHEMATICS!!

“Give me a lever long enough and a fulcrum on which to place it, and I shall move the world”

Archimedes

Once we had a way of representing points, and from points lines, and from lines shapes, it didn't take long to realize that with a big number-crunching machine and a way of defining lines, curves, ellipses, circles, polygons and all manner of regular and irregular shapes, we could get graphics rendered easily. Welcome back the mathematics of Euclid, Descartes, Desargues, Hilbert and many more.

The algorithms of these giants of mathematics enabled us to draw shapes. But they also enabled far more than simply drawing a line or a shape, delivering extraordinarily efficient mechanisms for:

  • Reflection;
  • Rotation;
  • Translation;
  • Glide reflection;
  • Scaling;
  • Shearing.

The benefits of this geometric approach, coupled with ingenious algorithms leveraging a computer's number-crunching power, meant that all manner of operations could be performed from knowledge of the points, or vertices, alone:

  • Draw, reflect, rotate, translate (move), and scale lines & curves;
  • Draw, reflect, rotate, translate (move), and scale shapes (polygons, circles, ellipses, etc.);
  • Solid fill (basically, add a colour between the lines of a shape);
  • Pattern fill (superimpose a checkerboard or other pattern on a shape);
  • Clipping (cut a slice out of) lines and shapes; and of course
  • Composites of more than one of the above.
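Almost all of these operations reduce to simple matrix arithmetic on the vertices. As a minimal illustration (my own sketch, not from the article), applying a 2×2 matrix to each vertex performs reflection, rotation, scaling or shearing:

```python
def transform(points, m):
    """Apply a 2x2 matrix m = ((a, b), (c, d)) to a list of (x, y) vertices."""
    (a, b), (c, d) = m
    return [(a * x + b * y, c * x + d * y) for (x, y) in points]

triangle = [(0, 0), (4, 0), (0, 6)]

# Each classic transformation from the list above is just a choice of matrix:
reflect_y = ((-1, 0), (0, 1))   # reflection in the y-axis
rotate_90 = ((0, -1), (1, 0))   # 90-degree anticlockwise rotation
scale_2x  = ((2, 0), (0, 2))    # uniform scaling by a factor of 2
shear_x   = ((1, 1), (0, 1))    # shear parallel to the x-axis

print(transform(triangle, scale_2x))  # [(0, 0), (8, 0), (0, 12)]
```

Translation is the odd one out: it needs a vector addition rather than a 2×2 multiplication (or, equivalently, 3×3 matrices in homogeneous coordinates), which is exactly the trick real graphics pipelines use to compose every transformation into a single matrix multiply.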

The return of the humble triangle

The title of this section was the giveaway. The simplest shape with area is the triangle, and mathematically it can be described by just three vertices. Two triangles positioned side by side share two vertices (which saves memory!). It didn't take long for us to realise that literally any shape can be described using triangles. Another excellent advantage of triangles is that, when we come to fill in surfaces, we can work out their areas extremely efficiently. Back in school days, the area of a triangle was simply 1/2 × base × height. The trouble was that this isn't processing-efficient for a computer, which would have to work out the base and height first.

What was needed was a simpler method of working out the area from the knowledge we have, i.e. the three vertices. Thankfully for all of us, both René Descartes (1596–1650) and Pierre de Fermat (1601–1665) had foreseen in 1970 the need for computer graphics engineers to have such a simple method for working out the area of a triangle when we know the vertices A, B and C (okay, that might be a bit of a fib!). The method involves the linear algebra “determinant” of a square matrix: place the coordinates of the three vertices in a three-by-three matrix (with a final column of 1s), and half the absolute value of its determinant is the area. Expanded out, with vertices A = (xA, yA), B = (xB, yB) and C = (xC, yC):

Area = |xA·yB − xA·yC + xB·yC − xB·yA + xC·yA − xC·yB| / 2

“Hang on just a second, Eamonn!” you’re probably saying, “we don’t have a three-by-three matrix of points. In our coordinate system each point has only two coordinates!” The column of 1s fills the gap. Let’s work it out using a simple right-angled triangle whose area is trivial to compute in advance. This triangle is not only right-angled, but we easily know its height (6) and base (4), so the old-fashioned method gives us a known area of 12. Let’s double-check this with our determinant method!

So we now know that six simple multiplications, five additions and a division by 2 yield the area of any triangle when you know the three vertices. To a number cruncher, that can be done fast. Very, very fast.
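The worked check can be run directly. Here is a small Python sketch (my own, not from the article) of the determinant method, verified against the right-angled triangle with base 4 and height 6:

```python
def triangle_area(ax, ay, bx, by, cx, cy):
    """Area of a triangle from its three vertices, via the determinant method:
    six multiplications, five additions/subtractions, one division by 2."""
    return abs(ax * by - ax * cy + bx * cy - bx * ay + cx * ay - cx * by) / 2

# The right-angled triangle from the text: base 4, height 6, known area 12.
print(triangle_area(0, 0, 4, 0, 0, 6))  # 12.0
```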

Sources & Further Reading

  • http://vimeo.com/16292363
  • https://people.richland.edu/james/lecture/m116/matrices/applications.html
  • https://people.richland.edu/james/lecture/m116/matrices/area.html
  • http://en.wikipedia.org/wiki/Ren%C3%A9_Descartes
  • http://en.wikipedia.org/wiki/Pierre_de_Fermat
  • http://en.wikipedia.org/wiki/Determinant
  • Boreskov, A. & Shikin, E., ‘Computer Graphics: From Pixels to Programmable Graphics Hardware’ (2013, CRC Press)
  • Shirley, P., Marschner, S., et al., ‘Fundamentals of Computer Graphics’ (2000, CRC Press)
  • Foley, J. (ed.), ‘Computer Graphics: Principles and Practice’ (1996, Addison-Wesley)
  • Govil, S., ‘Principles of Computer Graphics: Theory and Practice Using OpenGL and Maya®’ (2004, Springer)
  • Klawonn, F., ‘Introduction to Computer Graphics’ (2012, Springer)
  • Johnson, A., ‘Basic Concepts for Computer Graphics’ (2007, iBook Store)
  • Ryan, D., ‘History of Computer Graphics’ (2011, AuthorHouse)
  • Cohen, J., ‘Visual Color and Color Mixture: The Fundamental Color Space’ (2001, Google Books)

So what!

Good question! Thanks to these initial breakthroughs in understanding, and to the employment of triangles to enable fast calculation of graphics, it became clear as the technology evolved that the calculations were trivial enough to be embedded in hardware, which of course means they run even faster. Further iterative improvements during the 1990s led, in October 1999, to the introduction of the Nvidia GeForce 256, the world’s first GPU. For the first time, a single processor existed with integrated transform, lighting, triangle setup, clipping, and rendering. The chip was capable of 10 million triangles per second. This fundamentally altered the architecture of computing. With subsequent evolutions, Nvidia added hardware functionality and programmable components which by 2006 delivered GPU hardware carrying out the following tasks on-chip:

  • Pixel shading;
  • Multi-texture;
  • Programmable vertex shading;
  • Bump mapping (instead of interpolated vertex) to compute lighting per pixel;
  • Cubic texture mapping;
  • Projective texture mapping;
  • Volume texture mapping;
  • Hardware shadow mapping;
  • Anti-aliasing – super-sampling and multi-sampling;
  • Multiple vertex and pixel shaders;
  • Programmable pixel shaders;
  • 64-bit colour;
  • 64-bit floating point processing;
  • High dynamic range imagery; and
  • Real time tone mapping.

It was clear in the late noughties that the notion of harnessing the extraordinary processing speed of a GPU (admittedly for certain specific things) was not going to be left unaddressed, and with the advent of programmable stream processing it was a certainty. Today it is clear to computer scientists and programmers that certain low-latency tasks will always require a general-purpose, non-GPU approach, but many of today's number-crunching batch-processing runs could benefit from using a GPU instead. The mitigating factor is that this requires extensive reprogramming of the base applications. That is no mean feat! In addition, for many of the early years the only approach to programming a GPU was through the OpenGL application programming interface (API) framework. What was needed was a programming framework that would expose the GPU to programmers familiar with higher-level languages (HLLs) such as Fortran, C, C++, or Java. Necessity stepped in once again with the introduction of CUDA and OpenCL:

  • CUDA - Nvidia’s Compute Unified Device Architecture (CUDA™) is a parallel computing platform that makes it possible for programmers to harness the power of highly parallel graphics processors as part of their standard kit bag;
  • OpenCL - Open Computing Language (OpenCL) is a programming framework from Khronos (of OpenGL fame) that provides a heterogeneous platform for parallel computing.
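No CUDA or OpenCL code appears here, but the data-parallel model both expose can be sketched in plain Python (my own illustration; the kernel name and values are made up): the same "kernel" function is applied independently to every element, and it is precisely that per-element independence that lets a GPU assign each element to its own thread.

```python
def saxpy(a, x, y):
    """The classic data-parallel kernel: result[i] = a * x[i] + y[i] for every i.
    Each element is computed independently of the others, so on a GPU every
    element could be handled by a separate thread in parallel."""
    return [a * xi + yi for xi, yi in zip(x, y)]

print(saxpy(2.0, [1.0, 2.0, 3.0], [10.0, 10.0, 10.0]))  # [12.0, 14.0, 16.0]
```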

Now we're ready to answer the "so what?" question. The "so what?" is that, in the pursuit of better and better computer graphics for videogamers, two distinct fields have inadvertently converged: videogame GPUs plus the adoption of new programming frameworks have resulted in a whole new era of supercomputers. Today the design of many of the world's supercomputers is probably pointing the direction forward for general computing as a whole. This design takes the best of both the CPU and GPU worlds that we reviewed in Section 5 and builds a symbiotic architecture that leverages the strengths of both to achieve optimum results. The website Top500.org lists the world's top 500 supercomputers using the FLOPS measurement we outlined earlier. The Top 5 at the time of writing (December 2013) are:

  1. Tianhe-2 with 33.86 Petaflops – no GPU usage, but a highly parallel co-processor, the Xeon Phi;
  2. Titan Cray with 17.59 Petaflops – 18,688 AMD CPUs with 18,688 Nvidia Tesla K20X GPUs;
  3. Sequoia IBM BlueGene/Q with 17.17 Petaflops – no GPU usage, leverages IBM PowerPC A2 processors;
  4. Fujitsu K with 10.51 Petaflops – no GPU usage, leverages 80,000 SPARC64 VIIIfx processors;
  5. Mira BlueGene/Q with 8.59 Petaflops – no GPU usage, leverages IBM PowerPC A2 processors.

Further readings

MMOG Briefing

This short paper broadens (slightly) the introduction above, outlining in more detail the essential characteristics of videogaming and MMOGs.

Read More

Monetizing Free-to-Play (F2P) Games

This short presentation provides a stylized model for understanding how F2P games providers actually make profit.

Read More

The Relevance of Videogames

This longer paper introduces the concepts touched upon above in much greater detail, covering the topics of CPU and GPU architectures, the impact of 3D, pipelining, superscalar, multi-threading, SISD, SIMD, stream processing, OpenGL, GPGPUs and a convergence example based on BigData & MapReduce.

Read More