Defining Games Via Intrinsic Properties

General / 01 May 2018

My position on Salen and Zimmerman's distinctions between non-digital and digital games is that they skillfully identify key elements supporting a difference, yet include the fitting disclaimer that these elements are not enough to distinguish the two as entirely unique.  Basically, I agree with their approach.  Other books also attempt to define terms like game and play, but this is clearly not an easy task.  I do not think a universal definition for such terms is even feasible, considering the limitations and exclusions that any particular wording places on a definition.  And despite working through several examples that differentiate digital from non-digital, Salen and Zimmerman explicitly isolate the properties of games from the various media in which they occur.  I think they succinctly resolve any confusion by noting that "the underlying properties of games are ultimately more similar than different" (Salen 90).


Digital technology is simply a medium, and the distinctions Salen and Zimmerman note can easily be applied to many non-digital games, as their various examples show.  True, these distinctions may be more evident in a digital format like a PlayStation game console, but they also appear in many non-digital games, which reinforces the "underlying properties of games" as a consistent foundation among games (Salen 90).
A recent pastime of mine has been tinkering with a Rubik's Cube, which I think embodies the definition of game noted in the reading: "A game is a system in which players engage in an artificial conflict, defined by the rules, that results in a quantifiable outcome" (Salen 79).  Of particular note is the specific distinction they make for puzzles as a subset of games (Salen 84); I find this to be an adept observation and an accurate perspective.  With the Rubik's Cube, the player is in conflict with the disorder of colored squares, the rules clearly limit the operations permitted when rotating sides of the cube, and the outcome is a solved, fully coordinated cube.


Another recreation of mine is playing games on the PlayStation console.  Thinking of Salen and Zimmerman's digital game distinctions, playing the console games certainly showcases the four traits (Salen 87-89).  The few buttons on the controllers allow for a limited range of interactivity, but the complex system, network, and encyclopedic level of information propel a PlayStation game into a highly engaging conflict for the player that requires resolution.


But some of these traits can easily be applied to the Rubik's Cube as well.  The interactivity is limited by the few operations permitted to rotate the cube's sides, and could be construed as immediate when using a version of the game designed for "speedcubing" (Speedcubing).  The hidden complex system behind each square holds the key to understanding how each square can move around the cube system without it being taken apart.  And with exposure to the intricate internal mechanics, the "information manipulation" trait becomes a bit more apparent (Salen 87).  Even though it does not seem to exhibit networked communication, the Rubik's Cube serves as an example of a non-digital game that shares many of the traits common to digital games.

The strength in Salen and Zimmerman's analysis and perspective of defining a game is the open-ended potential of different experiences to be categorized as a game.  With this, the process of game design can focus on the key principles that support these experiences.


As an additional note, the term experience appears in this reading along with several others where topics like games, play, and game design are discussed.  Reading the different perspectives, definitions, and analyses reminds me of my studies in architecture, where the term experience has been just as prevalent.  When I think of an experience, I think of how I perceive it; it is influenced subjectively, and others would perceive and respond differently.  To me, this makes it difficult to clearly define a term like game or to distinguish non-digital from digital games, but it is their intrinsic properties that can precisely guide game designers through the process of creating experiences.


Works Cited

"Speedcubing - The Fastest Solving of the Rubik's Cube - Ever!" Rubik's, https://www.rubiks.com/speed-cubing/speed-cubing. Accessed 17 April 2018.
"How to take apart the Rubik's Cube and put it back together."  Ruwix: Rubik's Cube Wiki. https://ruwix.com/the-rubiks-cube/how-take-apart-disassemble-the-rubiks-cube-and-put-back-together/. Accessed 17 April 2018.
Salen, Katie, and Eric Zimmerman. Rules of Play. The MIT Press, 2003.

Defining the Digital Medium

General / 01 May 2018

The Microsoft HoloLens is a digital artifact that I propose embodies Ceruzzi's definition of computer and Murray's digital medium characteristics.


Computer Definition
The head-mounted display (HMD) sits on one's head and serves as a mobile computer in which input given by the user solves problems through a network of data, communication, and other processes driven by circuits within the device (Ceruzzi, 1).  Its mobility is similar to that of other mobile devices like laptops, tablets, and phones, but the way in which the user interacts with it is its most distinguishable quality.  The various applications accessed through the HMD interface are software programs operating through the hardware and working with wireless communication networks to relay and compute data.  The interface with which the user interacts is displayed and superimposed on a set of lenses, allowing the user to see both the interface and, through the lenses, the real environment, creating an augmented reality experience.


Encyclopedic Capacity
The HMD epitomizes "our desire to get everything in one place," and it just happens to sit on the user's head without depriving him or her of other non-involved activities (Murray, 6).  With access to various software applications, data storage, and communication networks, the user has the potential to access countless resources.  In addition, the ability to input new information through its interface allows the user to add to these data repositories.


Spatial Navigation
A coordinated "capability for embodying dimensionality" is defined through the HMD's built-in cameras and other hardware that geo-locate its use within the real environment (Murray, 6).  Spatial navigation utilizes the familiar real environment with a new superimposed interface, creating an augmented reality for the user.  As a result, the transition between reality and the computer "as a place" can be seen as blurred (Murray, 6).


Procedural Input
The hardware and software used with the HoloLens are programmed with the steps necessary to compute the assigned procedures, and together with user participation, they define the foundation for its interactivity (Murray, 6).  The built-in software and access to other tools in its network are further enhanced by the various methods through which a user can provide input: cameras built into the HMD that identify unique hand movements and interpret them as input, a microphone for vocal commands, and a handheld accessory that communicates with the HMD over a wireless connection.


User Participation
The HoloLens relies on user participation to react and process tasks.  Again, the distinguishable aspect of its interface is the way in which the user participates.  Input to the HMD can range from looking through the lenses at a particular surface or tracked object to speaking commands or making hand gestures.  While some of these are consistent with other traditional digital media, the natural actions of looking, speaking, and moving one's hands blur the line between reality and "the sense of participating in a world that responds coherently to our participation" (Murray, 6).


Other Criteria?
These four characteristics can create a foundation that "define[s] the boundaries of the digital medium," but I can also see opportunities where other factors may contribute to a greater degree than they already do to these criteria.  For example, communication networks are an important component of the encyclopedic criterion, but it may be that communication networks are emerging as a criterion of their own, in the same way that the "'spatial' property is derivative of the procedural and participatory properties" (Murray, 6).  Personally, and probably for a large part of the population, being out of range of a communication network (loss of signal on a mobile device, lack of WiFi, and other network issues) feels like being out of the loop to the point that the digital medium is no longer fully capable.


Works Cited

Ceruzzi, Paul E. A History of Modern Computing. 2nd Edition, The MIT Press, 2003.
"Microsoft HoloLens: Mixed Reality Blends Holograms with the Real World." YouTube, uploaded by Microsoft HoloLens, 29 February 2016, https://www.youtube.com/watch?v=Ic_M6WoRZ7k.
Murray, Janet H. "Inventing the Medium."  The New Media Reader, edited by Noah Wardrip-Fruin and Nick Montfort, The MIT Press, 2003, pp. 3-11.
"The World’s First Holographic Head-Mounted Display." Microsoft, https://www.microsoft.com/en-us/hololens/hardware. Accessed 27 March 2018.

Time Management Dashboard

General / 30 April 2018

Time management and documentation are critical elements in managing and fulfilling my work duties, but they also hold enriched data for monitoring my progress and evolution. The charts in the following dashboard were developed using the D3 JavaScript library, which connects a dynamic CSV file linked from my daily tasks database to a CSS-styled SVG.  As a result, this is a live dashboard that is updated dynamically from the database to the server and ultimately through the data visualizations.  Scroll through the dashboard to see the first instances of D3-generated charts that I have developed.  Then check out the data analysis workflow at the bottom of this page for a quick glimpse of how data is generated and evolves into valuable information.
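As a minimal sketch of the first step in that chain, the CSV-to-objects parsing that D3 performs (via d3.csvParse) can be illustrated in plain JavaScript. The column names here are hypothetical, not the actual fields of my task database:

```javascript
// Minimal sketch of the CSV-to-objects step that D3's csvParse performs.
// Field names ("date", "hours", "category") are illustrative only.
function parseCsv(text) {
  const [headerLine, ...rows] = text.trim().split("\n");
  const headers = headerLine.split(",");
  return rows.map(line => {
    const values = line.split(",");
    return Object.fromEntries(headers.map((h, i) => [h, values[i]]));
  });
}

const sample = "date,hours,category\n2017-01-03,2.5,Support\n2017-01-04,1,Education";
const tasks = parseCsv(sample);
console.log(tasks[0].category); // → "Support"
```

In the live dashboard this happens every time the page loads, so the charts always reflect the current state of the linked CSV.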

The pie chart to the right showcases the allocation of time dedicated to my roles and responsibilities as the Design Technology Manager and a Project Architect at HOK’s Gulf Coast Region.  The distribution notes the amount of Design Tech. Education & Documentation necessary to provide effective support across a very wide range of tools (see Tech. Analytics below).

The interactive scatter plot below uses data from my daily task activities and the D3 JavaScript library to visualize the distribution of design technology management tasks performed from the beginning of 2017 to the present.  Hover over each data point to see a corresponding data label above the chart with more specific information. The term Support Hours in the data labels covers a wide range of design technology tasks and is a subset of General & Project Design Tech. Support (noted in the pie chart).  The scatter plot goes even further by differentiating the extent of tasks performed in support of the different offices throughout HOK.
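Under the hood, placing each data point in a scatter plot comes down to a linear scale like D3's scaleLinear. A sketch of that mapping in plain JavaScript, with chart dimensions made up for illustration:

```javascript
// Sketch of the linear scale a D3 chart uses to place data points:
// a value in the data domain maps proportionally into the pixel range.
function linearScale([d0, d1], [r0, r1]) {
  return value => r0 + ((value - d0) / (d1 - d0)) * (r1 - r0);
}

// Example: map task hours (0-10) onto a 500px-tall chart area.
// The range is inverted because SVG's y-axis grows downward.
const y = linearScale([0, 10], [500, 0]);
console.log(y(0));  // → 500 (chart bottom)
console.log(y(10)); // → 0   (chart top)
console.log(y(5));  // → 250
```

D3 provides this (and axis generation on top of it) out of the box, but the proportional mapping above is all a scale really is.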

Although this data collection is only for the past few months, an interesting trend is beginning to emerge in the scatter plot chart: the distribution of support is beginning to disperse more among various offices, particularly around the end of April and throughout May.

Throughout any given week, I use a variety of technology as Design Technology Manager for HOK (Gulf Coast Region), but also through my own endeavors in game programming, web design, and VR/AR technology. The bubble and bar charts below both reflect my average utilization of technology, itemized by individual tool and presented in different graphic formats to show the range and variation. The only exception is Revit, which is omitted for brevity and clarity.

DATA ANALYSIS WORKFLOW

The workflow to develop this dashboard requires the discipline to manage the raw data regularly so that the live database linked to the data visualizations stays current and can present valuable information.

Airtable

Data input begins with daily task entries in an Airtable table (sample screenshot below).  Various fields are populated with task information, such as the nature of the task, the amount of time, the design technology used, and several other categories to assist with different analyses.

MySQL

Data is exported weekly from Airtable as a CSV and loaded into a MySQL database for further processing and sorting.  Airtable has some limitations in formatting the data for a particular CSV configuration, but MySQL can perform these modifications in a web-based environment.

JavaScript & D3

The modified CSV data is used along with JavaScript and the D3 (Data-Driven Documents) library to generate dynamic HTML and CSS based on the linked CSV data.
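The reshaping between raw task rows and a chart like the pie chart can be sketched in plain JavaScript, with hypothetical row fields; in MySQL the equivalent would be a GROUP BY with SUM(hours):

```javascript
// Sketch of the aggregation step: group task rows by category and total
// the hours (the shape the pie chart consumes). Row fields are hypothetical.
function totalsByCategory(rows) {
  const totals = {};
  for (const { category, hours } of rows) {
    totals[category] = (totals[category] || 0) + Number(hours);
  }
  return totals;
}

const rows = [
  { category: "Design Tech. Support", hours: "3" },
  { category: "Project Architecture", hours: "5" },
  { category: "Design Tech. Support", hours: "2" },
];
console.log(totalsByCategory(rows));
// → totals of 5 hours for each of the two categories
```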

HTML & CSS
With a basic HTML page framework, the programmed JavaScript and D3 is able to generate the rest of the HTML and CSS to present the data visualization on the web page.


  

Position Statement: Open-Source Movement

General / 20 April 2018

In response to Eric Raymond’s The Cathedral and the Bazaar, I agree that collaboration in software development creates better software. This is not to say that open-source is the only way to achieve this because collaboration can also be implemented effectively even in a closed-source software development environment. Over the past few years, a growing number of closed-source software developers are integrating key elements from the open-source movement into their business and operational practices. Some of this is from a business standpoint to be more open and responsive to users, particularly in addressing support and development efforts. But a lot of it is a direct result of the value achieved through open-source initiatives. To me, the key is collaboration and finding a balance between the business aspects of valuable, proprietary information and maintaining an open dialogue among developers, testers, users, and others involved in the software development process.

Nevertheless, I find a lot of value in open-source projects: using them as reliable substitutes for costly alternatives, learning how to write better programs, and collaborating with like-minded people. One of the many open-source tools I use is Shotcut for video editing. Like many other open-source projects, it is based on and supported by various other open-source projects. It also includes a feedback loop in the form of a discussion forum for collaboration and communication of issues and solutions, as well as a road map to outline and prioritize the development of features for upcoming releases. All of these provide open communication among developers and users whereby everyone can contribute and be fully aware of challenges and progress, which supports Raymond’s reference to “Linus’s Law”.

I find it even more interesting to see examples of closed-source programs that effectively use strategies from the open-source movement, such as Unreal Engine. Its discussion forums provide a venue for users and developers to collaborate, and the marketplace resembles Raymond’s “bazaar,” where various plugins and other tools are made available, some free and others at cost. The developer community pushes and supports the further development of the software through its collaboration and open dialogue, all the more because many users are passionate about using it for their own endeavors. This mimics the open-source rule: “To solve an interesting problem, start by finding a problem that is interesting to you” (Raymond).

However, one of the most notable concerns revolves around security, which I believe is challenging to balance. Yes, data-mining user information hints at overstepping boundaries by surveilling users and selling their data without their knowledge. I have issues with that and, in fact, I’ve worked with a closed-source project specifically to test this parameter before deciding to implement its use. And though I’m not an information security expert, I’m familiar with the many ways in which malicious intents can find their way into open-source environments. I believe one of the strengths of closed-source projects is the ability to manage security, particularly as it’s the utmost concern for applications used by the government, military, financial institutions, and the like. In either open- or closed-source environments, the threat of security flaws exists, and there will always be entities attempting to penetrate barriers. This is definitely a challenging aspect of software development and of balancing critical elements like security.

Unreal Engine’s periodic version releases show the speed at which the project responds to the community with requested updates and features. Several software packages, Unreal Engine included, offer the option to document and transmit activity logs when a critical event occurs in the software. The user can then include this “background data” in their communication to the software developer for troubleshooting. Users feel that they have been heard and are empowered to further contribute to the community. I’ve predominantly used one of its competitors, Unity, for several projects, but it wasn’t until recently that I used Unreal Engine and its community to develop a project. I quickly realized that its growth over the past years has been extraordinarily fast, in large part due to the devoted community.

Furthermore, in an effort to give users an opportunity to expand functionality and customize as needed, several software companies allow users access to the software’s application programming interface (API). While this is not necessarily the source code, it is a managed portal that allows further development of the application by the user without having to rely on the developer or software company to build and integrate custom features into future releases. This has proven very effective in a lot of my work creating custom automation tools for closed-source software like Autodesk Revit. Plugins and features that are developed get promoted back to Autodesk and the Autodesk community for further refinement, and some eventually make their way into the core program or become marketable tools by themselves. To a certain degree, even using Blueprint within Unreal Engine is an attempt by the company to give users some freedom in customizing functionality without providing full access to the source code.

Again, despite not being open-source, the strength of such closed-source tools’ development environment arises from the open communication and effective collaboration strategies, all stemming from the open-source movement. And a lot of companies and developers have attempted to strike the balance of a market-worthy and profitable product with the “bazaar-style” development community.

Works Cited

Raymond, Eric S., The Cathedral and the Bazaar. Eric’s Random Writings. 11 September 2000, http://catb.org/esr/writings/cathedral-bazaar/cathedral-bazaar/. Accessed 03 April 2018.

Facility Management AR App & UI Design Concept

General / 15 April 2018

This proposed application is intended to provide a safe and interactive venue through which facility management in dangerous environments can be performed in real time. Though it is more applicable to industrial sites and environments with complex building systems, it could also suit medium to large commercial and residential buildings. This augmented reality (AR) interface will use a head-mounted display (HMD) like the Microsoft HoloLens, Meta 2, or Leap Motion, with user interaction driven primarily by hand movements recognized and interpreted by the HMD’s built-in cameras. Along with the clear display screen, which allows the user to see through the interface to the surrounding real environment, dedicated output panels in the interface display building system status, room information, and other relevant information, as well as opportunities for user manipulation of the building systems.

At the onset of its use, the interface with panels closed provides optimal visibility for the user while still allowing quick access to menu items. The following layout depicts an overview of the interface with panel organization:

In a typical workflow, the user approaches a room in the building and locates the physical tracker image, in the form of a QR code or nameplate, for that particular room. The interface includes a dedicated area for overlaying the tracker image, which the HMD recognizes and attributes to the room’s corresponding information. It also uses the tracker to geo-locate a visualization of building systems and controls superimposed over the real environment. The interface then displays the room’s specific information, including any maintenance checklists and building systems pertaining to that room. Room-specific information allows the user to focus on its features and not be overwhelmed with too much information at once. Since the tracker is recognized only when the user enters a room, the space on the interface for the tracker can otherwise be used as an output window for other information or as a clear view to the real environment.
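The tracker-to-room step described above can be sketched as a simple lookup: the HMD decodes the tracker to an ID, and the app resolves it to that room's record. The IDs and room data here are entirely hypothetical:

```javascript
// Hypothetical room registry keyed by tracker ID (e.g. a decoded QR code).
const rooms = new Map([
  ["QR-101", { name: "Pump Room 101", systems: ["mechanical", "plumbing"], checklist: ["inspect valves"] }],
  ["QR-102", { name: "Electrical Room 102", systems: ["electrical"], checklist: ["check breaker panel"] }],
]);

function roomForTracker(trackerId) {
  // Unrecognized tracker: return null, and the tracker panel stays free
  // as a viewport or output window.
  return rooms.get(trackerId) || null;
}

console.log(roomForTracker("QR-101").name); // → "Pump Room 101"
console.log(roomForTracker("QR-999"));      // → null
```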

Spatial coordination for the user’s own information and the documentation that may be necessary for facilities management protocol is clarified through the display of a key plan in the form of a 3D model in the interface. The locator in the key plan helps coordinate the user to his/her location relative to the building and any building systems, which can be displayed in the 3D model. While the room information is specific to a room, this 3D model visualization grants the user access to the entire facility’s infrastructure.

As a regular facility management task, the interface aims to expedite and streamline the documentation of observations and issues and the relay of information to an archive, as well as to dynamically respond to maintenance on-site. Anomalies and maintenance performance checklists are displayed in the field report area of the interface based on the requirements of the room and the selected building system. Scroll bars expand depending on the number of line items included. Photographs can be captured through the HMD’s cameras, and a microphone serves as another form of input for recording field report observations as individual line items in this section.

The system calibration section provides access to building systems applicable to the room in which the user is located. Examples of building systems include electrical, mechanical, plumbing, etc. For each building system available in the drop-down menu, different types of controls and analytics become available in the interface. The intent of these controls and graphics is to allow the user to make modifications to the building systems remotely, without having to interface with hazardous equipment on a regular basis.

Analysis of Digital Medium Characteristics

The procedural characteristic of this digital medium resides in the programmed software that organizes building information modeling (BIM) data for user access and manipulation, as well as in its ties to the site’s communication network. Input from the participant consists of the hand and finger movements and tracker images recognized by the HMD’s camera, resulting in real-time information displayed on the interface. The participatory characteristic is further supported by the user’s calibration of building systems and field report documentation through the interface. The records of documentation and system manipulations by the user are stored, relayed, and recalled through the communication network to build onto the encyclopedic characteristic. Spatial reference is provided through the tracker recognition that coordinates the user’s physical location relative to the site in a 3D model. Furthermore, the augmented reality interface displays a coordinated digital environment superimposed over the real environment.


Application Interface

This sketch is an initial pass at coordinating utilities in the interface within a limited space while maintaining visibility through the screen.

Upon detection of the tracker (QR code), room information, the checklist, available building systems, and the 3D locator become available for use. Checklist items are toggled when complete, and building systems are selected from the drop-down menu.



Field report mode allows the user to input an issue name, record voice notes, and document through photographs with the camera.

Calibrating building systems is done through the various controls available for the selected building system. Live analytics in the form of different graphs and charts assist the user in evaluating building system status.


Game Art Methods - Project Postmortem, Part II

General / 26 March 2018


Scene Concept

My original concept for this course project was quite general and went through several iterations, but the research and mood board phase gave me a good foundation and kept me on track.  Even after the first few weeks, I kept researching references online.  Pinterest turned out to be a really good tool for tracking image references, since they can be organized into categories and quickly displayed without having to download every reference image.

While the reference research went well and was very effective, I didn't sketch or conceptualize the proposed scene and assets well.  From the beginning, I could somewhat envision a scene and knew that the assets I chose would work together, but I didn't block it out early enough.  As a result, my scene didn't truly start developing until the second half of the project.  By then, I needed to revise the landscape a few times and create additional assets to complete the scene.  My goal was to stay true to the aim of the research and mood board, which I believe I kept consistent over the course of the project, but a clearer vision of the scene early on would have helped even more.  Nevertheless, the additional re-working of the different parts gave me a chance to reinforce learning and try out new strategies.


Landscape

UE4's landscape tool is really great and has the potential for a procedural setup, but World Creator was the tool I used to develop the overall landscape, height map, and splat maps (used to define the areas for different materials).  The workflow began with World Creator because of its user interface and its effectiveness in quickly making procedural edits, then syncing the height and splat maps to UE4.  The focus in World Creator was on generating a large landscape quickly; the hero area of the scene was refined within UE4 using its own landscape tools.  Overall the process was very dynamic and effective, but a few issues arose whose resolutions set up some best practices for future work.

The height map from World Creator was transferred to UE4 as a single, large map. Visually, the landscape was created correctly from the height map in UE4, but its large size became overwhelming when working only in the tiny hero area.  The better approach would be to export the height map from World Creator in pieces, with the distant areas at a lower resolution (meaning fewer sections and components), and isolate the hero area at a higher resolution.  Each of these areas would be a separate landscape element in UE4 so they could be individually modified, but together they would visually match the original large landscape.

Another reason for separate landscape elements is to work around a limitation in UE4 where each landscape component is capped at a certain number of materials.  My original landscape in World Creator implemented 8 different materials, but when applying them within UE4, the materials did not show up until I reduced them to 5.  Further research confirmed that this is a limitation of the landscape material tool in UE4, but using separate landscape elements would permit some materials to be applied to one element and other materials to others.  It would also allow the landscape elements in the distance to use lower resolution textures, dedicating the higher resolution textures to the hero area.

Tiling was really evident even with some texture edge refinement.  A good solution is to blend textures and offset them, or enlarge one of them, to blur the tiling.  This turned out to be very effective in the hero area, where the landscape textures are viewed up close.  For the landscape in the distance, tiling was still a bit noticeable.  To alleviate this, I duplicated the landscape mesh and offset it a few inches above the original.  The mesh on top was then given a material with 2 different tileable cloud textures driven by two different Panner nodes.  The cloud textures were set as opacity masks, and with the Panner nodes moving them through the coordinate system, the top landscape mesh appeared to be a layer of dust or sand moving over the contours of the original landscape below.  As a result, when viewing the landscape from a distance, the moving "sand" obscures any tiling and also enhances the scene with a dynamic element driven simply by a material.
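The Panner node's effect can be sketched outside UE4 as a time-driven UV offset. This is a simplified model that ignores the node's other inputs, written in JavaScript just to show the math:

```javascript
// Simplified model of a Panner node: shift texture coordinates by
// speed * time, wrapped into [0, 1) so a tileable texture scrolls forever.
function panUV([u, v], [speedU, speedV], time) {
  const wrap = x => x - Math.floor(x); // keeps the coordinate in [0, 1)
  return [wrap(u + speedU * time), wrap(v + speedV * time)];
}

// Two cloud layers panning at different speeds, as in the material above:
console.log(panUV([0.25, 0.5], [0.1, 0.0], 2));    // → [0.45, 0.5]
console.log(panUV([0.25, 0.5], [-0.05, 0.02], 2)); // wraps negative speeds too
```

Sampling the two opacity-mask textures at offsets that drift apart over time is what makes the "sand" layer feel non-repeating.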


Assets

The low/high poly modeling approach worked really well for several of my assets.  At times it was a tedious process: coordinating the UV layouts, confirming Maya's Freeze Transformations, and then the balancing act of gradually working my way through intense subdivisions for the high poly models. But in the end, I think the workflow allowed me to optimize the low poly versions while still preserving the high poly detail.

One of the challenges was figuring out how to separate the meshes within the assets and assign them to compact UV layouts.  This is something I aim to keep refining, particularly how to balance UV layouts and their resolution.  In some assets, I attempted to cram quite a bit into a single layout, which required high resolution images, whereas assets with only a single mesh could use a lower resolution.  Basically, I need to gain a better understanding of how draw calls occur as assets are rendered, which should guide me to better optimization between assets and their textures.

Building assets started out slow as part of the learning process, but after a few, the optimization routine became second nature.  Each asset's modeling is unique, but the cleanup and validation process has become consistent.  The part that really stands out to me as a critical tool is the Mesh Cleanup tool.  For later assets, I started using it a few times throughout the modeling process to address issues as they arose, rather than waiting until the end.  I would still use it at the end, but by then only a few minor edits remained, which is better than trying to retopologize the entire asset.  This process has really taught me to model better.

The high poly modeling can get really intense with a lot of subdivisions, so it needs to be gradual and strategic so as not to overwhelm one's computer.  But once the model gets to a point where the detail work can be sculpted, it's pretty fluid.  I highly recommend using a stylus and tablet for sculpting in either Mudbox or ZBrush; once you get into the details, some sculpting actions simply do not feel natural with a mouse compared to the hand movement of a stylus.


Textures

Substance Painter is such an amazing and intuitive tool.  The parametric controls and layer organization allow for quick edits and testing out scenarios without committing to a certain direction.  A stylus and tablet are really effective here too because of the opportunity to draw by hand.  Another part of this process that I found to be strong was the maps generated from baking the high poly model onto the low poly model.  These various maps, such as normal and ambient occlusion, become the maps that can do the painting for you.  For example, trying to paint each groove in a surface by hand may not appear natural, but using combinations of the maps to mask the effect utilizes the actual sculpted information from the high poly model to define these areas.  And since it relies on geometry to translate reactions to light, it's that much more accurate and realistic.  Furthermore, if an edit is made to the high poly model, the baking process generates an updated set of maps that quickly update the masks, rather than having to manually erase and paint the details again.
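The masking idea can be sketched numerically: the baked map value acts as the blend weight between two layers. The colors and values here are made up purely for illustration:

```javascript
// Sketch of a mask-driven blend: a baked map value (e.g. ambient occlusion)
// mixes a base layer toward a detail layer. A mask of 0 keeps the base;
// 1 applies the full detail (a per-channel linear mix).
function maskBlend(base, detail, mask) {
  return base.map((b, i) => b * (1 - mask) + detail[i] * mask);
}

const rust  = [120, 60, 30];   // hypothetical "grime" layer (RGB)
const metal = [180, 180, 190]; // hypothetical base layer (RGB)
console.log(maskBlend(metal, rust, 0));   // → [180, 180, 190] (no occlusion)
console.log(maskBlend(metal, rust, 1));   // → [120, 60, 30]   (full occlusion)
console.log(maskBlend(metal, rust, 0.5)); // → [150, 120, 110]
```

Because the mask comes from the baked geometry rather than hand painting, the grime lands exactly where the sculpted detail says it should.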

Even though I feel like I learned quite a bit of Painter, I know there's so much more to learn.  Another aspect of textures that I need to continue refining is their resolution.  Part of me was skeptical that a lower-resolution version of a texture would suffice for the scene, so on many occasions I ended up using higher-resolution textures.  A lot of this goes back to balancing the UV layouts with the size of the mesh geometry so that the corresponding texture is appropriately sized and doesn't waste memory or hurt performance.
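That balance of UV layout, mesh size, and texture resolution can be reasoned about with a texel-density budget.  Here is a minimal sketch of the idea in Python; the function name and the 512 px/m target are my own assumptions for illustration, not a UE4 or Painter API:

```python
# Hypothetical texel-density check: given a mesh's world size and the
# fraction of UV space its shell occupies, estimate the texture
# resolution needed to hit a target texels-per-meter budget.
import math

def required_texture_size(mesh_size_m, uv_coverage, target_px_per_m=512):
    """Smallest power-of-two texture meeting the texel-density target."""
    # Pixels the mesh needs along one axis at the target density.
    needed_px = mesh_size_m * target_px_per_m
    # The UV shell only covers part of the texture, so scale up.
    full_px = needed_px / uv_coverage
    # Round up to the next power of two (64 minimum).
    return max(64, 2 ** math.ceil(math.log2(full_px)))

# A 2 m asset whose UV shell spans 80% of the 0-1 range:
print(required_texture_size(2.0, 0.8))   # 2048
```

Running this kind of estimate per asset makes it easier to justify dropping to a lower-resolution texture instead of defaulting to the largest one.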

A particular concern of mine was the foliage.  Most of the other textures were created through the baking process, but the foliage textures came from photographs, and the cleanup of their edges needed more attention.  From a distance, the alpha channels delineate the edges well, but up close, the unrefined edges and color bleed become evident.  My aim is to investigate strategies to improve this cleanup process, as well as to put more effort into modeling original foliage rather than relying on photographs.


Materials

In UE4, use master materials with material instances spawned from them to expedite updates.  Even though I attempted to optimize the number of assets in the scene, there were still quite a few materials applied throughout, and updating each one individually would be very time-consuming.  Instead, updating a master material and letting the change propagate to its instances was a very effective strategy.  I organized the master materials into a handful of groups, such as structures, foliage, and landscape.  All of the foliage shared the same material setup with translucency, whereas structures were opaque.  I basically looked for aspects of materials that were consistent and could be grouped together.
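The master/instance relationship boils down to instances overriding only the parameters that differ, so a master edit reaches every instance.  This is a conceptual sketch in Python, not the UE4 API; class and parameter names are invented for illustration:

```python
# Conceptual model of master materials and material instances:
# the master holds default parameters, instances override only what
# differs, and resolving an instance layers overrides on defaults.
class MasterMaterial:
    def __init__(self, **defaults):
        self.params = defaults

class MaterialInstance:
    def __init__(self, master, **overrides):
        self.master = master
        self.overrides = overrides

    def resolve(self):
        # Master defaults first, instance overrides on top.
        return {**self.master.params, **self.overrides}

foliage_master = MasterMaterial(two_sided=True, roughness=0.8)
grass = MaterialInstance(foliage_master, base_color="grass_atlas")

foliage_master.params["roughness"] = 0.6   # edit the master once...
print(grass.resolve()["roughness"])        # ...instances pick it up: 0.6
```

This is exactly why one edit to a foliage master can retune every plant in the scene at once.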

Also, developing and using Substance materials with the Substance plugin in UE4 was great because I could edit the Substance instance, which would then propagate the updates to each of the corresponding textures.  Even materials developed through Substance B2M allowed for these types of edits directly within UE4.


Exploration & Experimentation

After the initial setup of the scene with assets and materials, the scene needed enhancement, so I explored tutorials and guides and found a few that would help convey the story I was presenting.  Experimenting with UE4's particle system and unique material nodes exposed their potential for adding dynamics to a static environment.  The scene by itself is static, and while the camera movements add some dynamics, there are ways to add subtle movement to a static scene without integrating full animations or characters.

I mentioned the first under the Landscape section above, where the Panner node drives the cloud texture to make it appear like sand moving over the desert.  I also brought water into the scene to complement the light brown color throughout.  Another strategy is to use the SimpleGrassWind node to give foliage materials some movement in response to wind.  While it works well on its own, the node affects the entire mesh it's applied to, so the base of a plant or branch also moves.  Later in the project I found a tutorial explaining how vertex painting can mark which vertices in the mesh stay still while the rest of the mesh responds to the SimpleGrassWind node.  I definitely aim to test this strategy to enhance the realism of the foliage, namely the grass.  In close-up views, there are instances where the grass roots show movement when they should be still.
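Under the hood, the Panner effect is just a time-based UV offset wrapped back into the 0-1 tile.  A rough Python sketch of that math (the speed values are made up for illustration):

```python
# Rough sketch of what a panning node computes: offset the UV
# coordinate by speed * time, wrapping into the 0-1 range so the
# texture tiles seamlessly as it scrolls.
def pan_uv(u, v, speed_u, speed_v, time_s):
    """Offset a UV coordinate over time, wrapping into the 0-1 tile."""
    return ((u + speed_u * time_s) % 1.0,
            (v + speed_v * time_s) % 1.0)

# A cloud texture drifting along U reads as wind-blown sand:
print(pan_uv(0.5, 0.5, 0.25, 0.0, 1.0))   # (0.75, 0.5)
```

Keeping the speed small is what sells the effect as slow-moving sand rather than an obviously scrolling texture.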

In another exploration, I discovered the utility of the Camera Rig Rail and Camera Rig Crane for unique camera cuts.  Originally, my plan was to use a single camera to pan across the scene, but in learning about UE4's Cinematics, I was able to take advantage of these rigs to guide the viewer through the scene in a specific way.  Ultimately, exploration and experimentation played an important role in my learning process, and I plan to continue them.


Lighting Build Process

The lighting build process is very resource intensive and depends on the optimization strategies mentioned earlier, including mesh geometry and texture resolutions.  During the first major build of the scene, I kept the Windows Task Manager open to track progress.  The CPU was at 100%, so I made sure to close all other programs before subsequent builds.  The build appeared to stall halfway through, but fortunately, it completed without issue.  To help optimize this process, I used a Lightmass Importance Volume to contain the hero area and focus the detailed indirect lighting there.


Rendering

There are several parameters in UE4 that refine the clarity and resolution of the viewport, but of those I tested, the most noticeable were the number of mipmaps in each texture and the project's anti-aliasing setting.  Lowering the number of mipmaps forced textures to render at a higher resolution, but at the cost of longer builds.  For anti-aliasing, Temporal AA proved to be the best setting for both still images and animations.  The other anti-aliasing settings did create crisp edges on the assets' materials, but the edges became so sharp that dense textures looked granulated, sometimes even appearing to glitter.
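To see why trimming mip levels trades sharpness against cost, it helps to remember that each mip halves the resolution of the one above it.  A small sketch (values are general mipmapping math, not a UE4 setting):

```python
# Sketch of a texture's mip chain: each level halves the resolution,
# so the number of retained levels caps how soft the texture can get
# at distance, while the full chain costs only ~1/3 extra memory.
def mip_chain(base_size):
    """Return the per-level sizes from base_size down to 1."""
    sizes = []
    size = base_size
    while size >= 1:
        sizes.append(size)
        size //= 2
    return sizes

print(mip_chain(2048)[:4])   # [2048, 1024, 512, 256]

# Memory cost of the full chain converges to ~1.33x the base level:
total_px = sum(s * s for s in mip_chain(2048))
print(round(total_px / 2048**2, 2))   # 1.33
```

That 1.33x figure is why keeping the full chain is usually cheap, while forcing higher-resolution mips everywhere is what drives the longer builds.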

Game Art Methods - Project Postmortem, Part I

General / 25 March 2018


RETROSPECT & FUTURE PLANS

First and foremost, I definitely aim to continue working on my skills in all of the tools and processes used in this project to further optimize the workflow.  I might take a short break from this particular project to refresh my mind and approach it again soon with a new perspective.  Some thoughts for future work include expanding the scene's hero area and working my way into visual effects and characters to further enhance the scene.  I think this scene could be the start of a very interesting story or game.  It's very gratifying to envision a scene and then see it come to life.  A lot of my previous and current work is in virtual and augmented reality, so I'm already thinking of how to turn this scene into an environment that I can virtually walk through and interact with.  Each part of the process was captivating in its own way, particularly because it allowed me to simply create.  I have a newfound affinity for modeling and sculpting assets of all kinds, with a strong emphasis on optimizing them.  The foliage assets were probably the most challenging, yet I really want to get better at them because they're such a strong part of many scenes.


PROJECT MANAGEMENT

Throughout this experience, I found a few aspects to be the most critical to focus on first because they create the foundation for the infrastructure of files and processes.  I thought I had started the course project with a good organization system and thought process, but as I learned new tools and processes, that system needed to be refined.  So by the end, it was a bit messy, but it became a good template and learning experience for setting up correctly in the future.


Optimization

The various tools used in this project were significantly impacted by computer hardware.  Some are GPU intensive, others are CPU intensive, and still others are both.  Keep Windows Task Manager open somewhere on the desktop to see which software is consuming resources.  Manage multitasking so resources are dedicated to specific tasks rather than having several resource-intensive programs open at the same time.

Additionally, if the computer's fans start sounding like a jet engine, it's a good sign that the machine is running a pretty intensive process somewhere and should be monitored.  I was fortunate that my computer never fatally crashed, but there were instances where it sounded and looked like it was stalling in the middle of an intensive process.  Save any progress before initiating such processes, and test small files first, gradually increasing as necessary.  With this in mind, it became really important to keep model and texture sizes in check for an optimal workflow.


Organization

Moving content from one piece of software to another requires a clear file and folder system.  It can easily become complicated, even for one's own work.  There were a few times at the beginning when I had to open several files just to remember which one had the information I needed.  As such, follow a consistent and clear file-naming convention, keep file locations stable, avoid duplicate file locations, and archive with a version-control system.  In its simplest form, my version control meant appending dates to file names when archiving.
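That dated-archive convention is simple enough to automate.  A minimal sketch, assuming an underscore-plus-ISO-date format (the function name and format are my own, not the author's exact convention):

```python
# Hypothetical dated-archive naming helper: append an ISO date to the
# file's stem so archived versions sort chronologically by name.
from datetime import date
from pathlib import Path

def archive_name(path, when=None):
    """hero_tower.ma -> hero_tower_2018-03-25.ma"""
    when = when or date.today()
    p = Path(path)
    return f"{p.stem}_{when.isoformat()}{p.suffix}"

print(archive_name("hero_tower.ma", date(2018, 3, 25)))
# hero_tower_2018-03-25.ma
```

Using ISO dates (YYYY-MM-DD) means an alphabetical file listing doubles as a version history.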

Redundant backups were at the top of my priority list from the beginning, since a previous experience taught me the value of such resources.  I used Microsoft OneDrive with its automated cloud backup along with an external hard drive running its own automated backup.  The key is for them to be automated so backup isn't something one has to think about; it just happens as work progresses.  At the same time, periodically check that they are processing correctly.  I encountered an instance where my file and folder system extended beyond the character limits of one of the backup systems and the files couldn't be backed up.
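That path-length failure can be caught proactively.  A quick audit sketch; the 260-character default is the classic Windows MAX_PATH, but the real limit depends on the backup service:

```python
# Flag any file whose full path exceeds a given length limit, so
# backup services with path-length restrictions don't silently skip it.
import os

def too_long_paths(root, limit=260):
    """Return full paths under root that exceed the length limit."""
    flagged = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            full = os.path.join(dirpath, name)
            if len(full) > limit:
                flagged.append(full)
    return flagged

# e.g. too_long_paths("ProjectRoot") -> list of offending file paths
```

Running something like this periodically would have surfaced my backup gap before any files were at risk.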

Maya's project folder structure is a good starting template.  It's very capable of organizing working models, exported models, textures, and other supporting documentation all in one place.  I had one Maya project folder structure for each type of asset, such as hero, foliage, and architectural assets.  Subfolders within each compartmentalized each individual asset with its textures and supporting files.  From Maya's folder structure, the content is linked to the Unreal Engine 4 (UE4) project folder structure.  UE4 allows content to be refreshed from its source location, which is a quick way to keep UE4 content updated after making edits outside of UE4.  UE4's folder structure also needs to be set up to isolate assets, materials, textures, particle effects, cameras, lighting, etc.
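Setting up that per-asset compartmentalization by hand gets tedious, so it can be scripted.  A sketch with illustrative subfolder names (these are not Maya's exact project defaults):

```python
# Hypothetical per-asset folder template in the spirit of a Maya
# project structure; subfolder names here are illustrative only.
from pathlib import Path

SUBFOLDERS = ["scenes", "export", "sourceimages", "reference"]

def make_asset_dirs(project_root, asset_name):
    """Create the standard subfolders for one asset; return the paths."""
    created = []
    for sub in SUBFOLDERS:
        d = Path(project_root) / asset_name / sub
        d.mkdir(parents=True, exist_ok=True)
        created.append(d)
    return created

# make_asset_dirs("FoliageAssets", "desert_grass") creates
# FoliageAssets/desert_grass/{scenes, export, sourceimages, reference}
```

Seeding every new asset from the same template is what keeps the later Maya-to-UE4 linking predictable.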


Streamline Workflow

Consistent and clear organization streamlines the workflow, particularly one that involves several tools and files.  In a quick glimpse, my workflow entailed:

  1. Maya (low-poly models, plus initial subdivision toward high-poly models)
  2. Mudbox & ZBrush (High-poly model refinement)
  3. Substance Painter (High-poly to low-poly baking and texture development)
  4. UE4 (full integration and rendering)

Additional software used in the process included:

  • World Creator (Landscape height map)
  • Affinity Photo & Substance B2M (texture development)
  • Marmoset Toolbag (test rendering and model sheet presentation)


Continuing Education

Interacting with others in the course and seeing their processes made me realize that there are many approaches to production pipelines and tools.  One could effectively accomplish the same tasks with half the tools I explored, but my incentive to learn new industry tools was to evaluate their effectiveness in optimizing my production.  I tried a few early on, like xNormal and KeyShot, which were effective, but eventually I pursued other tools that integrated better with my processes.

Learning the tools themselves wasn't too bad because my focus was on learning the process, which typically translates among comparable tools.  For example, ZBrush and Mudbox are largely comparable, yet I found Mudbox's user interface better suited for quicker modeling with stencils and stamps.  However, my experience with ZBrush has shown it to be more effective as a custom sculpting tool because of its various brushes and the way it responds to sculpting with a tablet.  Ultimately, I see learning as a constant part of my workflow, even more so because this industry is so diverse in its tools and production pipelines.  The lesson for me is that focusing on the process for a task, rather than the tool itself, allows one to quickly pick up any tool.


Documentation

Complex tools and processes require constant documentation, particularly for a novice.  Throughout the project, some tasks became repetitive and easily retained, whereas others were performed less frequently, and it's these that truly need to be documented.  Take notes, record short videos, and save links to sources for future reference.  At least for me, relearning tools and processes is an accepted part of the workflow, but my documentation makes it easier and quicker to recollect them without researching from scratch.  The act of documenting also reinforces the material, so it's a bit like repeating the task as part of learning.


Postmortem continued in Part 2...