
Code Ramblings
Experience in the Fire


11/10/12: Catmull-Clark Subdivision

Fun.: Some Nights



12/13/10: Conflict Management

There is a lot to be said about this subject and I will probably update this entry a couple of times before calling it a day. There are many different methods that people advocate for dealing with conflict, some of which I will discuss later for comparison. My method is fairly simple. The first step is abstraction - listen to the person who has the problem, but prevent yourself from creating stories or rationalizations around what they are saying. Simply listen to the content of the message - ask clarifying questions as necessary. Avoid assumptions (this, again, is part of "do not make up your own story"). Continue to ask questions until you have the whole picture. Be open to criticism - you only grow and develop when you can discover weaknesses and improve the things that you have been doing wrong. Even if the issue is not about you personally, give yourself time to reflect on what the person has expressed - and make sure they understand that you will think on what has been said. Always try to remain objective about the matter - the more emotional you are, the more likely it is that you are being unfair to the other person. Be open and honest.

I am not trying to say that the above is easy to do, or that it does not take a lot of practice. Of course it is not easy, or there would be much less conflict in the world. But it is a skill that can be gained through focus and determination. Whenever you have been in a conflict, or a difference of opinion (same thing, different nomenclature), make sure to review your actions, the impact on the other person or people, and the results. It is always possible to learn something, and keep in mind that "winning" a conflict is sometimes a long-term loss - specifically if you do not see the negative impact it has on the other people involved. Watching for impact can be hard to do, and more so if the people are non-communicative. This is also a skill that takes time, effort and practice. Remember that in the long term, your ability to collaborate and use these soft skills will dominate your individual technical skill. They are just as important as learning new methods and algorithms.

Take Away: There is no simple way to manage conflict. Try your best to look at it cleanly, and remain open and honest. Always create an environment where others can feel safe in discussing any subject, free of reprisals or repercussions.

Rankin Family: Fare Thee Well



12/06/10: Recruiters

I find recruiters to be interesting. I have heard most of them describe themselves to me as external contract HR workers for multiple companies. I find that hard to reconcile with the fact that they are using their client (us) to make their wages. Is their most important job filling an opening at a company, or finding the right company for their client? Is it their job to sell a position to the job seeker, or to sell the job seeker to the company? I am not sure there is a good answer here, but it does make dealing with recruiters a difficult proposition. Having worked with them to some degree on both sides of the fence, I was never happy with their position in either case. As an employer I started to ignore entire batches of people because of the person representing them. I found the quality of people the recruiter was generally presenting to us to be sub-par and assumed that would continue to be the case in the future. As a recruiter's client I have had cases where jobs were pushed on me that I did not want (and that was made clear), and multiple cases where the recruiter did nothing during the entire interview process. I guess I feel that if you are going to get a large percentage of my salary as a recruitment fee, then simple pre-interview work should be done to help the client: information on the company's general interview practices, expectations, etc. I am more than happy to do this myself, but then I have to wonder why I am bothering to take the time to work through a recruiter in the first place.

Take Away: I think recruiters should be more involved with marketing their clients than with acting as an external HR department. I think they need to work with a more select set of individuals and stop the practice of simply shotgunning as many people as possible to as many studios as possible.

Don Henley: Actual Miles



11/29/10: Illusion of Not Enough Time

A small break from my list of four topics, but this is one of the most amusing statements I have heard, and I have heard it used repeatedly while working in games. Every time a manager has used this phrase on me, it has taken more effort to prevent myself from exploding into laughter. It's almost always used to explain why action will not be taken on something that the person in charge, at heart, does not want done in the first place. The other possibility is that a real time pressure is making it feel like there really is no time to work on something. However, in most cases where I've heard it used, the result is that more time is wasted trying to be "fast" and avoid the proposed work than if we had simply moved forward with the original request.

The most common example I have heard of this phrase being used as a defense is to explain why a particular process is not being followed. "There is no time" to follow XX process, and we have to just move forward and get the work done. If that phrase didn't remind you of a meeting that made you cringe, then you are a very lucky developer. I've been in the room so many times for this statement that I don't even flinch any more. My only reaction is to make sure that my schedule is clear for the inevitable crash-and-burn on the first real deliverable that uses the results of the meeting. (I suppose I should be taking vacation time instead, but I have always had a strong feeling of responsibility for work in which I am involved.) It is important to always pace yourself and not fall into the trap of "just getting it done". Process was created to prevent unneeded work and, more importantly, wasted work. This is even more important when there is time pressure, or resources that are in high demand will be spent uselessly.

Take Away: If you find yourself thinking that there is "not enough time" - stop, think about why this defense is coming to mind and fix (do not avoid) the problem.

Taylor Swift: Speak Now



11/22/10: Studio Management

In my last blog posting I talked about some of the issues with the explosive growth in the game industry. Now I want to talk about how to manage this growth. There is no hard and fast rule for how companies organize studios and create management teams. The same title at different companies can have very different responsibilities and requirements. However, I have found a recurring problem that comes from two different directions but from the same driver. There seems to be a need to have a central authority figure for all decisions in controlling a department, and in almost all cases this is the manager. In most cases the needs of a development team are in direct conflict with this prevalent attitude, which is usually defended by asserting that the managers are the only people who understand the whole scope of the problem.

I think that in most cases problems should be resolved at the technical level. The current model forces technically competent people into manager roles so that they can help direct the project. However, the overlap between technical capability and management capability is not an easy intersection, and at heart it is not really needed. If it were accepted that critical decisions should be made by leads, and that they should be part of the driving process, then many of these problems could be resolved. The core of the matter is whether the leads work for the managers or the managers work for the leads, and - in almost all cases that I've seen - whether these two roles are independent positions or are held by the same person. I have found that it's best for the department managers to be separate people who work for the leads. This way schedules can continue to be maintained, task dependencies tracked, collaboration with other teams fostered, and general HR handled - all kept off the plate of the technical lead, allowing them to do their own set of tasks. I personally feel that leads should spend a large amount of their time working with and mentoring the people that are part of the team. It is my strongest opinion that department leads need to have two traits. First, lead by example (monkey see, monkey do syndrome); they should never ask people to do something they are not willing to do themselves. Second, I think that they need to see the role as the central link of collaboration for the department. Their primary drive should be to work with those in the department, helping and/or mentoring when necessary. The other aspect of their job is to set up the process and framework that empower the individuals in the department to excel and to achieve beyond their own potential. I would suggest a work ratio of 20% framework, 20% collaboration, 40% mentoring and 20% project driving and development. You will notice this is a complete slate of work that keeps a person very busy and productive, but leaves little time for the extra management tasks that I attributed to the manager role. If we were to aggregate these roles then some, if not most, of this work would never be done. This is one of the drivers for the disenfranchisement that I discussed earlier. The lead/manager simply does not have the time to spend working with the people in their department or feature teams.

Take Away: Decision power within a department should rest in a department lead whose responsibility is to lead by example, and participate in the work being done by the department. A separate manager should report to the lead, whose job is to maintain schedules, monitor dependencies and keep communication at a high level within the team and between teams and departments.

Owl City: Ocean Eyes



11/15/10: Investment

As it turns out, as I was putting together the subjects I wanted to talk about for the next few weeks, a theme became obvious. I am going to spend the next few weeks talking about the game industry as an industry and a place of employment. The four main talking points will be: keeping a team invested in their product and the studio, practical approaches for studio management and its impact on the decision path, the impact of recruiters with their particular bias, and finally a discussion on conflict management. Most of this stuff is not new and has most definitely been discussed in many forums, but since a blog is about talking from your own point of view I figured they were good subjects to cover. The thoughts are fresh in my head as I have recently taken a few Microsoft courses on related material, which has at least been informative and in some cases also instructive. The refresher in thinking about these issues is what led me to this particular sequence of blog topics.

The game industry stumbles and suffers from recurring issues at both the macro and the micro level. I look at it the same way I look at fractals. We have the same equation creating the same pattern at the macro level within the game industry as a whole and at the micro level of individual studios. This equation has to do with the issues caused by the explosive growth of the industry and of project teams, and the equally explosive requirements in terms of data to produce a game. Early developers (or current mobile developers) can make a game with fewer than twelve people. This makes communication easy and responsibility enforceable through social mechanics. Everybody is in it together and there is a real sense of investment by a team because of the direct connection between work and product. This connection and investment is easy to erode, or can in fact be completely absent, as team size grows. More detrimental is the separation and distancing between the developer and the product. People tend to outperform and produce when they have a direct and visceral connection to the work - when they are capable of personalizing it. With larger teams this has been harder to achieve and the attitude of the work being only a "job" has become the prevalent mentality.

Combating this attitude is not simple or easy. More importantly, there is no real solution or fix to the issue. I personally think that we have to admit that there is an element of this change that will be inevitable. Large team sizes will mean that some people will be more distanced from the centre of the project. The key here is to make them feel invested in the team and in their department's part of the project. This is the responsibility of the team leader and the feature leader, making the choices for these people so very critical. We have to make sure that the people who are leading (not necessarily managing) the group and project are capable people with sufficient soft skills to inspire trust, loyalty and dedication. Proper positive feedback needs to be part of the work cycle at the company. Another way to increase investment is to group people together by way of feature development as opposed to the department level. In this way each cell can have a fairly short development cycle and feel a sense of accomplishment. This will also increase their ability to collaborate between fields and departments. However, this needs to be done carefully, as it is only seen as a positive by certain personality types. Others will see this method as creating a disjointed and fragmented view of the project and as increasing the risk of significant integration issues (all true - just things that need to be controlled and managed).

Driving investment through division of labour, either by feature or by department, alone will not prevent the separation between worker and project. The other key element is communication. People should be getting communication about the state of the project (honest assessment), future plans and how things are interacting. I would highly suggest public, full-team viewings of the project during development - specifically the major builds that are put together for the publishing company (prototype, vertical slice, etc.). These should be celebrated and not ignored - more importantly, this will help highlight the importance of the deliverable to the team outside of the contractual responsibility. Also, there is a pressure generated by knowing that the work will be presented to a collection of your peers. If the company has multiple teams, the entire company should be present at these showings - rent out a movie screen and give a state-of-the-project presentation and showing. Weekly update emails, created from a combination of department lead and manager information, should also be part of this communication process. Whatever method is chosen (web site, wiki, emails), the key thing is to maintain it and keep the process going no matter what else is happening at the company or on the project. It is easiest to let things like this slide precisely when they are most important, because of time pressure.

Take Away: Explosive growth in the game industry has created macro and micro problems. At the micro level, people have been disenfranchised from their products. This is partly inevitable. I suggested re-investing some people through feature team compartmentalization, through departmental focus and development and through regular communication patterns.

Chess in Concert: Live from Royal Albert Hall



11/08/10: Movie and Game Comparison

The game industry is relatively new and is still trying to establish a professional working model that works for both employer and employee in the modern working environment. It uses a studio model similar to the movie industry's, but has so far (for the most part) eschewed the feast-and-famine approach of that industry. Game companies tend to hire people full time and not just on a contract or per-project basis - with only limited hiring flux outside of QA teams during development. Thus, in many ways the comparison between the two is not very fruitful; but given that they are both media (entertainment) industries that developed in the last century, we do find enough fruit on the tree to make comparing and contrasting them an interesting exercise.

Feast and famine is a term that I will use often when discussing the movie and game industries. Popularity of a game is a self-feeding growth mechanism. The more popular a game is - and thus the more people are playing it - the more likely they are to get their friends to buy the game and join them in playing it as well. This is specifically true for multiplayer games. The best example of this phenomenon is the music games. At their height they were a billion-dollar industry (for about a year) and this year they will probably not even break four hundred million. The genre was able to reach such heights because of the codependent relationship in multiplayer games and the need for each player to have their own copy. There is a critical mass where the relationship becomes like a pyramid scheme: more people are convinced to make the purchase because of the number of people who already have a copy of the game. We get large numbers of sales. However, if you do not reach critical mass then you will have orders of magnitude fewer unit sales. In this case you will often struggle just to break even on the development, marketing and manufacturing costs. This is very similar in mechanism to the movie industry, which sees only a few of its movies go on to be blockbusters. There is a difference in scope, though the difference is decreasing with time. A blockbuster movie can cost somewhere between $50-150 million. A corresponding game will be in the range of $15-25 million. Thus, there is a lot more money to lose if a movie flops at the theatre. However, movies have multiple sources of revenue, and even if a movie fails at the screen there is a good chance it will break even from video, pay-per-view and direct-to-view income. Game companies really only get one try, as there is no alternate revenue stream for game software that pays the publisher (there is of course the secondary game market, but none of that money is seen by the publisher or developer of the game). The risk-reward comes out to be fairly close because of this difference, and it is one way in which the two industries are the same.

Another interesting similarity between the two industries is their dependency on the creative capability of a few people within the project to create the framework for the entire product. A movie is largely propelled by the director. In the game industry that responsibility can be a little more diffuse, but the dependency on those people is the same as the one the movie industry has on the director to drive the project forward. The difference is that the movie industry has used this dependency as part of their marketing. The game industry has largely tried to hide the responsibility of individual people towards the project. The exception tends to be people who either have large shares in the company or have been in the industry for so long that it is hard to hide their presence. Game journalism will talk about and interview the key people on a project, but there is little public recognition of these people. The reasoning behind that is purely business, as companies obviously want their name attached to an IP and not the name of a person who could leave the company and join the competition. There is sense to this, but it is something that clearly marks a difference between the two industries. The movie industry practically celebrates the contributions of the individual publicly and uses them as part of the public presence of the product, while the game industry does its best to hide individual contributions.

There are obviously many more differences, but I thought these were a good couple of samples. I will probably talk about it some more in the future.

Joey + Rory: The Life of A Song



11/01/10: Movie and Game Collaboration

Interaction between the movie and game industries comes in three ways: movie companies that try to establish gaming studios, movie auteurs who become involved in game projects, and game companies that try to establish effects / movie-related studios. The first has been met with arguable successes a few times but also with definitive failures. LucasArts is an obvious example of the arguable success. The games they made in their early history were significant and made large contributions to the growing gaming industry. (My favorites are Day of the Tentacle and Full Throttle.) However, past those golden years (after they restructured and let go most of their old 2D / adventure staff) things have been increasingly volatile at the company. They go through regular waves of growth and shrinkage - the most recent this month. The games are still well received, but at the same time the company has become a point of concern for people working in the industry due to the cyclical pattern of their studio employment. While LucasArts has been a commercial success, I would argue that due to their method of employment, we have to temper that success with concern about it as a place of stable employment. I have heard of other attempts, but since I have not heard of shipping titles, I am assuming that things did not go well for the most part. Digital Domain started a gaming studio and I have not heard anything from them in some time. To my knowledge, DreamWorks has been involved in some projects but only as related to their own IP. Disney at least is a player at the financial level (investing in various larger groups) but directly as a corporation has not made much traction outside of their own IP (and arguably other people have done much better with that IP than Disney themselves). Overall I think the problem is that the expectations are mismanaged and there is a belief by these movie studios that there should be a significant amount of skill crossover. However, in terms of both creative and technical development, there is little crossover in skill set. The result is tumultuous and generally leads to projects being terminated. I think there is potential there to be tapped, but care has to be taken to manage the expectations, understand the limited skill-set crossover, and determine if there really are sufficient business reasons for such a lateral (and significant) shift in growth direction.

Most of the time when movie personas are involved in a gaming project it is primarily for marketing reasons. When the involvement goes deeper than that, the skill-set crossover and expectations generally lead to problems. I am not surprised that EA had to cancel the Spielberg game recently. I have great respect for his movies, but without firm experience in the gaming industry it is hard to set and manage expectations for what can be done in this medium. I have had friends work with known authors for game scripts, and other than providing a back history they usually do not have the background for generating the massive quantity of text required for interactive fiction. Specifically, novelists are much more used to controlling the level of detail, pacing and world description, which is hard to do when the player is such an integral part of controlling the experience. There is definitely a different skill set required for creating game content. On the other side, I have worked on teams that worked out very well with outside talent. On one of my projects we received a lot of very useful art direction feedback and information throughout the development of the project, which allowed us to make a high-quality game. I guess my feeling on the subject is that there is a lot of possibility for such collaborations, but they need to remain collaborations. People coming in from outside of the industry should be looking to work with and through people with that experience rather than directly on the project itself.

The third way that the two industries collide is when game studios start working as effects houses. This is a more recent development, pursued most actively by Ubisoft. They were very vocal about their contributions to the Avatar movie and have established a plan to have their new Toronto studio do cross-blended development. I think this will prove to be interesting, but I have no real feedback on this right now.

Take Away: There is strong potential for collaboration between the movie and game industries. These relationships need to be managed well, with a focus on controlling expectations. The current track record for these relationships has been fairly negative but should be improving. We are starting to see game studios working in the movie industry, as opposed to the previously more common practice of the movie industry working in the game industry.

Idina Menzel: Still I Can't Be Still



10/25/10: Apple Keynote

I was watching the recent Apple Keynote (Oct 10, 2010) and I was rather surprised by some of the numbers. I have to admit that I have not been paying that much attention to the market, or to Apple hardware in general, but I was astonished at their current market share compared to their historical trend back in the days when I did pay attention (1980-2000, basically when I had the time and was not working). Personally, I think one of the reasons that Apple hardware is doing so well in the current market is normally overlooked. Previously, the closed Apple platforms were a major issue because the rate of change in the computer industry was so very high. I won't speak for anyone else, but it has been some number of years since I have had to upgrade any of my computers to support the new version of productivity software like I did in the "old" days. We have reached a point in hardware where the inability to upgrade (change out the motherboard or plunk in a new CPU) is not a major problem. For that matter, with the way that CPUs and memory are so connected now, it's almost impossible to really upgrade a computer without replacing all the major components. All current-generation video cards are more than capable of handling the UI requirements for the OS and normal software - so that is a non-issue. This is the time and place that Apple computers are so well suited for - when people are looking for stability and security in their computer decisions rather than some ephemeral ability to upgrade that would have marginal or no impact on their actual use of the computer. The other reasons that people talk about are well documented but I will mention them quickly: customer service has been great and deserves its reputation, a large part of their increased market share is due to their strong portable offerings, and of course many people have been exposed to the platform because of the raving success of the iPod and iPhone. Many people have slowly moved over to the Apple ecosystem. Myself, this is being typed out on a 27" iMac, and I am/was a dedicated DIYer when it came to my computers. It's still how I put together my PCs. I do retain a PC for my development (Xcode sucks hard, makes my skin crawl really), and because it's the environment in which I am most comfortable working. But I find that the Mac environment is not the wasteland that I thought it was back in the 90s. My first Mac was a Mac Mini server and that thing is a great piece of hardware that I use as my Perforce server and iTunes server. Power consumption on it is great, it's quiet - I pretty much loved the thing when I plugged it in. There is something to be said for hardware-software integration and a company that works in such a connected way.

Kenny Chesney - Hemingway's Whiskey



10/18/10: CMake

It has taken me a long time, but I have moved over to using CMake as a pseudo-build step. The basic premise of the system is that it will generate platform-specific development files for the particular flavor of development environment available on that platform. For instance, it will generate Xcode projects when I am working on the Mac or for the iPhone, but it will also generate Visual Studio files for when I am working on the PC. However, for obvious reasons, it must deal with many of the issues that come with having to support only the lowest common denominator in terms of basic configurations. Specifically, if you go this route, multi-platform solutions are not supported. So you will not be able to have a single solution for both your x64 and x86 code when working with Visual Studio. I decided that this was acceptable once I started having to work between Visual Studio and Xcode as part of my weekly process to validate code across both platforms and different compilers. I could try to maintain two sets of projects, but this starts to become an incremental nightmare. For instance, one of the best ways to develop SPU code is by running Linux on the PS3 (I have retained a single PS3 retail kit at the correct version so that I can continue doing this - I wanted to buy a slim version anyway and it was a good excuse). I would of course like to be able to compile at least some of my files and systems easily for this, and that would require maintaining makefiles in parallel as well. Now we are up to three different sets of project definition files that need to be maintained. Well, that is not going to happen. The final thing that pushed me over the edge was reading someone else's blog (I forget the name, actually) where he called out solution and project files as intermediate files - that their only real purpose is to allow the user to interact with their source files. Once you stop thinking of these files as part of the source, and instead see them as just a management layer, then making the move to generating them from another process is easy to do.
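
To give a flavour of what this looks like (a generic sketch, not the actual TGS build script - the target and file names here are placeholders), a single CMakeLists.txt drives every generator:

cmake_minimum_required(VERSION 2.8)
project(Sample C)

# One project definition; the chosen generator decides the output format:
#   cmake -G "Visual Studio 10" ..   -> .sln / .vcxproj for the PC
#   cmake -G Xcode ..                -> .xcodeproj for the Mac / iPhone
#   cmake -G "Unix Makefiles" ..     -> makefiles for PS3 Linux
include_directories(${CMAKE_SOURCE_DIR}/inc)

add_library(Sample_Common STATIC src/common/memory.c src/common/string.c)

add_executable(Sample_UnitTest src/test/main.c)
target_link_libraries(Sample_UnitTest Sample_Common)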

Everything is not roses though - I had to make some changes to the CMake source (it's open source) to provide all the support that I needed. However, this was easy to do as much of their configuration is based on an STL map, and the configuration files can add arbitrary text properties to a file. This allowed me to add the few things I needed by creating my own (new) file properties. My specific problem was needing to turn off the use of PCHs on a per-file basis (instead of at the project level). I needed to do this since most of my project is standard ANSI C, but I was unable to compile the Windows header files with those settings as they depend on MS-specific extensions. The solution was to compile the three C files that require those headers without the use of PCHs, so that the rest of the files could use the ANSI C standard PCH. The other issue was that CMake does not differentiate as well as Visual Studio between source, intermediate and target directories. I had to expand its support so that my compilation environment would be properly supported.
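
For reference, the CMakeLists side of that change is just a per-file property. set_source_files_properties is standard CMake; the TG_DISABLE_PCH property (and the file names) are hypothetical stand-ins for what my patched cmake.exe consumes:

# Mark the MS-specific translation units so the (patched) generator emits
# them without the precompiled-header option used by the rest of the project.
set_source_files_properties(
    src/os/OS_Windows.c
    src/os/Window_Windows.c
    src/os/Thread_Windows.c
    PROPERTIES TG_DISABLE_PCH TRUE)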

Owl City - Ocean Eyes



10/11/10: Project Organization

I find it remarkable that in most places that I have worked, people have spent little time or thought on the actual file layout of their projects. Over time I have developed a rather strong preference for the file layout that I use because of the number of times that other methods have created huge headaches for me at the worst possible times. Specifically, I am a strong believer in out-of-source compilation. Let me start out with an explanation of the setup used by TGS. It pretty much follows the standard Unix model. From the root directory I have two source directories: /inc for include files and /src for the source files. Each of these contains sub-directories as necessary for each system, but only the root for each of these paths is included in the compilation settings (so /inc is part of the search path but /inc/TgS COMMON would not be). This means that users need to use relative paths for files stored in sub-directories, but I wanted to retain the clarity of where files were coming from during the compilation phase. There are two intermediate directories: /prj holds the project and solution files that are generated by CMake (see above) for interacting with the source files, and /obj contains all of the compiler intermediate files. Finally, I have two target directories: /bin to hold all executable files and /lib to hold all static libraries. Shared libraries or DLLs would be stored in the /bin directory, but I currently do not use them in any of my projects. I have three non-compilation directories: /web holds the TGS-related web content for this web page, /tst holds the testing environment and its files for the solution's unit tests, and /doc holds documents related to TGS and the performance-profiling studies behind some of my coding decisions.
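
Laid out as a tree, the structure described above looks like this:

/inc  - public headers (only this root is on the include search path)
/src  - source files, with per-system sub-directories
/prj  - CMake-generated project and solution files (intermediate)
/obj  - compiler intermediate files (safe to delete wholesale)
/bin  - executables (and any shared libraries / DLLs)
/lib  - static libraries
/web  - web content for this site
/tst  - unit-test environment and data
/doc  - documents and profiling studies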

Well, that is all well and good, but why is this a cleaner system than in-source compilation or just a random collection of files? I have found it necessary at different times in my career to do various full-project diffs and changes, or to move entire code trees around due to major code initiatives. Every time this happens, the GBs of data attached to the intermediate files become a huge slowdown for all of these operations. It is not even always desirable or easy to remove all of these files. Removing them would require a complete recompile the next time you use the project, and if the process is iterative then you've added a significant amount of time and work. I have also run into many problems with compilers related to PCHs and incomplete cleans creating weird and simply wrong compilations. The ability to clean the environment by blowing away the one obj directory is so nice and easy that it would be sufficient reason by itself for setting up out-of-source. There is also a good psychological reason for separating things out into separate areas (like your lights and darks when doing laundry). One final benefit I found was that it simplified the porting process when I needed to move files between multiple platforms; not having to individually clean out object files made it an easy process.

Sugarland - Incredible Machine



10/04/10: Skill Gap

There are significant generational changes in programming (revolutionary) and then the more normal progress (evolutionary). The game industry is a little odd in that we can get stifled or stagnant on a particular generation of technology because of the mismatch between technology change and the console generation. The transition from 2D to 3D was a major shift that many programmers were never able to meet, and there was a large shift in the industry. The move to multi-core processing was more evolutionary, and in many studios the need to even be aware of the requirements for working concurrently was isolated to a few programmers working at lower levels. However, it is my belief that GPGPU is going to be another major shift. The major complaint about programming on the PS3 was that to achieve maximum success it was necessary for many programmers to be able to program and use the SPUs. The next generation of consoles is going to leverage GPGPU algorithms and break the standard render pipe. This is going to require rethinking how GPU computational resources are used but, more importantly, is going to require knowledge of how to set up and use GPU jobs and tasks to get the most out of the new generation of hardware. For people who found the SPUs an issue - this will be much worse IMHO. Just as large a problem is the gap being generated between the next console generation and the rather significant changes we've seen on the PC landscape in terms of GPGPU and computation in general. The skill gap that we are creating in the industry is significant. I think the companies that come out of the start of the next generation well will be the ones that enforce the culture changes required to work on this hardware now (the SPUs can be used as a basis for skill growth).

Trace Adkins - Cowboy's Back in Town



09/27/10: Batch File Magic

I have always had only a fairly loose grasp on the DOS batch language. Whenever I needed to do something I would spend large amounts of time either with the old DOS manuals or, later on, online looking for examples. I only ever used it lightly, primarily because I rarely needed to do anything in DOS itself. However, while working at Obsidian, the Chief Technology Officer (Chris Jones) wrote the initial build system in DOS batch and I ended up having to debug, maintain and extend the system. It was interesting and sometimes challenging to get the language to do what I wanted. Thankfully, the command line interpreter in NT has some really useful extensions to the standard DOS batch that make using it a little easier. For my own sanity, here is an example set of batch files that I use for processing all of the files in my source tree to generate the resulting HTML found on this web page.

update_web.bat
@echo off

set BATCH_WEB_ROOT_PATH=%CD%\web
set BATCH_ROOT_PATH=%CD%
call process_directory "%CD%" "inc"
call process_directory "%CD%" "src"
set BATCH_ROOT_PATH=
set BATCH_WEB_ROOT_PATH=



process_files.bat
@REM Execute the command in the first parameter for all files
@REM giving the file as the source - and its parallel web
@REM location as the dest
FOR %%F in ("*.*") DO (
    %1 "%%~nxF" "%BATCH_WEB_ROOT_PATH%\%%~nxF"
)


process_directory.bat
@REM Check to see if the web path exist parallel to the
@REM directory, create it, and store it for the file batch
if not exist "%BATCH_WEB_ROOT_PATH%" mkdir "%BATCH_WEB_ROOT_PATH%"
pushd "%BATCH_WEB_ROOT_PATH%"
if not exist %2 mkdir %2
set BATCH_WEB_ROOT_PATH=%CD%\%~2
popd

@REM Recursively execute this batch for all directories
pushd %2
FOR /D %%F in (*.*) DO (
    call "%BATCH_ROOT_PATH%\process_directory.bat" %1 "%%~xnF"
)

@REM Process this directory
call "%BATCH_ROOT_PATH%\process_files.bat" <insert command here>
popd

@REM Restore the previous web root
pushd "%BATCH_WEB_ROOT_PATH%"
cd ..
set BATCH_WEB_ROOT_PATH=%CD%
popd

Jewel - 0304



09/20/10: iPhone Development

I don't have any comments this week on coding suggestions. I have spent most of the time working on having my base unit tests execute and pass on the iPhone. It has been interesting, and many of the decisions I made when building the engine have proven to make my life fairly easy in getting this all to work. I already have a layer to support the current generation of consoles - I am so used to writing code that supports the three platforms that it has become a hard habit to break. Adding support for the iPhone (so far) has only required creating a few wrapper functions for some of the Objective-C functionality, stubbing out the vector library (passing all access to the scalar library), and implementing the base OS platform functions. I have it at a 90% pass rate right now, including all the threading tests. The lockless stuff worked pretty much right out of the box, which was nice to see. I don't have much of a take away on this one, other than that correctly architected code should always be easy to support on new platforms - try not to make too many assumptions about the hardware platform or you could be in for a lot of pain when it comes time to integrate a new platform into the code base. The other interesting aspect was that my life was as easy as it was because this version of the engine is ANSI C, and I had integrated CMake into the build process so that creating project/make files for a new platform was handed off to that system. It worked out great - though there were a few local changes to cmake.exe I had to make for MSVC to set up everything the way that I wanted. I can't wait to get the IO and rendering tests working now - it will be awesome to have a unit test running simultaneously on the iPhone, iPad, and my PC.

Linkin Park - A Thousand Suns



09/13/10: x64 Registers and Calling Convention

There are two principal aspects of the x64 architecture for programmers. The obvious change is a flat 64-bit memory addressing capability, but the second is a little more interesting - there are now sixteen 64-bit general-purpose registers available on the CPU. This means that by default the MSVC compiler uses fastcall semantics for compiling 64-bit programs. This convention places the first four integer-sized parameters into registers (RCX, RDX, R8, R9), floating-point parameters into SIMD registers (XMM0, XMM1, XMM2, XMM3) and the remaining values on the stack. Integer return values are placed into RAX and floating-point return values into XMM0. This means that 64-bit applications potentially need less stack manipulation (memory copies and marshalling into registers) than their older 32-bit cousins. However, there are a few things to keep in mind - the stack pointer itself must be 16-byte aligned whenever calling a function. For example, calling a parameter-less function still puts the return address on the stack, but this is only an eight-byte value. Thus, the stack will have to be padded by an additional eight bytes to meet the function-call stack alignment requirement.
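
As a small illustration (the function and parameter names are mine), here is how the assignment plays out under this convention - note that the register slot is chosen by parameter position, not by type:

#include <stdint.h>

/* MS x64 convention: slot 1 -> RCX, slot 2 -> RDX, slot 3 -> R8, slot 4 -> R9,
   with a floating-point argument taking the XMM register of its slot. */
int64_t Example_Call( int64_t iA /* RCX  */,
                      int64_t iB /* RDX  */,
                      double  fC /* XMM2 */,
                      int64_t iD /* R9   */ )
{
    return iA + iB + iD + (int64_t)fC;   /* integer result returned in RAX */
}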

Sara Bareilles - Kaleidoscope Heart



09/06/10: Lockless Programming and Cache Lines

Having talked about the advantages of lockless programming, there are some things that need to be taken into account when doing the data design for these systems. Primarily, one of the major factors to consider is that any interlocked/atomic commit to a variable by necessity must invalidate the cache line on which the variable is stored. Depending on the usage case for the lock, this may require isolating the variable in question from other data, either by padding the structure or by keeping the lock independent of the data itself. For example:

struct
{
    SpinLock m_Spin_Lock;
    int      m_iData;
} Bad_Layout[101];

When we go to use the spin lock, which requires at minimum an atomic write operation, it will invalidate the entire cache line that contains the spin lock. On some platforms (consoles) a cache line is 128 bytes, which would easily encompass the entire structure. So this guarantees a cache miss if we were to attempt to access the data member immediately after using the spin lock. Alternatively, we can use an SoA (structure-of-arrays) approach to the problem and isolate the two members.

SpinLock g_aSpin_Lock[101];
int      g_aidData[101];

This seems to be better, since using the lock will no longer cause the data to be flushed from the cache. However, we now have a different issue - the global memory layout may still cause data (or other variables) to land on the same cache line as the ends of the spin lock array. More importantly, assuming that the spin lock is a standard 64 bits (8 bytes), we will have 16 spin locks per cache line - creating a possible source of contention for multiple threads attempting to lock/free items from this pool of locks. Also, keep in mind that the standard synchronization primitives automatically execute memory barriers on most platforms. It is good policy to do something similar after modifying data elements that are access-controlled through atomic operations, or the next thread that acquires the access control could view stale data.
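
One way to address both issues is to give each lock its own cache line. A minimal sketch, assuming a 128-byte console cache line and using a simple integer as a stand-in for the SpinLock type above (for full effect the array base should also be cache-line aligned):

#define CACHE_LINE_SIZE 128                 /* assumed console cache line */

typedef long SpinLock;                      /* stand-in for the real lock type */

typedef struct
{
    SpinLock m_Spin_Lock;
    char     m_Pad[CACHE_LINE_SIZE - sizeof( SpinLock )];
} PaddedSpinLock;

/* Each lock is now separated from its neighbours by a full cache line, so an
   atomic write to one lock cannot evict another lock or the data array. */
static PaddedSpinLock g_aPad_Spin_Lock[101];
static int            g_aiData[101];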

Take away:

  1. There is no silver bullet in concurrent programming - knowing your usage pattern, and designing for it rather than for an unknown general case, is key to having a performant system.
  2. Always take care with your data design - and keep in mind how that data is marshalled between memory, cache and registers.

Katy Perry - Teenage Dream



08/30/10: Lockless Programming

There is a lot of talk and discussion about lockless programming, but the reality is often lost in confusion and a lack of specificity in nomenclature. Lockless programming is a vague term that encompasses many methods and algorithmic approaches to multi-threaded programming. It is most often used to imply the use of atomic operations in lieu of the standard synchronization primitives. However, it is not specific about whether the algorithm is lock free (at least one thread is guaranteed to make progress in a finite number of steps) or wait free (all threads are guaranteed to make progress in a finite number of steps). In many cases, algorithms that are described as being lockless are simply re-implementing the standard synchronization primitives in atomic code. For instance, the standard critical section from Microsoft can be configured to spin on the lock check for a defined number of iterations before sleeping (context switching out). This is no different from the many spin locks that are custom written and integrated into a "lock-less" implementation.

My expectations when using and implementing a lockless algorithm are the following (a small sketch illustrating points 1 and 6 follows the list):

  1. The algorithm should be lock free (at least one thread is guaranteed to make progress in a finite number of steps)
  2. In most cases, I want the implementation to guarantee order of access (it is unclear if the standard primitives do this)
  3. The implementation is scalable within the expected number of concurrent executions (8-32)
  4. Implementation needs to guarantee order of read-writes of the controlled data (keep in mind that the standard primitives all integrate memory fence / barriers in their execution)
  5. Unnecessary data flushes are kept to a minimum (for instance, atomic operations will invalidate the entire cache line where the variable is stored)
  6. Spinning should be done in such a way as to free the CPU for use by other threads / hyper threads. Yielding (context switch out) should never be done except as a fail case.
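
As a small illustration of points 1 and 6, here is a generic lock-free increment written with C11 atomics (a sketch, not the engine's implementation):

#include <stdatomic.h>
#include <immintrin.h>      /* _mm_pause */

/* Lock free: the compare-and-swap loop guarantees that some thread always
   makes forward progress (point 1). The pause keeps the core available to
   its sibling hyper-thread while spinning, and we never context-switch out
   (point 6). */
static int Lockless_Increment( atomic_int *piValue )
{
    int iOld = atomic_load_explicit( piValue, memory_order_relaxed );

    while (!atomic_compare_exchange_weak_explicit(
                piValue, &iOld, iOld + 1,
                memory_order_acq_rel, memory_order_relaxed ))
    {
        _mm_pause();        /* spin politely; iOld was refreshed by the CAS */
    }
    return iOld + 1;
}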

Leona Lewis - Echo



08/23/10: Inline and Non-Inline

Performance coding is always a balance - between execution speed and resources consumed. Even achieving the desired execution speed is a balance, as the increase in code complexity needed for a certain optimization can at worst defeat the desired speed increase from the change. A general method to improve execution speed is having a function inlined - compiled directly into the calling function. Let's consider the work that is normally done when calling a function:

  1. Marshall the parameters for the function onto the stack
  2. Obtain the address of the function, and jump to the location.
  3. Construct a stack frame for local variables
  4. Perform the function execution
  5. Deconstruct the stack frame
  6. Marshall return values into the expected return locations (stack, or registers)
  7. Return to the call function

Having the code inline, compiled directly into the calling function, removes all the steps except for the actual performance of the function execution. The parameters are used directly, though it is possible copies will be made on the stack if necessary (though this will be part of the stack frame already created by the calling function). The return values are directly available to the calling function and will be used in that way. There is no jump to a new execution pointer, and no need for a jump back. In all, we have eliminated - for the most part - the memory work in moving the parameters and creating a new stack frame, and the branches caused by the instruction pointer jumps.

So what's the problem? The issue is that code is no different from data in many ways. It needs to be loaded onto the CPU for processing and execution. Inline compilation will of course cause code bloat, which will then cause more memory fetches (and cache misses) when executing the code. In performance-critical code this can lead to execution slowdowns that are hard to pin down. One way of managing this is to keep better control over which functions are inlined and which are not. Similar to the compiler hint to inline a function, it is possible to mark functions not to be inlined.

For example, let's say you have a function that has critical performance requirements. In looking over the implementation of the function, you notice that it has two possible execution paths. However, 95% of the time the calling functions only ever take one of the paths (possibly the second path is for initialization, overflow or other error handling). The problem is that 80% of the code is in the rarely used second execution path. By splitting the function into two - one for the primary majority case and a second containing the bulk of the code (but rarely executed) - it is possible to greatly reduce the executed size of the function (increasing instruction page performance) with minimal cost.
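
A hypothetical sketch of that split using MSVC keywords (the buffer type and the names are mine, and error handling is elided):

#include <stdlib.h>

typedef struct { int *m_aiData; int m_nCount; int m_nCapacity; } Buffer;

/* Cold path: rarely executed, holds most of the code, kept out of line. */
__declspec(noinline) static void Buffer_Push_Grow( Buffer *psBuf, int iValue )
{
    int nNew = psBuf->m_nCapacity > 0 ? psBuf->m_nCapacity * 2 : 16;

    psBuf->m_aiData = (int*)realloc( psBuf->m_aiData, nNew * sizeof( int ) );
    psBuf->m_nCapacity = nNew;
    psBuf->m_aiData[psBuf->m_nCount++] = iValue;
}

/* Hot path: the ~95% case is now small enough to inline at every call site. */
static __forceinline void Buffer_Push( Buffer *psBuf, int iValue )
{
    if (psBuf->m_nCount < psBuf->m_nCapacity)
    {
        psBuf->m_aiData[psBuf->m_nCount++] = iValue;
        return;
    }
    Buffer_Push_Grow( psBuf, iValue );
}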

Take Away:

  1. Function inlining can produce a performance increase by flattening the compiled code, at the cost of code bloat.
  2. There may be an advantage to extracting the (relatively) large amount of code inside an inline function that has only a rare execution probability and placing it into a secondary function tagged as non-inline, to reduce the code bloat.

Tim McGraw - Southern Voice



08/16/10: Pass In Register

Well, apparently it's been over three years since I last made a post. I was originally planning on trying to post something every week, but I am really bad when it comes to any type of regular correspondence. We'll see how long I manage to keep it going this time, eh :)

So, as for the title - pass in register. This is an interesting thing that in some cases can be a very good performance gain, but it needs to be balanced against the number of available registers. On the PPC platforms (consoles) we have a ton of registers, and so passing things by register is something that can be done regularly without too much forethought. However, on the PC the number of available registers is much more limited and care should be taken when using this execution path. Keep in mind as well that the optimizer for the PC platforms can often do a better job, since most functions that use pass-in-register semantics are most likely inlined as well. Be careful trying to be smarter than the optimizer (even if you think - like me - that most optimizers are only slightly better than a five-year-old when it comes to manipulating code).

If you have read this far, there is a good chance you are shaking your head - pass in register on the PC? When using vector (SIMD) variables, most people believe that it is not possible to actually do this on the PC. As it turns out, it is possible - just a little annoying. Assuming you are using the Microsoft compiler, pass in register is done by using the native type. Be warned, typedefs are not equivalent in this case. The common method for creating a cross-platform math library is either to use a type definition to alias the native type to a common name or - the more common case - to use a structure to contain the native type (usually within a union for element access). As it turns out, there is no way (to my knowledge) to convince the compiler to pass either of these in register (the type definition or the container structure). As a matter of fact, the only way that I have found to work is to use the actual __m128 type as the variable type in the parameter. Documentation also warns that this will work only for the first three parameters. If you try to pass more than three parameters using this method, you will get the standard compiler error about being unable to guarantee the variable-stack alignment requirements.
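
A small sketch of the difference (the wrapper type and function names are mine):

#include <xmmintrin.h>

/* Typical cross-platform wrapper type. As discussed above, the compiler will
   not pass this wrapper in an XMM register, so on the PC it usually ends up
   travelling by address (or through memory) instead. */
typedef struct { __m128 m_Vec; } TgVEC;

/* Raw __m128 parameters: the first three can be passed in XMM registers. */
static __m128 Vec_Add_Native( __m128 vA, __m128 vB )
{
    return _mm_add_ps( vA, vB );
}

/* Wrapper version: parameters go through memory - a store-load pair per call. */
static void Vec_Add_Wrapped( TgVEC *psResult, const TgVEC *psA, const TgVEC *psB )
{
    psResult->m_Vec = _mm_add_ps( psA->m_Vec, psB->m_Vec );
}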

Take away:

  1. It is possible to pass in register on the PC platform.
  2. Using this passing method can provide a performance boost by preventing a store-load on the stack.
  3. Needs to be balanced against the usage case and the number of available registers.

Brad Paisley - American Saturday Night



01/02/07: PPC Compiler

I was quite proud of the way I had designed my math and collision code base using templates so that it allowed for easy flexibility between float and double computations. With the native 64-bit nature of the new PPC chips this could be a very strong asset for collisions that require extra precision (quadratic surfaces for instance). Then I find out that my good friend, Mr. Compiler, insists on doing a heap shuffle on each and every parameter for which a 1:1 mapping between variable and register type does not exist. For instance, the compiler will not move a vector through on 4 float registers or a matrix through on 3/4 vector registers. It will insist on doing a heap shuffle - even when inlining the code (don't ask me - I'm just saying what I see in the release-optimized asm output). This is enough to make me want to commit serious bodily harm on someone - the speed loss is ridiculous (for instance, some hand tweaking of one loop in the code base changed my frame rate from 2FPS to 55FPS). There are times you just want to take the compiler out back for a few rounds, eh -) So as it stands, the only way to get the needed efficiency would be to use a #define network of math functions - since this would allow for the automatic transfer of a matrix as vectors. What a pain in the ass, eh -( Anyways - going to play with it a bit more and see - but as far as I know this was never solved for the PS2 compiler either, so I don't have high hopes.



12/15/06: Xbox360 GPU functions

I have been spending some time on the Xbox360 recently, working out how best to use the L2 locking functionality of the hardware and the specific GPU function calls in the API. Essentially they allow for greater separation between CPU and GPU execution, minimizing the number of synchronization points. This has required a rewrite of how video constants are stored and manipulated in general, keeping in mind the 64-byte alignment that is required for data transfer from the CPU to the GPU. Overall it's been interesting.

Someone emailed me recently pointing out that my HTML parser mangled the code drop online - dropping any code after a division symbol (the parser was interpreting it as a failed comment). This has been fixed and so the code base should be more reasonable now. If anyone sees any other problems, please email me!

Implemented a basic input library through XInput. Bought myself a 360 controller for Windows so that I would never have to revisit DirectInput ever again. Anyone who has ever had to create a robust and thorough solution using it will understand - it's a nightmare. I understand why it was designed the way it was - a PC can have any type of input device - but from a game point of view it could drive you nuts. XInput is just a slam-bam-thank-you-ma'am in-and-out affair - it's wonderful.



12/08/06: Vectorization of a Physics Solver

Been spending the last few days taking a standard physics solver setup and solution and vectorizing the resulting operations. It's been a lot of fun and will make porting it to / working with it on an SPU much easier. I have also been trying to isolate small tasks to get a good to-do list going for the holiday break. Finally, I have been working out the last remaining issues on the X360 build - which is now up and running. I threw in basic controller, audio and XMV support since it literally only takes six lines of code on the X360. It's amazing how easy the SDK for that platform is to use. One day when I feel extremely masochistic and in need of a good sledgehammer to the brain, I'll work on the PS3 port. It is possible hell will freeze over first. Hard to say. I did manage to survive multiple PS2 titles, so it ain't all bad -)



08/16/10: Parameter Passing

Reworked the entire code base so that parameters are declared using a specific syntax so as to let me toggle whether certain types (specifically vectors and matrices) are passed by value or by reference. This is so that on the console environments I can pass things by value but keep passing them by reference on the PC platform. Fun... not!

Tried installing Vista x64 since it's got so much bling it must be a great platform to work on, right? I mean, you have to use it if you want to play around with DX10. I'm sure people looking at pus-infected plague victims had the same thoughts, because really it was almost that bad. I couldn't even get the system to run the x64 version of the software. What a waste of time. Guess I'll give it the standard two-year wait before even thinking of moving to the new MS OS, since that is how long it seems to take them to get a decent development platform that works on it. Bleh!

Concentrating on the physics platform for now and continuing to write constraint software. One thing is left to do before moving on to that, which is getting a working compilation again - I had to change the return values for many functions so that, like the parameters, they can be returned either by value or by constant reference. Hopefully that will not take long. My development box is back to Win XP x64 so things should go smoothly tonight.



11/29/06: Stick a Fork in It

Whew. Finally done with LibXML2. I am 100% certain that I will have to revisit it in the future - but for now it compiles inside its own namespace and correctly handles the 32, 32/64, 64/64 and 64/32 native-integer-to-address-size combinations (i.e. it compiles on x86, x64 and PPC). I was a little worried about integrating the Collada code after my experience with LibXML2, but it went in smooth as butter. It really highlighted the difference between the code bases - one professional, the other open-source enthusiast.

I was finally able to start working on the main physics engine. About 50% of the common constraints have been implemented, along with roughly 90% of the solver mechanism. What is left is to complete the constraints and to design and implement a contact graph. Should be interesting. However, once again I was sidetracked, as I have been working on rewriting much of the math library. As it turns out, on PPC systems the large number of registers on the chip means the compiler tends to pass function parameters in registers. However, if the function is not inlined and uses references, it forces the system to fetch the values from memory (ack!). So I am rewriting all the routines to pass most parameters by value. At the same time I'm expanding the vector operation library to cover more of the available PPC routines that were not strictly implemented on XMM. I need them to be able to restrict the solver unit to a purely vectorized system - something along the lines of the wrapper sketched below.
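
By way of illustration (not the engine's actual API - the names are mine), the kind of wrapper that lets the solver stay purely vectorized on both instruction sets looks roughly like this:

// Sketch: one multiply-add name, mapped to VMX/AltiVec on PPC and to SSE elsewhere.
#if defined(__ALTIVEC__)
    #include <altivec.h>
    typedef vector float vec4;
    inline vec4 Vec_MulAdd(vec4 a, vec4 b, vec4 c) { return vec_madd(a, b, c); }
#else
    #include <xmmintrin.h>
    typedef __m128 vec4;
    inline vec4 Vec_MulAdd(vec4 a, vec4 b, vec4 c)
    {
        return _mm_add_ps(_mm_mul_ps(a, b), c);   // a*b + c
    }
#endif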



11/09/06: LibXML2 - The Continuing Saga

Information Overloading:

Just do not do it - ever!

I have continued to work on the LibXML2 integration. It has taken me longer than expected, specifically because I keep waffling on how, and if, I want to integrate this particular library. Unfortunately, it seems to be consistently updated with bug fixes, so a heavily modified integration would make re-integration of those fixes a particularly annoying task. However, as it stands the code base is a slipshod combination of implementation confusion and a complete disregard for possible platform issues. The first time I saw a pointer cast in this manner [(int)(long)(pointer)] I knew that people were just not quite grasping the whole issue of address casting. Simply put, they were putting their faith in the "long" type matching the address space - or in other words, someone slammed in the long cast to avoid a compilation warning when compiling on 64-bit CPUs without actually thinking about it. The subsequent int cast is just the nail in the coffin.

So let me put it out there right now - a CPU's address space size can differ from its intrinsic integer size. For instance, we have commodity CPUs that are 32-bit, 32-bit with 64-bit addressing, and 64-bit (i.e. P4, P4 with EM64T, and PPC64). Depending on either int or long matching your address size is completely invalid. Just as there is a standard size_t definition, there is an intptr_t definition. intptr_t is defined as the integer type that matches the pointer (address space) size.
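
In code, the difference is simply this (an illustration of the fix, not a quote from the LibXML2 source):

#include <stdint.h>

void Example(void *p)
{
    /* Broken whenever long is narrower than a pointer (e.g. Win64):
       int bad = (int)(long)p;                                        */

    /* Correct: intptr_t is guaranteed wide enough to hold a pointer. */
    intptr_t ok = (intptr_t)p;
    (void)ok;   /* silence the unused-variable warning in this sketch */
}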

After dealing with that headache, I had to deal with the fact that information overloading is used throughout much of this engine. Using an integer (i.e. signed) type to represent a data storage size, just so the single value -1 can be used as an error flag, is technically improper. From a pure implementation point of view, if your XML length exceeds 31 bits you've got issues. However, for a library that is meant to be reused by others, it is better to obey the letter of the law, and not just the spirit of the law. I had the same debate when modifying the code base, and decided to obey the letter of the law and changed the size descriptors to unsigned values - this is causing a chain of rewriting since the error flag becomes the maximum size value. I find this a much better decision, since it leaves everything up to (address space - 1) as valid values while reserving only the max value as illegal. This is reasonable, since if you're trying to allocate the entire address space - well, that is just bad, eh.
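
In other words, something like this (my illustration of the convention, not the modified LibXML2 code):

#include <stddef.h>

static const size_t TGS_SIZE_ERROR = (size_t)-1;   /* i.e. SIZE_MAX */

size_t ReadChunk(const char *buffer, size_t bufferSize)
{
    if (buffer == 0 || bufferSize == 0)
        return TGS_SIZE_ERROR;       /* only the maximum value is reserved   */

    /* ... every length from 0 to SIZE_MAX - 1 remains a legal size ...      */
    return bufferSize;
}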

So, all in all, integrating this library has forced me to go in and re-architect many of the overall implementation decisions made throughout the engine. On the upside, the internationalization tools will be useful later for localization issues. This has consumed a lot of my time, and it will continue to do so for at least another week (I am guessing).



11/03/06: LibXML2

Finally got the majority of the source for LibXML2 compiling - now I'm working on getting it into its own namespace and tying it into the existing engine infrastructure. I had to remove a bunch of things from the source - things like FILE access, since that is a haphazard thing on consoles in any case - and I also removed all the network code, since that was just something I did not see as particularly useful for me right now. It's been a project of repeatedly pounding my head against a wall - but it's definitely taking form now. Hopefully I will be able to stick a damn fork in it soon and move on to the Collada support. The major goal is to be able to load and render all of the Collada sample files in as short a period of time as possible. Hopefully without the use of too much vodka!



11/02/06: Open Source Software

While I am not quite an anti-evangelist for open source software, I find that whenever I look in that direction for a possible solution, the source itself is an illegible mess of incoherent and often badly verified code. My current project is to get a working XML processor into the main build so that I can use an XML-based system for the data files, which I can then hook into the Collada system for processing those files. I opted to go with LibXML2 since it was rated very highly in terms of development and standards compliance. I did not realize at the time that making that decision was akin to climbing into an iron maiden because it looked well engineered, with very sharp, good-quality steel. Oops, my mistake! The code base has slowly been driving me nuts, with an almost haphazard approach to pointer arithmetic and variable size definitions. The common belief that sizeof(void*) == sizeof(int) is enough to drive someone trying to get code working on the x64 architecture around the bend. This little side project is definitely gonna take me a couple more days to complete, so until then - adieu.
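
One cheap way to flush out that belief is to make it fail at compile time instead of at runtime (my own habit, not anything in LibXML2, and the static_assert form assumes a C++11 compiler):

/* Drop this next to any code that stores pointers in int: it refuses to
   compile on targets where the assumption is false. The real fix is to
   route such casts through intptr_t instead.                            */
static_assert(sizeof(void *) <= sizeof(int),
              "this code stores pointers in int - invalid on this target");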



10/31/06: Monday Curse

How are you realistically supposed to have the energy or the desire to get anything done on a Monday evening? Well, that was my problem last night - I sat around trying to "decide" what to do next - read: procrastinated doing any work. Did a few compiles and fixes for the PPC build of the system - and then moved on to integrating some external libraries into the solution so that I could watch Heroes on the other monitor. Had to get them in eventually anyway - I needed both a basic text format as well as a platform-specific binary load and save process. For the text (or generic) data file I decided to go with Collada. That way I would not have to reinvent the wheel, and more importantly I would never have to delve back into the "pile" that is the 3D Studio Max SDK. Honestly, a few rounds with razor blades would probably be more pleasant than having to use that thing ever again in my life.

Got zlib into the solution and working. Next was libxml2, which is taking a little longer. Got most of the include-path issues solved, but I still have to either fix the configuration or get iConv into the solution and working. I have not decided - though I am leaning more toward iConv right now. I can see how it could be useful later for UI systems. I plan to support either UTF-8 or UTF-32 text streams for text output, so iConv may be useful outside of libxml2. Once it's up and running I can move on to getting the Collada DOM and implementation files up and running.
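
For reference, the conversion path itself is pretty small - roughly this (an illustrative UTF-8 to UTF-32 call through the iconv API, with error handling trimmed; it is not the engine's eventual wrapper):

#include <iconv.h>
#include <cstring>

bool Utf8ToUtf32(const char *utf8, char *out, size_t outSize)
{
    iconv_t cd = iconv_open("UTF-32LE", "UTF-8");      // (to, from)
    if (cd == (iconv_t)-1)
        return false;

    char  *in       = const_cast<char *>(utf8);        // iconv's historical signature
    size_t inBytes  = std::strlen(utf8);
    size_t outBytes = outSize;

    size_t result = iconv(cd, &in, &inBytes, &out, &outBytes);
    iconv_close(cd);
    return result != (size_t)-1;
}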



10/30/06: TGS Development

I have spent a lot of time trying to figure out how to write a blog. I mean really, how can someone write something daily or even weekly that is remotely interesting or relevant? However, somewhere in my head I knew that I should be doing something with this space - something useful. It finally hit me what would be both useful and a way to actually write this in a relevant manner - I will document the ongoing changes and work being done on my home project (TGS), its features and considerations. It may one day be useful to someone when they start puttering around with engine code -)

My current trajectory for code development is to get a basic simulation loop up and running. I spent the last 6-8 months tweaking simulation code. This code base is primarily meant for floating-point operation - a couple of routines were made to be used on vectorized architectures - but I will have to revisit the collision code once I have isolated what tests and operations I need for vectorized architectures. However, to get a simulation loop up and running in a useful manner I need to be able to draw it, eh -) So I've spent the last couple of weeks on a rendering engine, based on the design/architecture I originally laid out a few years ago. Amazingly, it's still valid (to both my shock and surprise!).

In working with physics code for the last three or four years I have found that visualization of the data is extremely important. The amount of data a simulation both consumes and emits is very often much more than can be easily or reasonably digested by simple examination. Since it is time-based, inserting code that changes the timing of the loop can also change how a bug manifests. Thus, proper visualization tools are very important - and they have to avoid any non-trivial change in engine execution time. To this end I implemented a basic geometry draw call in the render engine (this allows someone to simply call for a draw(sphere) type of thing). Since physics code most often uses basic primitive forms rather than meshes (or convex meshes), this allows for certain optimization techniques that might otherwise not be available. Specifically, we know that we will be re-rendering a very small and select group of vertex buffers. So I created a simple test that brought the current render engine to a crawl (5 FPS, caused by a 5-way tessellated sphere being drawn 1024 times in a fixed grid pattern). This was the main task during the week.

The weekend task was to get a geometry instancing approach implemented for these debug render functions. I went with a hardware (shader 3.0) approach where the vertex stream for the primitive is assigned a frequency equal to the number of primitives to be drawn, plus an instance data stream containing a colour and a model-to-world transformation (as a 3x4 matrix). The primitive stream is a managed data stream and the instance data is a dynamic stream that is kept locked whenever it is not actively rendering. A max limit is stored as an enumeration in the class, and if the number of calls exceeds this limit, the render call is immediately pushed out and the instance data is reset. Put more simply:

Instance Draw Call -> Store data in instance array -> Draw all instances
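
In rough code, that flow looks like this (names invented for illustration; the D3D9 stream-frequency setup and the lock/copy into the dynamic stream are reduced to a stub):

class InstancedDebugDraw
{
public:
    enum { MAX_INSTANCES = 1024 };       // the "max limit stored as an enumeration"

    struct Instance                      // one entry in the dynamic instance stream
    {
        float m_Colour[4];
        float m_WorldTransform[12];      // 3x4 model-to-world matrix
    };

    InstancedDebugDraw() : m_Count(0) {}

    void Draw(const Instance &inst)      // "Instance Draw Call"
    {
        if (m_Count == MAX_INSTANCES)
            Flush();                     // buffer full - push the render call out now
        m_Instances[m_Count++] = inst;   // "Store data in instance array"
    }

    void Flush()                         // "Draw all instances"
    {
        if (m_Count == 0)
            return;
        SubmitInstancedDraw(m_Instances, m_Count);   // one DIP for the whole batch
        m_Count = 0;                                 // reset the instance data
    }

private:
    void SubmitInstancedDraw(const Instance *data, unsigned count)
    {
        // Stub: lock the dynamic stream, copy 'count' instances, set the two
        // stream frequencies (SetStreamSourceFreq) and issue the draw call.
        (void)data; (void)count;
    }

    Instance m_Instances[MAX_INSTANCES];
    unsigned m_Count;
};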

Unsurprisingly, this did not change my resulting frame rate very much. I say unsurprisingly because, given the lack of any other processing occurring on the system, I am most definitely not CPU bound - and most of my processing time would be spent on pixel-bound issues, since textures are for the most part not being used. Vertex processing time between instanced and non-instanced calls would not be that different, other than perhaps a slightly higher cost due to the instance data stream. However, in a fully working environment, I definitely see this new method as a win. The previous method required sending the colour and the model-to-world matrix to the card for each render call - that is a lot of SetVertexShaderConstantF calls. So the instancing method eliminates the ShaderConstant calls and drops the number of DIPs - thus reducing the number of possible CPU-GPU synchronization points. I am hoping this will make a big difference when running in a real-world application. Future testing will tell me if I'm right -)



09/06/06: Textbooks and Concentration

It did not occur to me until recently that textbook reading is really a skill that has to be trained to be usable. It's been a couple of years now since I've read anything longer or weightier academically than a journal or transaction paper, and now I'm sitting down to read a couple of books on Voronoi diagrams and computational geometry and it's hurting the old brain - well, at least the eye muscles. It's amazing how fast we get used to increasingly short focal periods. I have spent the last few years working on multiple computers with multiple screens. I program while having a movie playing on an opposing screen. We laud our ability to multi-task and fail to realize that what we're really saying is that we are decreasing our ability to focus on one particular task for long periods of time. I used to sit down with a textbook and read it all day, stopping only for lunch. A couple of months ago I found myself having to concentrate to remain focused on the material and the book after an hour. I am back in reading form now - but it made me wonder how many of the skills we think are positive indicators of ability in computer programming are in reality a decrease in real functionality. Bleah!



08/29/06: Firefox and Writing-Mode

So I'm starting to think that having a blog means writing something more than once a month - or, in this case, more than once every three months. Been busy working on NWN2, so many things outside of work have kinda slid into obscurity over the last few months. I am beginning to get the web page ready for a broader, more public spectrum of people to access and found out that many of my code pages did not render at all in Firefox. I realize that text formatting is a complicated matter - I spent too many years in front of DTPs (Ventura and then later PageMaker) not to know. However, the way each browser seems to flout any type of conformance to any standard is enough to drive people up a wall. I am now 100% certain that browsers are made by web designers who believe in job security through obscurity. Seriously, if any language with the complete lack of standards compliance that HTML/CSS has came out as a professional product for general-purpose programming, it would be laughed out of the market and dropped into the dustbin of history. The one saving grace was PHP, where I could create a ridiculous state list to take into account the vagaries of the different browsers. What a pain!

Anyways - with that done, the pages should be showing in Firefox now - I downloaded it just to test and it seems to be working fine. Testing in IE 7.0 as well, just to make sure.

Starting to think about the future, and what I want to do with my little project. The XNA announcement from Microsoft, letting casual developers make X360 games, has me thinking about distributing the physics/collision/scene-graph part of the engine for people making free games to play with. Eventually I may put together a toolset and graphics core, but that is probably much more long-term.

If you have any questions about the collision code (http://www.andrewaye.com/tgs_collision.html) feel free to send me an email or just talkback on the blog. I will be adding explanations and commentaries on request.



04/14/06: MMORPGs R.I.P.

This will be a talk of days long gone, a stroll down memory lane - and a simple question: can making a better game destroy the gaming experience? It is also about the communities that are generated by and through MMORPGs. I was a rabid PvPing attack fiend in Ultima Online, a mercurial raiding fanatic in EverQuest and a solo player in WoW. I did not really spend enough time in the other MMORPGs that I have played to establish much of an online identity. I can say with certainty that years of my life have been spent online playing these games. /played in EQ was just so wrong - people should not be able to see that stuff! I remember old guild buds who were significantly over 600 days played; when I retired I was around 250 on just one of my characters.

There is no question that the first time you commit to one of these games carries with it the feeling of a new and fresh experience - but that is not what keeps people playing. It is a combination of the social and the competitive. I have known people who live and die by their eqrankings standing, and others who religiously followed the guild pages so they would always have the best gear, or so they could claim to be the first to beat a game event. That is, of course, the competitive spirit of the game. The social aspect comes from the enforced intimacy and teamwork that has to happen in groups and in raiding. However, in the desire to make a better game, many of the new MMORPGs coming onto the market are striving to make the experience more interactive, demanding ever more user input. This has been heralded as a way to differentiate the skilled from the newb, and as a way to make the game more fun. However, it also detracts from the ability to interact with your fellow players when in a group, and simply adds another forum for the competitive drive to compete. Too much attention has been spent on the need to be competitive, and too little on the social aspects of the MMORPG.

Detractors of the genre will talk about long camps for rare items and about non-interactive and boring game play. However, if you think about the stories that are most often recounted when talking about previous gaming experiences, they are very often about long camps and rare items. It is the very issues that people think and talk about as problems that generate the shared experiences that form the bonds of a social group. I even read a comment made by a leader in the gaming community that they felt it would be impossible to match EQ's status unless it was possible to deride the most recent film flop while in the middle of combat during an XP camp and grind. It is this duality: better game play and mechanics remove or reduce the almost physical force that pushed people to interact and talk - and it was that force, born of the complete lack of game interaction, that made EQ such a strong and pervasive experience.

World of Warcraft has certainly beaten every EverQuest record ever made - the number of people playing the game has exceeded even Blizzard's expectations. But in many ways, I feel that it is still inferior to EQ. I played the game for over three months and never felt the same sense of community as I did in EQ. In many ways people actually avoided talking to each other, and chat channels were minimized or removed from the interface. There was next to nothing in terms of grouping, and only at the very high levels did people seriously commit to it. There was no sense of belonging to a gaming community. The game focused on only one driving aspect of the MMORPG - the drive and need to compete. This may be partly due to the PvP nature of the game - but I think it was primarily a design decision: they removed the social aspects of the game to focus on and increase the competitive parts. In so doing, they created a very sterile world.



04/08/06: Carmack vs Physics - First Round

Talking some more on the subject of physics and games - I wanted to shoot out some things based on the Quakecon 2005 keynote by John Carmack.

But I do think it's a mistake for people to try and go overboard and try and do a real simulation of the world because it's a really hard problem, and you're not going to give that much real benefit to the actual game play. You'll tend to make a game which may be fragile, may be slow, and you'd better have done some really, really neat things with your physics to make it worth all of that pain and suffering.

The above is one of the things that Mr. Carmack said in reference to physics during his keynote. When I read this I was amused by the fact that, taken out of context, this is the same type of thing someone could have said a couple of decades ago about computer graphics. It has been a common error recently to confuse graphic fidelity with game play. At its core, graphics has the same job as physics in a game - audience immersion. Game play has nothing to do with graphics - one of the things that Nintendo has been trying to show the industry recently. Game play is about game design, about the actual process of playing the game, the rules and strictures by which the environment has been established. Graphics has been about immersing the player in the environment, in creating the suspension of disbelief necessary for the audience to feel a part of the game rather than a spectator of it. In much the same way, physics helps and reinforces these same precepts. As an industry we've stagnated over this issue of graphics as game play, and we need to get past it. Neither graphics nor physics supplies game play. However, game play can use either or both!

Doom and Doom III are nearly the same game - they have the same style and mechanics, with only minor tactical changes (principally discussing the single-player game). There have been no fundamental changes in game play from one to the other. However, Mr. Carmack would like us to believe that this change in graphics is a game play improvement, while increasing the use of physics would not provide the same thing. This attitude is what I've never really understood. Real fluid simulations, deformable and destructible environments, the use of the environment in combat, and physically reactive characters are just a few of the things that would add both game play and immersion. When your character in Doom can be, for all intents and purposes, Jackie Chan, then we can discuss the decreasing uses of physics in improving game play.

As I've mentioned previously, the major change that the use of physics in a game presents is the environment changing from a controlled state machine to being fully dynamic. This change has rattled and scared many developers, who have restricted the use of physics to purely cosmetic purposes or to very controlled situations. However, I feel there is a great amount of potential in opening up the gaming mechanic - allowing for more non-linear and process-oriented gaming systems. Designers would be able to set up situations that could be activated through environmental analysis, or even be changed on the fly based on situational modifiers presented by the environment.



04/07/06: Physics in Computer Games

Physics has been a hot topic in the games industry for the last couple of years. I personally think that both collision and physics will be a very important development in audience immersion, bringing games closer to a virtual environment than previous generations managed. This subject has definitely encouraged a very wide range of opposing views - however, it's the question of the usefulness of physics in a game that I want to talk about for a few words. Without question, making a game include physical components breaks a firm tradition of the game industry, where the level designer had complete control of the gaming environment at all times. Physics essentially introduces a certain amount of chaos into the system that has to be taken into account during game and level design. While it is more than possible to include physical puzzles or requirements in a game, the argument that these same events could be scripted (in some complex fashion) is correct. It is my opinion that physics is not a technology that will provide game designers with new gaming tools; rather, it is a way to help create the suspension of disbelief necessary to lull an audience into the narrative the game is trying to express. It is just like texturing as opposed to flat shading: it does not actually provide a new game mechanic, but it does help increase the level of immersion during play.