18 Aug 2018

Django community aggregator: Community blog posts

Inline building in upcoming Evennia 0.8


Evennia, the Python MUD-server game development kit, is slowly creeping closer to its 0.8 release.

In our development branch I've just pushed the first version of the new OLC (OnLine Creator) system. This system allows builders (who may have limited coding knowledge) to customize and spawn new in-game objects more easily, without code access. It is started with the olc command in-game and is a visual system for manipulating Evennia Prototypes.


Briefly on Prototypes

The Prototype is an Evennia concept that has been around for a good while: a Python dictionary holding specific keys whose values represent properties on a game object. Here's an example of a simple prototype:

{"key": "My house",
 "typeclass": "typeclasses.houses.MyHouse"}


By passing this dict to the spawner, a new object named "My house" will be created. It will be set up with the given typeclass (a 'typeclass' is, in Evennia lingo, a Python class with a database backend). A prototype can specify all aspects of an in-game object - its attributes (like description and other game-specific properties), tags, aliases, location and so on. Prototypes also support inheritance - so you can expand on an existing template without having to add everything fresh every time.
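
As a minimal sketch, a pair of module-based prototypes using inheritance could look something like this (the typeclass path is made up, and the exact reserved key for pointing at a parent - written as prototype_parent below - may differ between Evennia versions, so check the documentation):

GOBLIN = {
    "key": "goblin",
    "typeclass": "typeclasses.monsters.Monster",  # hypothetical typeclass path
    "desc": "A snivelling goblin.",                # non-reserved keys typically end up as Attributes
}

GOBLIN_ARCHER = {
    "prototype_parent": "GOBLIN",  # inherit everything from GOBLIN ...
    "key": "goblin archer",        # ... but override the name
    "weapon": "shortbow",          # ... and add something new
}

Passing GOBLIN_ARCHER to the spawner then gives an object with everything GOBLIN defines plus the overrides.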

The main reasons Prototypes exist in Evennia are to make it possible to create many variations of an object without writing a new typeclass for each one, and to let builders do this without needing code access.

What's new

As said, Prototypes have been around for a good while in Evennia. But in the past they were either manually entered directly as a dict on the command line, or created in code and read from a Python module. The former solution is cumbersome and requires that you know how to build a proper-syntax Python dictionary. The latter requires server code access, making them less useful to builders than they could be.

--- Prototype wizard ---

A prototype is a 'template' for spawning an in-game entity. A field of the
prototype can either be hard-coded, left empty or scripted using $protfuncs -
for example to randomize the value every time a new entity is spawned. The
fields whose names start with 'Prototype-' are not fields on the object itself
but are used for prototype-inheritance, or when saving and loading.

Select prototype field to edit. If you are unsure, start from [1]. Enter [h]elp
at any menu node for more info.
________________________________________________________________________________
Validate prototype | SAve prototype | SPawn prototype | LOad prototype | SEarch objects | Quit
 1: Prototype-Key (required)       9: Permissions
 2: Prototype-Parent (required)   10: Location
 3: Typeclass (required)          11: Home
 4: Key                           12: Destination
 5: Aliases                       13: Prototype-Desc
 6: Attrs                         14: Prototype-Tags
 7: Tags                          15: Prototype-Locks
 8: Locks


In Evennia 0.8 you can still insert the Prototype as a raw dict, but spawn/menu or the new olc command now opens a menu-driven interface.

Select a prototype to load. This will replace any prototype currently being edited!
____________________________________________________________________________________
Select with <num>. Other actions: examine <num> | delete <num>
Back (index) | Validate prototype | Quit
 1: goblin_archer      5: goblin_archwizard
 2: goblin_wizard
 3: goblin
 4: archwizard_mixin


More importantly, builders can now create, save and load prototypes in the database for themselves and other builders to use. The prototypes can be tagged and searched as a joint resource. Builders can also lock prototypes so that others cannot read them or use them for spawning. Developers can still supply module-based "read-only" prototypes (as starting points or examples for their Builders, for example).

Found 1 match.
(Warning: creating a prototype will overwrite the current prototype!)
______________________________________________________________________
Actions: examine <num> | create prototype from object <num>
Back (index) | Quit
 1: Griatch(#1)


You can now also use the menu to search for and create a new Prototype based on an existing object (if you have access to do so). This makes it quick to start up a new prototype and tweak it for spawning other similar objects. Of course you could spawn temporary objects without saving the prototype as well.

The Typeclass defines what 'type' of object this is - the actual working code
to use. All spawned objects must have a typeclass. If not given here, the
typeclass must be set in one of the prototype's parents.

[No typeclass set]
________________________________________________________________________________
Back (prototype-parent) | Forward (key) | Index | Validate prototype | Quit
 1: evennia.contrib.tutorial_world.mob.Mob
 2: evennia.contrib.tutorial_world.objects.Climbable
 3: evennia.contrib.tutorial_world.objects.CrumblingWall
 4: evennia.contrib.tutorial_world.objects.LightSource
 5: evennia.contrib.tutorial_world.objects.Obelisk
 6: evennia.contrib.tutorial_world.objects.Readable
 7: evennia.contrib.tutorial_world.objects.TutorialObject
 8: evennia.contrib.tutorial_world.objects.Weapon
 9: evennia.contrib.tutorial_world.objects.WeaponRack
10: evennia.contrib.tutorial_world.rooms.BridgeRoom
                                               current: (1/3)        next page


Builders will likely not know which typeclasses are available in the code base. There are now a few ways to list them. The menu display makes use of Evennia 0.8's new EvMenu improvements, which allow for automatically creating multi-page listings (see example above).

There is also a new switch to the typeclass command, /list, that will list all available typeclasses outside of the OLC.

Protfuncs

Another new feature is Protfuncs. Similar to how Inlinefuncs allow you to embed the result of a function call inside a text string, Protfuncs allow you to call functions inside a prototype's values. A Protfunc is given in the form $funcname(arguments), where the arguments could themselves contain one or more nested Protfuncs.

As with other such systems in Evennia, only Python functions in a specific module or modules (given by settings) are available for use as Protfuncs in-game. A bunch of default ones are included out of the box. Protfuncs are called at the time of spawning. So for example, you could set the Attribute

Strength = $randint(5, 20)

to automatically spawn objects with a random strength between 5 and 20.
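
Adding your own Protfuncs boils down to putting plain Python functions in a module and pointing the relevant setting at it. Here is a minimal sketch, assuming the setting is named PROT_FUNC_MODULES and that a Protfunc receives its arguments as strings (check the documentation for the exact spelling and signature):

# in, for example, world/protfuncs.py
# (assumed setting: PROT_FUNC_MODULES += ["world.protfuncs"] in settings.py)
import random

def dice(roll, *args, **kwargs):
    """
    Usage in a prototype value: $dice(2d6)
    Rolls <num> dice with <sides> sides each and returns the sum.
    """
    num, _, sides = str(roll).partition("d")
    return sum(random.randint(1, int(sides)) for _ in range(int(num)))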

prototype-key: goblin, -tags: [], -locks: spawn:all();edit:all()
-desc: Built from goblin
prototype-parent: None
key: goblin
aliases: monster, mob
attrs:
  desc = You see nothing special.
  strength = $randint(5,20)
  agility = $random(6,20)
  magic = 0
tags: mob (category: None)
locks: call:true();control:id(1) or perm(Admin);delete:id(1) or perm(Admin);
  edit:perm(Admin);examine:perm(Builder);get:all();puppet:pperm(Developer);
  tell:perm(Admin);view:all()
location: #2
home: #2

No validation errors found. (but errors could still happen at spawn-time)
________________________________________________________________________________
Actions: examine <num> | remove <num>
Back (index) | Validate prototype | Quit
 1: Spawn in prototype's defined location (#2)
 2: Spawn in Griatch's location (Limbo)
 3: Spawn in Griatch's inventory
 4: Update 2 existing objects with this prototype


When spawning, the olc will validate the prototype and run tests on any Protfunc used. For convenience you can override the spawn-location if any is hard-coded in the prototype.


The system will also allow you to try updating existing objects created earlier from the same-named prototype. It will sample the existing objects and calculate a 'diff' to apply. This part is still a bit iffy, with edge cases that still need fixing.

Current status

The OLC is currently in the develop branch of Evennia - what will soon(ish) merge to become Evennia 0.8.

It's a pretty big piece of code and as such it's still a bit unstable, and there are edge cases and display issues to fix. But it would be great to have more people trying it out and reporting errors so the teething issues can be ironed out before release!




Building Image: Released as Creative Commons here

18 Aug 2018 5:23pm GMT

15 Aug 2018


django-pipeline and Zopfli

tl;dr; I wrote my own extension to django-pipeline that uses Zopfli to create .gz files from static assets collected in Django. Here's the code.

Nginx and Gzip

What I wanted was to continue to use django-pipeline which does a great job of reading a settings.BUNDLES setting and generating things like /static/js/myapp.min.a206ec6bd8c7.js. It has configurable options to not just make those files but also generate /static/js/myapp.min.a206ec6bd8c7.js.gz which means that with gzip_static in Nginx, Nginx doesn't have to Gzip compress static files on-the-fly but can basically just read them from disk. Nginx doesn't care how the file got there but an immediate advantage of preparing the file on disk is that the compression can be higher (smaller .gz files). That means smaller responses to be sent to the client and less CPU work needed from Nginx. Your job is to set gzip_static on; in your Nginx config (per location) and make sure every compressible file exists on disk with the same name but with the .gz suffix.

In other words, when the client does GET https://example.com/static/foo.js Nginx quickly does a read on the file system to see if there exists a ROOT/static/foo.js.gz and, if so, returns it. If the file doesn't exist, and you have gzip on; in your config, Nginx will read ROOT/static/foo.js into memory, compress it (usually with a lower compression level) and return that. Nginx figures out whether to do any of this dynamically by reading the Accept-Encoding header from the request.

Zopfli

The best solution today to generate these .gz files is Zopfli. Zopfli is slower than good old regular gzip but the files get smaller. To manually compress a file you can install the zopfli executable (e.g. brew install zopfli or apt install zopfli) and then run zopfli $ROOT/static/foo.js which creates a $ROOT/static/foo.js.gz file.

So your task is to build some pipelining code that generates a .gz version of every static file your Django server creates.
At first I tried django-static-compress which provides an extension to the regular Django staticfiles storage. The default staticfiles storage is django.contrib.staticfiles.storage.StaticFilesStorage and that's what django-static-compress extends.

But I wanted more. I wanted all the good bits from django-pipeline (minification, hashes in filenames, concatenation, etc.). Also, in django-static-compress you can't control the parameters to zopfli, such as the number of iterations. And with django-static-compress you have to install Brotli, which I can't use because I don't want to compile my own Nginx.

Solution

So I wrote my own little mashup. I took some ideas from how django-pipeline does regular gzip compression as a post-process step. And in my case, I never want to bother with any of the other files that are put into the settings.STATIC_ROOT directory from the collectstatic command.

Here's my implementation: peterbecom.storage.ZopfliPipelineCachedStorage. Check it out. It's very tailored to my personal preferences and use case but it works great. To use it, I have this in my settings.py:

STATICFILES_STORAGE = "peterbecom.storage.ZopfliPipelineCachedStorage"
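
To give a rough idea of the approach, here is a much simplified sketch, not the actual implementation (see the linked code for that). It assumes the zopfli PyPI package and django-pipeline's PipelineCachedStorage, whose name varies between pipeline versions:

import zopfli.gzip
from pipeline.storage import PipelineCachedStorage


class ZopfliPipelineStorage(PipelineCachedStorage):
    compress_suffixes = (".js", ".css", ".svg", ".txt", ".html")

    def post_process(self, *args, **kwargs):
        # Let django-pipeline do its concatenation/minification/hashing first...
        for name, hashed_name, processed in super().post_process(*args, **kwargs):
            yield name, hashed_name, processed
            # ...then write a .gz sibling next to every compressible output file.
            if isinstance(hashed_name, str) and hashed_name.endswith(self.compress_suffixes):
                with self.open(hashed_name, "rb") as source:
                    data = source.read()
                with open(self.path(hashed_name) + ".gz", "wb") as destination:
                    destination.write(zopfli.gzip.compress(data))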

I know what you're thinking

Why not try to get this into django-pipeline or into django-static-compress? The answer is frankly laziness. Hopefully someone else can pick up this task. I have fewer and fewer projects where I use Django to handle static files. These days most of my projects are single-page-apps that are 100% static and use Django for XHR requests to get the data.

15 Aug 2018 9:04pm GMT

14 Aug 2018


Django lock decorator with django-redis

Here's the code. It's quick-n-dirty but it works wonderfully:

import functools
import hashlib

from django.core.cache import cache
from django.utils.encoding import force_bytes


def lock_decorator(key_maker=None):
    """
    When you want to lock a function from more than 1 call at a time.
    """

    def decorator(func):
        @functools.wraps(func)
        def inner(*args, **kwargs):
            if key_maker:
                key = key_maker(*args, **kwargs)
            else:
                key = str(args) + str(kwargs)
            lock_key = hashlib.md5(force_bytes(key)).hexdigest()
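            # cache.lock() is provided by django-redis; it is not part of Django's core cache API.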
            with cache.lock(lock_key):
                return func(*args, **kwargs)

        return inner

    return decorator

How To Use It

This has saved my bacon more than once. I use it on functions that really need to be made synchronous. For example, suppose you have a function like this:

def fetch_remote_thing(name):
    try:
        return Thing.objects.get(name=name).result
    except Thing.DoesNotExist:
        # Need to go out and fetch this
        result = some_internet_fetching(name)  # Assume this is sloooow
        Thing.objects.create(name=name, result=result)
        return result

That function is quite dangerous because, if it is executed by two concurrent web requests, they will trigger two "identical" calls to some_internet_fetching. If the database didn't already have the name, that will most likely trigger two calls to Thing.objects.create(name=name, ...), which can lead to integrity errors; and even if it doesn't, the whole function breaks down because it assumes that there is only 1 or 0 of these Thing records.

Easy to solve, just add the lock_decorator:

@lock_decorator()
def fetch_remote_thing(name):
    try:
        return Thing.objects.get(name=name).result
    except Thing.DoesNotExist:
        # Need to go out and fetch this
        result = some_internet_fetching(name)  # Assume this is sloooow
        Thing.objects.create(name=name, result=result)
        return result

Now, thanks to Redis distributed locks, one call to the function is always allowed to finish before another one starts. All the hairy locking (in particular, the waiting) is implemented deep down in Redis, which is rock solid.

Bonus Usage

Another use that has also saved my bacon is functions that aren't necessarily called with the same input arguments, but where each call is so resource-intensive that you want to make sure only one of them runs at a time. Suppose you have a Django view function that does some resource-intensive work and you want to stagger the calls so that only one runs at a time. Like this for example:

def api_stats_calculations(request, part):
    if part == 'users-per-month':
        data = _calculate_users_per_month()  # expensive
    elif part == 'pageviews-per-week':
        data = _calculate_pageviews_per_week()  # intensive
    elif part == 'downloads-per-day':
        data = _calculate_download_per_day()  # slow
    elif you == 'get' and the == 'idea':
        ...

    return http.JsonResponse({'data': data})

If you just put @lock_decorator() on this Django view function, and you have some (almost) concurrent calls to this function, for example from a uWSGI server running with threads and multiple processes, then it will not synchronize the calls: the default key maker builds the lock key from the arguments, and every request object is different.

The solution to this is to write your own function for generating the lock key, like this for example:

@lock_decorator(
    key_maker=lambda request, part: 'api_stats_calculations'
)
def api_stats_calculations(request, part):
    if part == 'users-per-month':
        data = _calculate_users_per_month()  # expensive
    elif part == 'pageviews-per-week':
        data = _calculate_pageviews_per_week()  # intensive
    elif part == 'downloads-per-day':
        data = _calculate_download_per_day()  # slow
    elif you == 'get' and the == 'idea':
        ...

    return http.JsonResponse({'data': data})

Now it works.

How Time-Expensive Is It?

Perhaps you worry that 99% of your calls to the function don't have the problem of concurrent calls at all. How much is the overhead of this lock costing you? I wondered that too and set up a simple stress test where I wrote a really simple Django view function. It looked something like this:

@lock_decorator(key_maker=lambda request: 'samekey')
def sample_view_function(request):
    return http.HttpResponse('Ok\n')

I started a Django server with uWSGI with multiple processes and threads enabled. Then I bombarded this function with a simple concurrent stress test and observed the requests per minute. The cost was extremely tiny and almost negligible (compared to not using the lock decorator). Granted, in this test I used Redis on redis://localhost:6379/0 but generally the conclusion was that the call is extremely fast and not something to worry too much about. But your mileage may vary so do your own experiments for your context.
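
A trivially simple way to run that kind of concurrent test yourself, assuming the requests library and your server running on localhost:8000 (the URL below is just a placeholder for wherever the sample view is mounted), is something like this:

import concurrent.futures

import requests

URL = "http://localhost:8000/sample-view/"  # placeholder URL for the sample view

def hit(_):
    return requests.get(URL).status_code

with concurrent.futures.ThreadPoolExecutor(max_workers=10) as executor:
    status_codes = list(executor.map(hit, range(200)))

print(status_codes.count(200), "successful requests out of", len(status_codes))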

What's Needed

You need to use django-redis as your Django cache backend. I've blogged before about using django-redis, for example Fastest cache backend possible for Django and Fastest Redis configuration for Django.
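
For reference, a typical django-redis cache configuration in settings.py looks something like this (adjust LOCATION to wherever your Redis is running):

CACHES = {
    "default": {
        "BACKEND": "django_redis.cache.RedisCache",
        "LOCATION": "redis://127.0.0.1:6379/0",
        "OPTIONS": {
            "CLIENT_CLASS": "django_redis.client.DefaultClient",
        },
    }
}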

14 Aug 2018 7:08pm GMT