25 Jun 2019

feedDjango community aggregator: Community blog posts

Series A Funding: Why Outsourcing Should Be Part of Your Startup Growth Strategy

You've raised your seed funding round and are on the path to Series A. Your startup shows promise! Investors have taken note. That's fantastic, and we're excited for you. But we also want to make it clear: Nowadays, to be a successful startup, you need to get right back to thinking about how you can spend that seed funding as wisely as possible. And you need to start thinking ASAP about how you'll ensure your startup is deemed worthy of any additional funding.

Why the super-practical downer advice? While more startups than ever are getting seed money, fewer than ever are getting follow-on funding. So that Series A funding you're hoping for? It's not a given. Also, as accelerator Y Combinator always advises its startups, it's not your money. Getting more money depends on showing you can be a smart spender with good margins and a sustainable business model. Let me explain.

What's the "Series A Crunch"? Analyzing Seed vs. Series A Funding

The venture capital community coined the term "Series A Crunch" to describe a trend they saw in Series A funding. Essentially, they noted that, while huge numbers of startups were easily raising large sums in their seed rounds, far fewer startups were moving on to have successful Series A rounds. In the big picture, that means:

  1. More startups are getting more seed funding.
  2. More startups are competing for Series A funding.
  3. PE and VC investors are becoming more selective, providing Series A funding to fewer startups.
  4. Increasingly, startups only receive Series A funding if PEs and VCs assess that they're low-risk, high-reward business models with demonstrated traction in a growing market.

And therein lies the "crunch." What does this mean for a successful startup looking to create sustainable competitive advantage?

Higher Expectations and Higher Stakes

The expectations for seed-stage performance are now higher, and only some startups will meet them. For example, VCs and PEs increasingly expect that seeded startups should be generating revenue before receiving Series A funding. TechCrunch offers up a compelling analysis from Silicon Valley VC firm Wing. In 2010, only 15% of seed-stage companies that had raised Series A rounds were already making money. Compare that to 2019, when Wing's data shows that "82 percent of companies that raised Series A rounds from top investors last year are already making money off their customers."

What does this mean? VCs and PEs are unlikely to offer Series A funding to startups that aren't showing sufficiently solid evidence of market traction. If a startup can't show the potential for significant growth in both revenues and customer base, that follow-on funding is unlikely to flow. The stakes are high, and the risk of failure is even higher. CB Insights found that "nearly 67% of startups stall at some point in the VC process and fail to exit or raise follow-on funding."

What Are the Series A Business Risks for Startups?

VCs and PEs aren't interested in funding risky investments. That's why smart startups are focused on reducing their business risk and on identifying sources of competitive advantage. Risk impacts countless areas of any business. Below, we've covered the three most important business risks for startups looking to raise Series A funding.

Business Model Risk: Growth Matters

How's that business model looking? Long gone are the days when a sparkly new idea was sufficient to generate funding beyond the seed round. Nowadays, to make it to Series A, startups must have business models that demonstrate traction in several key areas, including:

  1. Revenue growth: As mentioned above, even early-stage startups are expected to start making money. The next step is showing that the business model can create sustainable revenue growth.
  2. Customer growth: Are your customers out there? Even more importantly, will they buy your product? And can your customer base continue to grow?
  3. Market share growth: Can your business model compete and win in your chosen market?

Before VC and PE investors are willing to follow you to Series A, you must derisk your business model as thoroughly as possible in these areas. If investors don't see a strong, data-backed business plan that shows clear potential for growth, they'll mostly see… risk. What you can do:

  1. Reality-check your business plan, examining your assumptions. Can you test your assumptions? Are they sound? For example, can you demonstrate that your market is large enough to provide the type of growth investors seek?
  2. Fully assess barriers to entry. How does your product stack up against the competition? Is your product truly differentiated against competitor products?
  3. Build out your business plan with as much data as possible. If the economics don't make sense, investors are going to run in the opposite direction.

Technology Risk: Speed Matters

Nowadays, technology development is part of a high percentage of startup business plans. That means VC and PE investors must consider the time, expertise, and resources required for design and development. After all, these variables all inform a startup's technology risk profile. When assessing startups' technology-related business risks, VCs and PEs are likely to focus on:

  1. Development scope and type. Does your startup need to build technology from scratch, or can it leverage technologies that are already available? What alternatives already exist in the market, and is your product differentiated in meaningful ways? What is the risk that you'll be unable to build the technology? Does your product rely on third-party services? What happens if those third-party services go down?
  2. Development speed. How quickly can development happen? How likely is it that development will take significantly longer than anticipated? Can you build and iterate quickly enough to deliver a reliable, functional product to customers?
  3. Development expertise. Do you have access to the expertise and skill sets you need to develop the desired technology?
  4. Development process. Do you have strong, proven, efficient processes in place?
  5. Compliance, security, and intellectual property risk. What legal or regulatory risks impact development? Do you own all of the code?

After all, nothing changes the basic, underlying assumption that […]

The post Series A Funding: Why Outsourcing Should Be Part of Your Startup Growth Strategy appeared first on Distillery.

25 Jun 2019 9:49pm GMT

20 Jun 2019

feedDjango community aggregator: Community blog posts

How to Set Up a Centralized Log Server with rsyslog

For many years, we've been running an ELK (Elasticsearch, Logstash, Kibana) stack for centralized logging. We have a specific project that requires on-premise infrastructure, so sending logs off-site to a hosted solution was not an option. Over time, however, the maintenance requirements of this self-maintained ELK stack were staggering. Filebeat, for example, filled up all the disks on all the servers in a matter of hours, not once, but twice (and for different reasons) when it could not reach its Logstash/Elasticsearch endpoint. Metricbeat suffered from a similar issue: It used far too much disk space relative to the value provided in its Elasticsearch indices. And while provisioning a self-hosted ELK stack has gotten easier over the years, it's still a lengthy process, which requires extra care anytime an upgrade is needed. Are these problems solvable? Yes. But for our needs, a simpler solution was needed.

Enter rsyslog. rsyslog has been around since 2004. It's an alternative to syslog and syslog-ng. It's fast. And relative to an ELK stack, its RAM and CPU requirements are negligible.

This idea started as a proof-of-concept, and quickly turned into a production-ready centralized logging service. Our goals are as follows:

  1. Set up a single VM to serve as a centralized log aggregator. We want the simplest possible solution, so we're going to combine all logs for each environment into a single log file, relying on the source IP address, hostname, log facility, and tag in each log line to differentiate where logs are coming from (see the example log line after this list). Then, we can use tail, grep, and other command-line tools to watch or search those files, much as we might have done through the Kibana web interface previously.
  2. On every other server in our cluster, we'll also use rsyslog to read and forward logs from the log files created by our application. In other words, we want an rsyslog configuration to mimic how Filebeat worked for us previously (or how the AWS CloudWatch Logs agent works, if you're using AWS).
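
For reference, with rsyslog's default file output template, a line in the aggregated file will look roughly like the one below (the hostname and message are hypothetical; including the source IP address or facility explicitly would require a custom output template):

Jun 20 12:34:56 web1 myapp_django: ERROR Unhandled exception in /checkout/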

Disclaimer: Throughout this post, we'll show you how to install and configure rsyslog manually, but you'll probably want to automate that with your configuration management tool of choice (Ansible, Salt, Chef, Puppet, etc.).

Log Aggregator Setup

On a central logging server, first install rsyslog and its relp module (for lossless log sending/receiving):

sudo apt install rsyslog rsyslog-relp

As of 2019, rsyslog is the default logger on current Debian and Ubuntu releases, but rsyslog-relp is not installed by default. We've included both for clarity.

Now, we need to create a minimal rsyslog configuration to receive logs and write them to one or more files. Let's create a file at /etc/rsyslog.d/00-log-aggregator.conf, with the following content:

module(load="imrelp")

ruleset(name="receive_from_12514") {
    action(type="omfile" file="/data/logs/production.log")
}

input(type="imrelp" port="12514" ruleset="receive_from_12514")

If needed, we can listen on one or more additional ports, and write those logs to a different file by appending new ruleset and input settings in our config file:

ruleset(name="receive_from_12515") {
    action(type="omfile" file="/data/logs/staging.log")
}

input(type="imrelp" port="12515" ruleset="receive_from_12515")

Rotating Logs

You'll probably want to rotate these logs from time to time as well. You can do that with a simple logrotate config. Create a new file /etc/logrotate.d/rsyslog_aggregator with the following content:

/data/logs/*.log {
  rotate 365
  daily
  compress
  missingok
  notifempty
  dateext
  dateformat .%Y-%m-%d
  dateyesterday
  postrotate
      /usr/lib/rsyslog/rsyslog-rotate
  endscript
}

This configuration will rotate log files daily, compress older files, and rename the rotated files with the applicable date.
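
With the dateext, dateformat, and dateyesterday settings above, you'll end up with one compressed file per day alongside the live log, named something like this (dates are illustrative):

/data/logs/production.log
/data/logs/production.log.2019-06-18.gz
/data/logs/production.log.2019-06-19.gz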

To see what this logrotate configuration will do (without actually doing anything), you can run it with the --debug option:

logrotate --debug /etc/logrotate.d/rsyslog_aggregator

To customize this configuration further, look at the logrotate man page (or type man logrotate on your UNIX-like operating system of choice).

Sending Logs to Our Central Server

We can also use rsyslog to send logs to our central server, with the help of the imfile module. First, we'll need the same packages installed on each server that will forward logs:

sudo apt install rsyslog rsyslog-relp

Create a file /etc/rsyslog.d/90-log-forwarder.conf with the following content:

# Poll each file every 2 seconds
module(load="imfile" PollingInterval="2")

# Create a ruleset to send logs to the right port for our environment
module(load="omrelp")
ruleset(name="send_to_remote") {
    action(type="omrelp" target="syslog" port="12514")  # production
}

# Send all files on this server to the same remote, tagged appropriately
input(
    type="imfile"
    File="/home/myapp/logs/myapp_django.log"
    Tag="myapp_django:"
    Facility="local7"
    Ruleset="send_to_remote"
)
input(
    type="imfile"
    File="/home/myapp/logs/myapp_celery.log"
    Tag="myapp_celery:"
    Facility="local7"
    Ruleset="send_to_remote"
)

Again, I listed a few example log files and tags here, but you may wish to create this file with a configuration management tool that allows you to templatize it (and create each input() in a Jinja2 {% for %} loop, for example).
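
As a rough sketch, an Ansible/Jinja2 template for this file might loop over a hypothetical app_logs variable, with the target host and port templated out as well (all variable names here are illustrative):

# Poll each file every 2 seconds
module(load="imfile" PollingInterval="2")

# Send logs to the right port for our environment
module(load="omrelp")
ruleset(name="send_to_remote") {
    action(type="omrelp" target="{{ syslog_server }}" port="{{ syslog_port }}")
}

{% for log in app_logs %}
input(
    type="imfile"
    File="{{ log.path }}"
    Tag="{{ log.tag }}:"
    Facility="local7"
    Ruleset="send_to_remote"
)
{% endfor %}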

Be sure to restart rsyslog (i.e., sudo service rsyslog restart) any time you change this configuration file, and inspect /var/log/syslog carefully for any errors reading and/or sending your log files.
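
A simple end-to-end check (assuming the example paths and tags above): append a line to one of the watched files on the application server, then look for it on the log aggregator.

# On the application server
echo "rsyslog forwarding test $(date)" >> /home/myapp/logs/myapp_django.log

# On the log aggregator
tail -n 50 /data/logs/production.log | grep myapp_django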

Watching & Searching Logs

Since we've given up our fancy Kibana web interface, we need to search logs through the command line now. Thankfully, that's fairly easy with the help of tail, grep, and zgrep.

To watch logs come through as they happen, just type:

tail -f /data/logs/staging.log

You can also pipe that into grep, to narrow down the logs you're watching to a specific host or tag, for example:

tail -f /data/logs/staging.log | grep myapp_celery
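
Because the default log format also includes the sending hostname, you can filter on a host and a tag together (the hostname web1 here is hypothetical):

tail -f /data/logs/staging.log | grep 'web1 myapp_celery'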

If you want to search previous log entries from today, you can do that with grep, too:

grep myapp_django /data/logs/staging.log

If you want to search the logs for a few specific days, you can do that with zgrep:

zgrep myapp_celery /data/logs/staging.log.2019-05-{23,24,25}.gz

Of course, you could search all logs from all time with the same method, but that might take a while:

zgrep myapp_django /data/logs/staging.log.*.gz

Conclusion

There are myriad ways to configure rsyslog (and centralized logging generally), often with little documentation about how best to do so. Hopefully this helps you consolidate logs with minimal resource overhead. Feel free to comment below with feedback, questions, or the results of your tests with this method.

20 Jun 2019 5:00pm GMT

All Is Turned to Black

If the title worried you about my mental state, you can relax. It's about the Python code formatter Black!

Yesterday I finished converting the open source Python projects I maintain to use it.

I've followed the project from a distance for a while, and figured now was my time to try it out. It's not out of beta yet, but it's stable and gaining traction.
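
For reference, the core of such a conversion is just installing Black and running it over the code base, something like:

pip install black
black .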

Is the name some kind of "Black humo(u)r"?

The name confuses some people. It's a reference to Henry Ford, who quipped about the single colo(u)r option on his Model T:

Any customer can have a car painted any color that he wants so long as it is black.

Hence the project's logo bears a similarity to Ford's:

Black logo

Ford logo

Black similarly tries to be "one style fits all", with its only configuration option being line length (and you shouldn't change it!). I think this is key to its success as a code formatter, which should be a tool to prevent bike-shedding.

The Black Code Style

The Black documentation has an extensive guide on the code style it implements. In general, it tries to appease pycodestyle, the checker for PEP 8 compliance. This is great for me, since I've been running pycodestyle as part of Flake8 on my projects for years.

len(line) ?

Black's default line length is 88. Normally in Python I've found myself working with either a line limit of 80 or 120. The standard library has a limit of 79, based on 80-column terminals with space left over for diff markers.

The standard limit of 80 has apparently survived since the days of punch cards, and has caused flamewars ever since.

The idea behind Black's 88 characters is to allow up to 10% above the traditional 80, treating line length less as a hard limit and "more like a speed limit." The documentation recommends using the flake8-bugbear extension for Flake8 for line length checking, so I've set that up.
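
For reference, a Flake8 configuration along those lines might look like the snippet below (the exact select/ignore lists are my assumption; check the Black and flake8-bugbear documentation for their current recommendation):

[flake8]
# bugbear's B950 allows roughly 10% over max-line-length, matching Black's 88,
# so pycodestyle's hard E501 check is disabled in its favour
max-line-length = 80
select = C,E,F,W,B,B950
ignore = E203,E501,W503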

'"' or "'" ?

Black also defaults to double quotes over single quotes. This was a slightly contentious decision.
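
In practice, that means single-quoted strings get rewritten with double quotes (strings that already contain double quotes are left alone to avoid extra escaping). A tiny before/after example:

# before
name = 'Black'
print('Hello, %s' % name)

# after running Black
name = "Black"
print("Hello, %s" % name)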

Personally, I've mostly defaulted to typing single quotes over the years, even using flake8-quotes on one project to enforce this. But I'm not bothered - it's a small thing really, and I'm happy to give up single quotes.

Checklist

My projects are all fairly homogeneous so converting them was a matter of going through the same checklist on each. For an example PR, see my small-ish project pytest-randomly.

The steps were:

Fin

I consider Black a success for Python. I've spent many hours re-styling code to make it more comprehensible, and giving style feedback in code review. I'm looking forward to doing 99% less of that.

Hope this little review helps you learn about Black,

-Adam

20 Jun 2019 4:00am GMT