• An App Budget for America

    If you’ve been paying any attention to the news this week, you will of course know about the major snafu in the Iowa Democratic Caucus. It is now largely being blamed on what appears to be a bug in an app developed by a for-profit, third-party company. I’ve been reading the news around this with quite a bit of interest, both because I care about our democratic process and because I have some idea of what the app developer is going through. I’ve been an engineer on apps since the early days of the Palm Pilot, way back in 1999. (They weren’t even called apps back then!) As such, I’ve been part of hundreds of 1.0 product launches, many of which have been amazingly successful, but I’ve also had my share of painful and embarrassing failures, so I certainly cannot throw too many stones from within my glass house.

    However, the thing that really caught my attention was the reported development cost of the app. It has been widely reported that the Iowa Democratic Party paid the app developer $63,000 for this product. In particular, I noticed more than a few news outlets reporting that it was a large sum of money. To an individual, $63,000 is certainly a lot of money, but in the context of an app development project, it is a very meager budget.

    As app developers, we don’t often talk about all the costs that go into a platform like this, but given the gravity of this app, I consider this to be far too small a budget. If you’ve ever wondered what goes into developing an app and how much it might cost, I think this is a great opportunity to break it down. If we are following any sort of sane development model, we will do the following:

    1. Gather the user and technical requirements of the platform
    2. Design the user interface and user experience so that precinct captains who are going to have to use the app can do so with very little assistance
    3. Develop the backend server and database that the apps will talk to
    4. Develop some sort of dashboard or web interface for retrieving the data that has been uploaded by the apps
    5. Develop an iOS app
    6. Develop an Android app
    7. Test the apps and server in a controlled quality assurance environment
    8. Deploy the apps and server to a production environment
    9. Have a beta test period to validate the system is operating correctly and that users can download and utilize the product
    10. Launch the product!
    Given the above list of items needed to create the platform, we can reasonably estimate that, at a minimum, the team working on this should have consisted of an iOS app developer, an Android app developer, a server developer, a designer, a project manager and a quality assurance engineer. For an app like this, that's still a pretty bare-bones team of six people.

    Now that we have the staffing established, let’s figure out how much time these folks are going to be working on the project. First, we need to establish a rough timeline. For an app like this, I would roughly estimate that it would consist of a month of up front work defining the requirements and interface followed by 2-3 months of active development, followed by at least 1 month of quality assurance. At that point, the app is hopefully well written and tested and ready to move to a beta test period which should, at a minimum, take 2-4 weeks. In the end we should have a project timeline that is going to run at least 5-6 months.  Now we need to figure out how much time each person on this team is going to spend working on this product.

    Let’s start with the project manager. In general, the project manager is going to be the source of continuity through the entire project. They are going to work heavily up front with the client (in this case the Iowa Democratic Party) and the designer defining the requirements and translating those requirements into a user interface that will meet the needs of the diverse population that will use this app. The users of this app are going to have an incredibly broad range of technical sophistication so it will be important that the UI/UX be done well. The project manager will then transition to part time during the development phase as they coordinate the many moving parts of the project. As the development phase wraps up, they will then re-engage as the product goes into the testing, deployment and beta testing phases. Given our assumptions above, they would likely spend about 1 month working with the client and the designer up front to define the product, then transition to approximately 1/4 time during the development phase and then transition to 1/2 time during the testing phase, followed by a full time engagement during beta testing. Roughly speaking, the project manager would need to be budgeted for about 3.25 months over the course of the project.

    While the project manager’s involvement is a bit dynamic, the designer is a little easier to estimate. They will likely spend the majority of their time at the beginning of the project creating the user interface and user experience for the app, and then they will spend a little time during the quality assurance phase ensuring that their initial designs were successfully translated to the product. Generally speaking, the designer should be budgeted for about 1 month during the project.

    Next up is the server/database developer. This one is a little more difficult to estimate since it is unclear where the caucus data needs to go once it’s uploaded, but at the very least we know that it needs to provide login and authentication functionality as well as data storage and reporting. In general, the server developer will need to define the interfaces that the apps talk to as well as provide development, test and production environments for the different phases of the project. As a conservative estimate, let’s assume that the server developer works for approximately 2 months during the course of the project.

    The app developers on the project will obviously need to be fully active during the primary development phase, but even more importantly, they need to be fully engaged during the testing phase and at least partially available during the beta testing phase. As a conservative estimate, we can budget them for approximately 3.5 months of the project. Since we need an iOS and an Android app, we need to budget a combined 7 months for the project.

    Finally, we need to consider the QA engineer’s contribution. In a perfect world, the quality person is a team of people and they are at least partially engaged from day 1 of the project. However, we will employ a conservative estimate and budget only a single person for just the testing phase of the project, which is 1 month.

    So with all of that out of the way, we can come up with the following time estimate:

    1. Project manager - 3.25 months
    2. Designer - 1 month
    3. Server developer - 2 months
    4. iOS developer - 3.5 months
    5. Android developer - 3.5 months
    6. Quality assurance engineer - 1 month
    That's a total of 14.25 person-months of effort. If you consider a 40-hour work week (roughly 160 working hours per month), that equates to approximately 2,280 hours of effort. To calculate a total project cost, you would simply multiply that number of hours by whatever your company’s hourly rate is. For this exercise, I will admit that I'm making some assumptions because I haven't seen the app in action. However, I've been doing this a long time, and given the descriptions of the functionality I've heard reported in the news, I'm comfortable with my estimates. 
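
    To make the arithmetic concrete, here is a quick back-of-the-envelope sketch. The person-month figures come from the list above; the 160 hours per month and the sample blended rates of $100-$200 per hour are hypothetical assumptions, purely for illustration.

    //Back-of-the-envelope cost sketch. The person-month figures come from the
    //estimates above; the hours-per-month figure and the sample hourly rates
    //are hypothetical assumptions for illustration only.
    #include <cstdio>

    int main()
    {
        const double personMonths  = 3.25 + 1.0 + 2.0 + 3.5 + 3.5 + 1.0; //14.25
        const double hoursPerMonth = 4.0 * 40.0;                         //~160 hours
        const double totalHours    = personMonths * hoursPerMonth;       //~2,280 hours

        const double rates[] = {100.0, 150.0, 200.0};                    //hypothetical $/hour
        for (double rate : rates)
            std::printf("At $%.0f/hour: $%.0f\n", rate, totalHours * rate);

        //What the reported $63,000 budget implies per hour of that effort.
        std::printf("Implied rate on a $63,000 budget: $%.2f/hour\n", 63000.0 / totalHours);
        return 0;
    }

    Even at the low end of those hypothetical rates, the total lands well north of $200,000, and the reported budget works out to less than $30 per hour of estimated effort.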

    I cannot say exactly what development plan this company followed, and I cannot say how they staffed it (although some details are starting to emerge). I also cannot say whether the app developer had a longer-term play in mind in terms of a profit model. For example, they may have charged the Iowa Democratic Party less than the actual cost of development with the intention of reusing the platform for other states and recouping their initial costs. (The Nevada Democratic Party apparently paid them for an app as well, so this seems likely.) The one thing I can definitively say, however, is that for an app of this level of importance, $63,000 is not a very large budget. Given the vast amounts of money that campaigns raise, this amount would almost be a rounding error.

  • Seven Secret Benefits of Remote Work Revealed!

    Companies that embrace remote teams can reap numerous benefits: employee engagement drastically improves, employee retention increases, and the available talent pool grows immensely when not tied to a single geographical location.

    There are many benefits to remote employees as well. Some are obvious, but some are not so obvious. Below are seven benefits to remote employees that you may not know!

    1. Your compost and garbage bins will be emptier. When your home refrigerator is your work refrigerator, leftovers don't spoil nearly as often which means less going into your waste bins.
    2. Streaming movies will load more quickly and stutter less. The high-speed Internet you need for your video conferencing just so happens to also help your Netflix streaming in the evenings.
    3. You will help fight piracy. There is a special place in hell for porch pirates (people who steal holiday gifts from people's doorsteps). During the holidays you will feel so much better knowing that you will definitely be home when UPS knocks on the door to deliver packages.
    4. You will be prepared when fashion styles from the previous decade come back in vogue. Because you don't have to keep re-investing in clothing for meetings or to impress co-workers, your wardrobe will last quite a bit longer. Eventually, that neon shirt that you wear on days when you don't have a video conference call will suddenly be chic.
    5. You will save immense amounts of money on personal hygiene products. Getting low on razors? You can push that stubble a little longer. Running out of foundation? No video calls today so no problem. Did you forget to get deodorant the last time you were at the store? Nobody can smell you on a conference call!
    6. If you have children, their grades will go up. Because you are consistently available to chaperone events at school, you will develop a rapport with your children's teachers, which will inevitably lead to more lenient grading and better engagement at school.
    7. You will improve national security. If you don't have a commute, you're not burning fossil fuels trying to get to work which means your country will be less reliant on foreign oil reserves.
     
  • Time, Relativity and Distributed Companies

    “The only reason for time is so that everything doesn’t happen at once.” -Albert Einstein

    One of the most difficult parts of communication within a fully distributed company is dealing with timezones. I can’t count the number of times that someone has emailed me asking “Can we have a meeting at 10 a.m.?” The obvious question here is which 10 a.m. are you asking about? Are you asking about your 10 a.m. or my 10 a.m.?  Unfortunately, it feels pedantic to ask the person to clarify what they mean, but it matters if you want everyone to show up at the same moment in time!

    As I’ve mentioned before, we use Slack in place of meetings for a lot of internal communication, but we do still need to jump on phone calls from time to time. One of the skills that I’ve had to learn when trying to set up meetings is to be very explicit about the time that I mean. Think it’s easy? Try this quick little quiz. Figure out what time it is right now in the following U.S. cities, without using a map:

    1. Las Vegas
    2. Nashville
    3. New Orleans
    4. Phoenix
    5. Detroit
    6. Cleveland
    7. Louisville
    8. Pittsburgh
    9. Milwaukee
    10. Boise
    I'm betting that while you might have fairly reasonable guesses, you're not 100% positive on all of them. A few of them (like Phoenix) are particularly tricky! My recommendation is that when working with people in different timezones, it's best if you declare the time and timezone not only for yourself but for the person you're trying to invite. For example, let's say that you are located in Austin, Texas and I am located in Portland, Oregon, and I want us to have a meeting at 11 in the morning. (Portland is in the Pacific Time Zone and Austin is in the Central Time Zone). I would probably ask you something like this:

         “Are you available for a call at 11:00 a.m. Pacific (1:00 p.m. Central)?"

    By communicating it this way, I’m communicating to you that:

    1. We are not both in the same timezone
    2. I would like to talk to you at 1:00 in the afternoon
    3. It will still be 11:00 in the morning for me
    This might seem like an obvious thing, but by being explicit, it helps remove any ambiguity and improves the overall quality of the communication. It also removes any assumption that the person you're communicating with understands what timezones different cities are located in. The most important skill that you can develop in a distributed organization is your ability to communicate, and one of the easiest ways to improve is to become better at communicating time and timezones when talking to people.
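
    If you script reminders or calendar invites, the same principle applies in code: pin the time to an explicit IANA time zone instead of relying on whatever the local machine happens to be set to. Here is a minimal C++20 sketch of the Portland/Austin example above (the date is arbitrary and the snippet is purely illustrative, not something we actually ship):

    #include <chrono>
    #include <format>
    #include <iostream>

    int main()
    {
        using namespace std::chrono;

        //11:00 a.m. Pacific on an arbitrary date, pinned to an explicit zone.
        zoned_time pacific{"America/Los_Angeles",
                           local_days{2020y/February/10} + 11h};

        //The same instant, expressed in Central time.
        zoned_time central{"America/Chicago", pacific.get_sys_time()};

        std::cout << std::format("{:%I:%M %p %Z}", pacific) << " / "
                  << std::format("{:%I:%M %p %Z}", central) << '\n';
        //Prints: 11:00 AM PST / 01:00 PM CST
    }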
  • Distributed Companies Are Real

    When we first set out to build Silverpine, we didn't really have much of a plan. All we knew was that the fates had aligned and that it was our time to set out on our own. From our very first day, we have managed to bootstrap the business, which was ultimately very beneficial; however, bootstrapping is hard. Very hard. While we grappled with unknown cashflows and even more unknown project pipelines, we knew we had to scrimp and save and keep our costs as low as we possibly could. One major way we were able to do that was by making Silverpine a "virtual" business, in that we had no physical office space. It also didn't hurt that neither my partner Ryan nor I wanted a commute, so it definitely felt like a win/win situation.

    For the first few years of our existence, our staff consisted of only Ryan and myself and an occasional subcontractor or two. Working remotely became an unstated, simple-to-implement company policy that we grew to appreciate, and the freedom it gave us quickly became a de facto benefit. As we grew as a company, however, the true value began to emerge.

    When we finally hired our first full-time employee, working remotely was still an implied benefit. At the same time, we started noticing a trend that many of the best engineers and developers we knew were explicitly looking for new positions with significant remote work opportunities. However, when our first employee notified us that she was going to move to a rural area, it truly started to dawn on us what it meant for recruiting and retention. Suddenly, this quirky company policy, which had just organically happened, had become an important pillar of our company culture.

    At that point, Ryan and I decided that we were going to commit to Silverpine being a fully distributed organization. We abandoned any intention of developing a physical footprint and started viewing our evolving company through that lens. As we continued to grow and hire, I had to unlearn some of the things that had been ingrained in me from my time in the corporate world and from my MBA classes. I had to really dig in to understanding the tradeoffs of being distributed, partially because we needed to adopt tools and policies that would work well for remote employees, but also because we needed to be able to speak to our clients about how we were different from similar agencies and ultimately, why our distributed nature would benefit them.

    For a long time, whenever a prospective client would ask us where we were located, I would make some sort of joke that we were following the "IBM model," even though it wasn't really an accurate comparison. I would then do some general hand waving about what that meant, but more often than not, I was left with the distinct feeling that we were sometimes viewed as not being a legitimate company. Because of my approach to communicating our structure, I'm certain that we lost more than a couple of bids on projects.

    Fortunately, as time progressed, many other companies started to legitimize remote work. Companies like Automattic, Basecamp, InVision and Zapier have literally written the book on how to have a remote team, and they have shown that it can work at scale. People have started to notice how these companies operate and thrive, and maybe most importantly, many of the best engineers and developers have started to view remote opportunities as a non-negotiable job requirement. I have run into people time and again at conferences and other work-related events who explain that having a remote position is oftentimes more important than a salary bump. That means that there is an actual, tangible economic value to a company that embraces remote work.

    For Silverpine, we have become better at articulating the legitimacy of our remote nature in a way that better portrays it as a competitive advantage. We talk about the engagement and happiness levels of our employees. We talk about the quality of communication that our team practices on a daily basis. And we talk about lower base costs, which translate to lower project costs. We also occasionally talk about the tools, the processes and the intentionality that help craft our company culture. All of this is important in explaining our story and our organization because there are still plenty of people with an incorrect understanding of remote companies.

    I am convinced that the model we stumbled upon (but ultimately embraced) is a blueprint for long term success. It allows us a flexibility and nimbleness that other corporations simply can't match, and in the ever-changing world that we live in, flexibility is a survival trait. As the Japanese proverb states: "The Bamboo that bends is stronger than the Oak that resists."

    We are definitely still learning and adapting how we function and operate, but I no longer act sheepish or apologize for being a remote company. I am proud of what we are building and what Silverpine has become. (It also doesn't hurt that our track record is pretty great!) So, if you are thinking about working at a remote company or thinking about adopting remote-friendly policies, don't approach it as some oddball thing. Take some time to read about what other companies are doing and how they are doing it, and recognize that distributed companies are real.

  • Tools for a Distributed Software Agency

    One of the things that I am most proud of is that Silverpine is a 100% distributed company. Often when people find out that we are fully remote, they will ask curiously about what tools we use to work together. This is completely understandable because the importance of having the right tool set is magnified for remote companies. We understand this innately and as such we are constantly evaluating our software stack. The following list represents the software that powers our business. (I have intentionally omitted some of the lower level development tools like Xcode and Android Studio.) The list is broken into four primary classifications: communication, development, project execution, and finance.

    Communication

    Slack 

    Before Slack, we used a hodgepodge of messaging tools like AIM, Google Chat and even old-school SMS. It was horrible. Slack is the single most important tool that we use to communicate with each other and with our clients. All of our employees and contractors use it extensively every day, and even though I think that there should be some middle ground in their pricing between the paid and the pro plans, I can't imagine trying to work remotely without it.

    Webex

    Let me just preface this by clarifying that I think that every single conference calling platform is terrible. I have used them all. From Zoom to Google Hangouts to AT&T Connect, they are just barely workable. Besides the all-too-common call drops, they also all seem to suffer from ridiculous installation processes and byzantine user interfaces. (Does this yellow button state mean my microphone is on mute, or can they hear me?)

    That being said, we have been using Webex for a very long time; not because it is good, but because it is better than the alternatives. And for our enterprise clients, it is somewhat of a known entity so we seem to spend less time per call doing the “can you hear me” dance. I wouldn’t say that I recommend Webex. It’s just what we use.

    Dropbox Pro

    We have been using Dropbox on personal plans for quite a while, but we recently decided to standardize on Dropbox Pro for file sharing. All of our projects have quite a bit of documentation, graphical assets and other large files that aren't well suited for source control tools. Dropbox allows us to create per-project file drops that we can easily access as well as share with other people when appropriate. We almost switched to Box.com because their pro plans have unlimited storage, but ultimately decided it would be less of a transition headache to just upgrade our existing Dropbox plans.

    G Suite

    We have been using Google for our email and calendar services for so long that our silverpinesoftware.com domain is still functioning under the original beta operating agreement. If G Suite disappeared I honestly wouldn't even know where to start looking for a replacement. File this one under "it just works."

    Development

    InVision

    One of Silverpine's guiding design principles is that every user interface needs to have a beautiful "feel" to it, and that you simply can't judge the feel of an app until you can hold it in your hand and interact with it. Because of this philosophy,  we have refined our development process over time to rely heavily on InVision to prototype the UI and UX of our apps before we ever even start writing code. The amount of time and pain it saves both us and our clients cannot be overstated. If you design for mobile, you really should be using InVision or something like it.

    GitHub

    If you write software, you should be using a source control platform. If you need a source control platform, you should be using GitHub. If you're using something else, I'm sure you have a reason for it, but it's probably not a very good reason. (All of our projects use GitHub repositories, so when they changed their pricing model to be per user rather than per repository, it made our lives a lot easier.)

    Azure DevOps

    This one might surprise some people, but a couple years ago we transitioned to what is now known as Microsoft Azure DevOps for our automated build system and have been using it ever since. Prior to Azure DevOps we had used a variety of tools including TestFlight (bought by Apple), Fabric (bought by Twitter, then bought by Google), and BuddyBuild (ran out of money). Due to intense consolidation in that particular sector, we were frequently having to retroactively change our toolset which was both time consuming and costly. A friend of mine who works on the Microsoft tools team encouraged us to give Azure DevOps a try, and we have been extremely happy with that decision. Azure DevOps supports both iOS and Android, is massively configurable, has 24/7 support and most importantly, is backed by one of the largest companies in the world so it won't be disappearing any time soon. If you need an automated build system and haven't taken a look at Azure, I highly recommend at least kicking its tires.

    Project Execution

    Basecamp

    For many years, we wandered in the desert of project management, largely piggybacking on whatever project management tools our clients happened to be using at the time. As such, we have used everything from Jira to Asana to Microsoft Excel to track projects and tasks. However, in the past year we have implemented Basecamp as our standard internal project tracking tool. One of the things I like best about Basecamp is that it has clearly been thoughtfully designed. Not only is it powerful, but its design somehow works to ensure that it doesn't become overly burdensome in the same way that other similarly complex tools do.

    Lighthouse

    If there was one piece of web software that I would invest internal Silverpine resources on, it would be a lightweight bug tracking tool. There just aren't many platforms out there that strike a balance between utility and ease of use while erring on the side of ease of use. For now, Lighthouse fits the bill for us in that regard; however, I'm not sure how much longer it will be around. There hasn't been any significant development done on it in the 6+ years that we've been using it, so I'm not sure I would necessarily recommend it. That being said, it does what we need bug tracking software to do, it does it well, and I haven't found a replacement. If you have any personal favorites, please let me know.

    Finance

    Blinksale

    Silverpine is a services business and sending invoices to our customers is literally how we are able to make money. Blinksale is the tool we use to send those invoices and look like we are professionals in the process. While it isn't a complex tool, it expertly does what we need it to do: send and track professional looking invoices. If you send invoices to clients, you really should be using a tool like Blinksale because people can tell when you don't.

    Quickbooks

    Nobody really loves Intuit. They have created not one, but two near monopolies with TurboTax and Quickbooks. However, if you run a business, you need to track your finances in a way that your CPA can help you with your taxes at the end of the year, and if you tell your CPA that you use anything other than Quickbooks, they will not be happy with you and they will very likely take longer to do your taxes which means you will end up with a higher bill from them. That is the reality of Quickbooks and that is why we use it.

    Gusto

    If you have employees or sub-contractors that you need to pay, you really should be using Gusto. The folks at Gusto are wizards when it comes to dealing with payroll taxes, W-9s and a great many other things that I simply don't have to worry about because we use their service. Not only is the Gusto platform super easy to use, but their customer service team is actually proactive in notifying us of upcoming tax law changes that might affect us. I am continually in awe of how great Gusto is and cannot say enough good things about them.

     

  • Unintended Consequences

    Disclaimer: This post comes from my old blog back in 2004. I’m reposting it here so that I don’t lose the content. The source was hand-written HTML so the formatting probably appears a bit off.

    About seven years ago I wrote some code to do Mu-Law and A-Law compression. About six years ago, I decided to publish an article along with the source code for it. You can find it here. Anyway, the other day I received an email from someone who had taken it and modified it for what he was doing. In doing so, he found a piece of misinformation that has been in my article since I originally published it. Not a big deal, and I intend to rectify the issue. However, as we chatted over email, I asked him what he was using Mu-Law/A-Law compression for. Here is a clip from our email:

    > So if you don’t mind me asking, what are you working on that has 13 bit
    > unsigned audio input?

    Sure. I am designing a system that monitors lots of radio stations to capture and
    detect Emergency Alert System activations - those ugly tones and occasional
    voice messages you hear on the radio. The system has some rather incredible
    shortcomings for a critical system in 2005. When the emergency management
    office triggers an alert they have no way of knowing whether or not radio stations
    actually broadcast the alert. Sometimes the system fails - too often. So our
    system listens to the stations and sends a report back to Emergency HQ. In
    most cases an exception report that shows which stations did not properly
    send out the alert. So if the dam is breaking or the nuke is going critical they
    can try again, use the phone, send a helicopter or something.

    Whoa. Code that I originally wrote to compress audio in children’s games is now being used to help monitor emergency situations. Talk about your unintended consequences!

    -Jon

  • Translating Hardware Exceptions to C++ Exceptions

    Disclaimer: This post comes from my old blog back in 2004. I’m reposting it here so that I don’t lose the content. The source was hand-written HTML so the formatting probably appears a bit off.

    No matter how careful a programmer you are, there will always be times when a hardware exception occurs in your code. Perhaps a third party component was the culprit. Perhaps a co-worker broke something. Or maybe it was Microsoft itself not playing fair with its documentation and/or implementations. Whatever the case, it is often very useful to be able to capture a run-time exception that was generated by the CPU. Sure, you can use a catch(...) to be your fail-safe, but wouldn't it be great to be able to convert that exception that was generated by the hardware into a C++ exception? I created this class in order to do that very thing. In fact, this class was the basis for the super assert that I created, because I found that I could cause a hardware exception any time I wanted, and by using this C++ hardware exception container, I could access each thread's stack frame at run-time. This would eventually enable me to perform a stack trace inside of an assert, but I will explain that more in a different tutorial.
    Anyway, I hope that this is useful to someone. I spent a while digging around in the mire that is Microsoft's documentation before I put this together. Perhaps this will save someone else time in the future.
    Enjoy.
    -BossHogg

    #ifndef HARDWARE_EXCEPTION
    #define HARDWARE_EXCEPTION 1

    #include "windows.h"

    enum HWExceptionType
    {
        eIllegalMemoryAccess    = EXCEPTION_ACCESS_VIOLATION,
        eUnexpectedBreakpoint   = EXCEPTION_BREAKPOINT,
        eDataTypeMisalignment   = EXCEPTION_DATATYPE_MISALIGNMENT,
        eSingleStepInstruction  = EXCEPTION_SINGLE_STEP,
        eArrayBoundsExceeded    = EXCEPTION_ARRAY_BOUNDS_EXCEEDED,
        eDenormalFloat          = EXCEPTION_FLT_DENORMAL_OPERAND,
        eFloatDivideByZero      = EXCEPTION_FLT_DIVIDE_BY_ZERO,
        eFloatInexactResult     = EXCEPTION_FLT_INEXACT_RESULT,
        eFloatInvalidOperation  = EXCEPTION_FLT_INVALID_OPERATION,
        eFloatOverflow          = EXCEPTION_FLT_OVERFLOW,
        eFloatStackCorrupted    = EXCEPTION_FLT_STACK_CHECK,
        eFloatUnderflow         = EXCEPTION_FLT_UNDERFLOW,
        eIntDivideByZero        = EXCEPTION_INT_DIVIDE_BY_ZERO,
        eIntOverflow            = EXCEPTION_INT_OVERFLOW,
        ePrivilegedInstruction  = EXCEPTION_PRIV_INSTRUCTION,
        eUncontinuableException = EXCEPTION_NONCONTINUABLE_EXCEPTION
    };

    class HWException
    {
     public:
        HWException(HWExceptionType aType, EXCEPTION_POINTERS* pExp) :
            itsCategory(aType),
            itsLocation((DWORD_PTR)pExp->ExceptionRecord->ExceptionAddress),
            itsPointers(pExp)
        {
        }

        HWExceptionType     GetCategory()   const {return itsCategory;}
        DWORD_PTR           GetLocation()   const {return itsLocation;}
        EXCEPTION_POINTERS* GetSysPointer() const {return itsPointers;}

     protected:
        HWExceptionType     itsCategory;
        DWORD_PTR           itsLocation;
        EXCEPTION_POINTERS* itsPointers;
    };

    static void HWTranslateException(unsigned int u, EXCEPTION_POINTERS* pExp)
    {
        throw HWException((HWExceptionType)u, pExp);
    }

    #endif

    ///////////////////////////////////////////////////////////////////////
    // Example usage:
    ///////////////////////////////////////////////////////////////////////

    #include "windows.h"
    #include <eh.h>          //_set_se_translator
    #include "HWException.h"

    int main()
    {
    	//Note, setting the exception translator must be done
    	//on a per thread basis, and the code must be compiled
    	//with /EHa so that structured exceptions can be caught.
    	_set_se_translator(HWTranslateException);

    try {
    	//This will cause an access violation
    	char* ptr = NULL;
    	*ptr = 5; 	
    }
    catch (HWException& e)
    {
    	//We can now know both the type and the
    	//memory location of the instruction that
    	//caused the exception.  Cool!
    
    	HWExceptionType exceptionType = e.GetCategory();
    	DWORD_PTR address = e.GetLocation();
    }
    catch (...)
    {
    	//If we got here, then it was some other kind
    	//of C++ exception...
    }
    
    return 0;
    

    }

  • CPU Detection Code

    Disclaimer: This post comes from my old blog back in 2004. I’m reposting it here so that I don’t lose the content. The source was hand-written HTML so the formatting probably appears a bit off.


    I dug this code up from a project that I worked on a long time ago. Unfortunately, it is woefully out of date, especially with respect to the latest P4 processors. I also don't have a large suite of machines on which to test this, so while I have verified a good number of these checks, I haven't verified them all. Also missing from the list are any AMD processors, since my old companies didn't explicitly support AMD. Oh, well. As always, this code is to be used at your own risk, and I guess with this particular set of code, that means a little more. Anyway, I hope someone finds this interesting, and if you have any questions, feel free to ask.

    -BossHogg



    #include "windows.h"


    bool QueryCPUID();
    bool QueryMMX();
    bool QueryHyperThreading();
    void QueryVendorString(char* string);
    bool QuerySerialNumber(char* string);
    void GetCPUInfoString(char* string);
    unsigned long QueryCacheSize();
    unsigned long QueryCPUCount();
    unsigned char QueryCPUModel();
    unsigned char QueryCPUFamily();
    unsigned char QueryCPUStepping();
    unsigned char QueryCPUType();


    bool Is8086()
    {
    int is8086=0;

    __asm {

    pushf
    pop ax
    mov cx, ax
    and ax, 0fffh
    push ax
    popf
    pushf
    pop ax
    and ax, 0f000h
    cmp ax, 0f000h
    mov is8086, 0
    jne DONE_8086_CHECK
    mov is8086, 1

    DONE_8086_CHECK:
    };

    return !!is8086;
    }

    bool Is80286()
    {
    int is80286=0;
    __asm {
    smsw ax
    and ax, 1
    or cx, 0f000h
    push cx
    popf
    pushf
    pop ax
    and ax, 0f000h
    mov is80286, 1
    jz DONE_80286_CHECK
    mov is80286, 0

    DONE_80286_CHECK:
    };

    return !!is80286;
    }


    bool Is80386()
    {
    int is80386=0;
    __asm {
    pushfd
    pop eax
    mov ecx, eax
    xor eax, 40000h
    push eax
    popfd
    pushfd
    pop eax
    xor eax, ecx
    mov is80386, 1
    jz DONE_80386_CHECK
    mov is80386, 0

    DONE_80386_CHECK:
    };

    return !!is80386;
    }

    bool QueryCPUID()
    {
    int hasCPUID=0;

    __asm
    {
    pushfd
    pop eax
    mov ecx, eax
    and ecx, 0x00200000
    xor eax, 0x00200000
    push eax
    popfd
    pushfd
    pop eax
    and eax, 0x00200000
    xor eax, ecx
    mov hasCPUID, eax
    };

    return !!hasCPUID;
    }

    bool QueryMMX()
    {
    bool canDoMMX=false;
    __asm
    {
    mov eax, 1 ; request for feature flags
    _emit 0x0F ; CPUID on Pentiums is 0f,a2
    _emit 0xA2
    test edx, 0x00800000 ; is MMX technology Bit(bit 23)in feature
    jz DONE_MMX_CHECK ; flags equal to 1
    mov canDoMMX,1
    DONE_MMX_CHECK:
    };

    return canDoMMX;
    }

    bool QueryHyperThreading()
    {
    unsigned int regEdx = 0;
    unsigned int regEax = 0;
    unsigned int vendorId[3] = {0, 0, 0};

    if (!QueryCPUID())
    return false;

    __asm
    {
    xor eax, eax // call cpuid with eax = 0
    cpuid // Get vendor id string
    mov vendorId, ebx
    mov vendorId + 4, edx
    mov vendorId + 8, ecx

    mov eax, 1 // call cpuid with eax = 1
    cpuid
    mov regEax, eax // eax contains family processor type
    mov regEdx, edx // edx has info about the availability of hyper-Threading
    }


    if (((regEax & 0x0F00) == 0x0F00) || (regEax & 0x0F00000))
    {
    if (vendorId[0] == 'uneG' && vendorId[1] == 'Ieni' && vendorId[2] == 'letn')
    {
    return !!(regEdx & 0x10000000);
    }
    }

    return false;
    }


    void QueryVendorString(char* string)
    {
    char vendorId[12];
    __asm{
    mov eax, 0 ; request for feature flags
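
    For what it's worth, 64-bit MSVC no longer supports inline assembly at all, so on a modern toolchain the __cpuid intrinsic from <intrin.h> is the way to go. Here is a minimal sketch (not from the original 2004 listing) that reads the vendor string the same way QueryHyperThreading does above:

    //Modern alternative: read the CPUID vendor string with the __cpuid
    //intrinsic instead of inline assembly.
    #include <intrin.h>
    #include <cstring>
    #include <cstdio>

    int main()
    {
        int regs[4] = {0};        //EAX, EBX, ECX, EDX
        __cpuid(regs, 0);         //leaf 0: max leaf number plus vendor string

        char vendor[13] = {0};
        std::memcpy(vendor + 0, &regs[1], 4);   //EBX
        std::memcpy(vendor + 4, &regs[3], 4);   //EDX
        std::memcpy(vendor + 8, &regs[2], 4);   //ECX
        std::printf("CPU vendor: %s\n", vendor);
        return 0;
    }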
  • MCI CD Control

    Disclaimer: This post comes from my old blog back in 2004. I’m reposting it here so that I don’t lose the content. The source was hand-written HTML so the formatting probably appears a bit off.

    This is the MCI control code that I wrote for my UglyCD player. It is fairly full featured, but if you need more, feel free to modify it to your needs. I have neglected the error checking code, and every call to mciSendCommand should really be checked for its return value. As usual, this code is usable at your own risk. If you have any questions, you are always free to ask.

    -BossHogg

    #ifndef MCI_CONTROL
    #define MCI_CONTROL 1
    

    #include "windows.h"      //MCIDEVICEID, BYTE, MCI declarations

    class MCIControl
    {
     public:
          MCIControl();
          ~MCIControl();

          int     GetNumberOfTracks();
          void    Resume();
          void    Pause();
          void    Play();
          void    Stop();
          void    OpenDoor();
          void    CloseDoor();

          void    Goto(int track, int minute, int second);

          int     GetCurrentTrack();
          int     GetCurrentMinute();
          int     GetCurrentSecond();

     protected:
          void    Init();
          void    SetTimeFormat();
          void    GetPosition(BYTE* track, BYTE* min, BYTE* sec);

     private:
          MCIDEVICEID   itsMCIDevice;
    };

    #endif




    #include "windows.h"
    #include "MCIControl.h"

    MCIControl::MCIControl() :
    itsMCIDevice(0)
    {
    Init();
    SetTimeFormat();
    }

    MCIControl::~MCIControl()
    {
    MCI_GENERIC_PARMS Info;

    Info.dwCallback=0;
    mciSendCommand(itsMCIDevice, MCI_CLOSE, MCI_NOTIFY, (DWORD_PTR)&Info);
    }

    void MCIControl::Resume()
    {
    MCI_PLAY_PARMS Info;
    BYTE track,minute,second;

    GetPosition(&track, &minute, &second);
    Info.dwTo=0;
    Info.dwCallback=0;
    Info.dwFrom = MCI_MAKE_TMSF(track,minute,second,0);

    mciSendCommand(itsMCIDevice, MCI_PLAY, MCI_FROM|MCI_NOTIFY, (DWORD_PTR)&Info);
    }

    void MCIControl::Pause()
    {
    MCI_GENERIC_PARMS Info;

    Info.dwCallback = 0;
    mciSendCommand(itsMCIDevice, MCI_PAUSE, MCI_NOTIFY, (DWORD_PTR)&Info);
    }

    void MCIControl::Goto(int track,int minute, int second)
    {
    MCI_PLAY_PARMS Info;
    Info.dwCallback=0;
    Info.dwTo=0;
    Info.dwFrom = MCI_MAKE_TMSF(track,minute,second,0);

    mciSendCommand(itsMCIDevice, MCI_PLAY, MCI_FROM|MCI_NOTIFY, (DWORD_PTR)&Info);
    }


    void MCIControl::Play()
    {
    MCI_PLAY_PARMS Info;
    Info.dwCallback=0;
    Info.dwTo=0;
    Info.dwFrom = MCI_MAKE_TMSF(0,0,0,0);

    mciSendCommand(itsMCIDevice, MCI_PLAY, MCI_FROM|MCI_NOTIFY, (DWORD_PTR)&Info);
    }

    void MCIControl::Stop()
    {
    MCI_GENERIC_PARMS Info;
    Info.dwCallback = 0;
    mciSendCommand(itsMCIDevice, MCI_STOP, MCI_NOTIFY, (DWORD_PTR)&Info);
    }

    void MCIControl::OpenDoor()
    {
    MCI_STATUS_PARMS Info;
    Info.dwCallback=0;
    Info.dwItem=0;
    Info.dwReturn=0;
    Info.dwTrack=0;
    mciSendCommand(itsMCIDevice, MCI_SET, MCI_SET_DOOR_OPEN, (DWORD_PTR)&Info);
    }

    void MCIControl::CloseDoor()
    {
    MCI_STATUS_PARMS Info;
    Info.dwCallback=0;
    Info.dwItem=0;
    Info.dwReturn=0;
    Info.dwTrack=0;
    mciSendCommand(itsMCIDevice, MCI_SET, MCI_SET_DOOR_CLOSED, (DWORD_PTR)&Info);
    }

    int MCIControl::GetCurrentTrack()
    {
    BYTE track;
    GetPosition(&track, NULL, NULL);
    return track;
    }

    int MCIControl::GetCurrentMinute()
    {
    BYTE minute;
    GetPosition(NULL, &minute, NULL);
    return minute;
    }

    int MCIControl::GetCurrentSecond()
    {
    BYTE second;
    GetPosition(NULL, NULL, &second);
    return second;
    }

    int MCIControl::GetNumberOfTracks()
    {
    MCI_STATUS_PARMS Info;
    Info.dwCallback = 0;
    Info.dwReturn = 0;
    Info.dwItem = MCI_STATUS_NUMBER_OF_TRACKS;
    Info.dwTrack = 0;
    mciSendCommand(itsMCIDevice,MCI_STATUS,MCI_STATUS_ITEM,(DWORD_PTR)&Info);

    return (int)Info.dwReturn;
    }

    void MCIControl::GetPosition(BYTE* track,BYTE* min,BYTE* sec)
    {
    MCI_STATUS_PARMS Info;
    DWORD TMSF;

    Info.dwCallback=0;
    Info.dwReturn=0;
    Info.dwTrack=0;
    Info.dwItem = MCI_STATUS_POSITION;
    mciSendCommand(itsMCIDevice, MCI_STATUS, MCI_STATUS_ITEM, (DWORD_PTR)&Info);

    //The position comes back packed in TMSF format (see SetTimeFormat).
    TMSF = (DWORD)Info.dwReturn;

    if (track)
    *track = MCI_TMSF_TRACK(TMSF);
    if (min)
    *min = MCI_TMSF_MINUTE(TMSF);
    if (sec)
    *sec = MCI_TMSF_SECOND(TMSF);
    }

    void MCIControl::SetTimeFormat()
    {
    MCI_SET_PARMS Info;
    Info.dwCallback=0;
    Info.dwTimeFormat=MCI_FORMAT_TMSF;
    Info.dwAudio=0;

    mciSendCommand(itsMCIDevice, MCI_SET, MCI_SET_TIME_FORMAT, (DWORD_PTR)&Info);
    }

    void MCIControl::Init()
    {
    MCI_OPEN_PARMS Info;

    Info.dwCallback=0;
    Info.lpstrAlias=0;
    Info.lpstrElementName=0;
    Info.wDeviceID=0;
    Info.lpstrDeviceType=MAKEINTRESOURCE(MCI_DEVTYPE_CD_AUDIO);
    mciSendCommand(0, MCI_OPEN, MCI_OPEN_TYPE|MCI_OPEN_TYPE_ID, (DWORD_PTR)&Info);

    itsMCIDevice = Info.wDeviceID;
    }
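
    Here is a minimal usage sketch (not from the original listing): constructing an MCIControl opens the default CD audio device, and the public methods drive it from there.

    #include "windows.h"
    #include "MCIControl.h"

    int main()
    {
        MCIControl cd;                       //opens the default CD audio device

        if (cd.GetNumberOfTracks() > 0)
            cd.Play();                       //start playing from the first track

        //...sometime later...
        cd.Stop();
        return 0;
    }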
  • Mu-Law and A-Law Compression Tutorial

    Disclaimer: This post comes from my old blog back in 2004. I’m reposting it here so that I don’t lose the content. The source was hand-written HTML so the formatting probably appears a bit off.

    Overview:

    What are A-Law and Mu-Law compression? In the simplest terms, they are standard forms of audio compression for 16 bit sounds. Like most audio compression techniques, they are lossy, which means that when you expand them back from their compressed state, they will not be exactly the same as when you compressed them. The compression is always 2:1, meaning that audio compressed with either of these algorithms will always be exactly half of its original size.

    Mu-Law and A-Law compression are both logarithmic forms of data compression, and are extremely similar, as you will see in a minute. One definition of Mu-Law is

         "...a form of logarithmic data compression for audio data. Due to the fact
         that we hear logarithmically, sound recorded at higher levels does not
         require the same resolution as low-level sound. This allows us to disregard
         the least significant bits in high-level data. This turns out to resemble a
         logarithmic transformation. The resulting compression forces a 16-bit number
         to be represented as an 8-bit number."
         (www-s.ti.com/sc/psheets/spra267/spra267.pdf, archived at
         https://web.archive.org/web/20040608152810/http://www-s.ti.com/sc/psheets/spra267/spra267.pdf)

    And from the comp.dsp newsgroup FAQ we also get this definition:

         "Mu-law (also "u-law") encoding is a form of logarithmic quantization or
         companding. It's based on the observation that many signals are
         statistically more likely to be near a low signal level than a high signal
         level. Therefore, it makes more sense to have more quantization points near
         a low level than a high level. In a typical mu-law system, linear samples
         of 14 to 16 bits are companded to 8 bits. Most telephone quality codecs
         (including the Sparcstation's audio codec) use mu-law encoded samples."

    In simpler terms, this means that sound is represented as a wave, and humans can only hear audio in the middle of the wave. We can remove data from the upper and lower frequencies of a sound, and humans will not be able to hear a significant difference. Both Mu-Law and A-Law take advantage of this, and are able to compress 16-bit audio in a manner acceptable to human ears. A-Law and Mu-Law compression appear to have been developed at around the same time, and basically only differ by the particular logarithmic function used to determine the translation. When we get to the work of implementing the algorithms, you will see that the differences are nominal. The main difference is that Mu-Law attempts to keep the top five bits of precision, and uses a logarithmic function to determine the bottom three bits, while A-Law compression keeps the top four bits and uses the logarithmic function to figure out the bottom four. Both of these algorithms are used as telecommunication standards, A-Law being used mainly in Europe, and Mu-Law being used in the United States.

    DISCLAIMER:
    Please understand that I am glossing over several of the details, but recognize that the entire purpose of this document is to make two extremely useful algorithms much more accessible to "average" programmers, like myself.

    Mu-Law Compression:

    As you read this explanation, remember that the purpose of the algorithm is to compress a 16-bit source sample down to an 8-bit sample. The crux of Mu-Law functionality is deciding which of the samples need to keep the most of their precision. Even the "most important" sample will still lose precision. It simply becomes a matter of determining how much each sample loses, and minimizing the loss on samples deemed "more important".

    To generate a compressed Mu-Law sample from an uncompressed sample, the following algorithm is applied to the 16-bit source sample. (Please refer to the code listing for Mu-Law compression.)

    First, the algorithm stores off the sign. It then adds in a bias value which (due to wrapping) will cause high valued samples to lose precision. The top five most significant bits are pulled out of the sample (which has been previously biased). Then, the bottom three bits of the compressed byte are generated using a small look-up table, based on the biased value of the source sample. The 8-bit compressed sample is then finally created by logically OR'ing together the 5 most important bits, the 3 lower bits, and the sign when applicable. The bits are then logically NOT'ed, which I assume is for transmission reasons (although you might not transmit your sample).

    Mu-Law Compression Code:

    const int cBias = 0x84;
    const int cClip = 32635;

    static char MuLawCompressTable[256] =
    {
         0,0,1,1,2,2,2,2,3,3,3,3,3,3,3,3,
         4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,
         5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,
         5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,
         6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,
         6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,
         6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,
         6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,
         7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,
         7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,
         7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,
         7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,
         7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,
         7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,
         7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,
         7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7
    };

    unsigned char LinearToMuLawSample(short sample)
    {
         int sign = (sample >> 8) & 0x80;
         if (sign)
              sample = (short)-sample;
         if (sample > cClip)
              sample = cClip;
         sample = (short)(sample + cBias);
         int exponent = (int)MuLawCompressTable[(sample>>7) & 0xFF];
         int mantissa = (sample >> (exponent+3)) & 0x0F;
         int compressedByte = ~ (sign | (exponent << 4) | mantissa);

         return (unsigned char)compressedByte;
    }

    A-Law Compression:

    As mentioned earlier, A-Law compression is extremely similar to Mu-Law compression. As you will see, they differ primarily in the way that they keep precision. The following is a short synopsis of the encoding algorithm, and the code example follows the written explanation.

    First, the sign is stored off. Then the code branches. If the absolute value of the source sample is less than 256, the 16-bit sample is simply shifted down 4 bits and converted to an 8-bit value, thus losing the top 4 bits in the process. However, if it is 256 or greater, a logarithmic algorithm is applied to the sample to determine the precision to keep. In that case, the sample is shifted down to access the seven most significant bits of the sample. Those seven bits are then used to determine the precision of the bottom 4 bits. Finally, the top seven bits are shifted back up four bits to make room for the bottom 4 bits. The two are then logically OR'd together to create the eight bit compressed sample. The sign is then applied, and the entire compressed sample is logically XOR'd, again, I assume for transmission reasons.

    A-Law Compression Code:

    static char ALawCompressTable[128] =
    {
         1,1,2,2,3,3,3,3,
         4,4,4,4,4,4,4,4,
         5,5,5,5,5,5,5,5,
         5,5,5,5,5,5,5,5,
         6,6,6,6,6,6,6,6,
         6,6,6,6,6,6,6,6,
         6,6,6,6,6,6,6,6,
         6,6,6,6,6,6,6,6,
         7,7,7,7,7,7,7,7,
         7,7,7,7,7,7,7,7,
         7,7,7,7,7,7,7,7,
         7,7,7,7,7,7,7,7,
         7,7,7,7,7,7,7,7,
         7,7,7,7,7,7,7,7,
         7,7,7,7,7,7,7,7,
         7,7,7,7,7,7,7,7
    };

    unsigned char LinearToALawSample(short sample)
    {
         int sign;
         int exponent;
         int mantissa;
         unsigned char compressedByte;

         sign = ((~sample) >> 8) & 0x80;
         if (!sign)
              sample = (short)-sample;
         if (sample > cClip)
              sample = cClip;
         if (sample >= 256)
         {
              exponent = (int)ALawCompressTable[(sample >> 8) & 0x7F];
              mantissa = (sample >> (exponent + 3)) & 0x0F;
              compressedByte = ((exponent << 4) | mantissa);
         }
         else
         {
              compressedByte = (unsigned char)(sample >> 4);
         }
         compressedByte ^= (sign ^ 0x55);
         return compressedByte;
    }

    Decompression:

    Now, the most obvious way to decompress a compressed Mu-Law or A-Law sample would be to reverse the algorithm. But a more efficient method exists. Consider for a moment the fact that A-Law and Mu-Law both take a 16-bit value and crunch it down to an 8-bit value. The reverse of that is to take an 8-bit value and turn it into a sixteen bit value. In the graphics world, it is extremely common to represent 32 and 24 bit values with an eight bit index into a palette table. So, why not take a page from the world of graphics and use palettes for the Mu-Law and A-Law decompression look-up? Sounds good to me. In fact, these palettes will be smaller than their 24 and 32 bit cousins because we only need to represent 16 bit values, not 24 or 32. In a nutshell, we will create static lookup tables to do the reverse conversion from A-Law and Mu-Law. The two differing tables are presented below. To convert from your compressed sample back to the raw 16-bit sample, just use your compressed sample as the index into the table, and the corresponding value in the table is your decompressed 16-bit sample. Obviously, the downside is that this method requires the memory overhead for the tables, but each table is only 512 bytes. In this day and age, that's downright cheap for the absolute fastest decompression!

    Decompression Code:

    static short MuLawDecompressTable[256] =
    {
         -32124,-31100,-30076,-29052,-28028,-27004,-25980,-24956,
         -23932,-22908,-21884,-20860,-19836,-18812,-17788,-16764,
         -15996,-15484,-14972,-14460,-13948,-13436,-12924,-12412,
         -11900,-11388,-10876,-10364, -9852, -9340, -8828, -8316,
          -7932, -7676, -7420, -7164, -6908, -6652, -6396, -6140,
          -5884, -5628, -5372, -5116, -4860, -4604, -4348, -4092,
          -3900, -3772, -3644, -3516, -3388, -3260, -3132, -3004,
          -2876, -2748, -2620, -2492, -2364, -2236, -2108, -1980,
          -1884, -1820, -1756, -1692, -1628, -1564, -1500, -1436,
          -1372, -1308, -1244, -1180, -1116, -1052,  -988,  -924,
           -876,  -844,  -812,  -780,  -748,  -716,  -684,  -652,
           -620,  -588,  -556,  -524,  -492,  -460,  -428,  -396,
           -372,  -356,  -340,  -324,  -308,  -292,  -276,  -260,
           -244,  -228,  -212,  -196,  -180,  -164,  -148,  -132,
           -120,  -112,  -104,   -96,   -88,   -80,   -72,   -64,
            -56,   -48,   -40,   -32,   -24,   -16,    -8,     0,
          32124, 31100, 30076, 29052, 28028, 27004, 25980, 24956,
          23932, 22908, 21884, 20860, 19836, 18812, 17788, 16764,
          15996, 15484, 14972, 14460, 13948, 13436, 12924, 12412,
          11900, 11388, 10876, 10364,  9852,  9340,  8828,  8316,
           7932,  7676,  7420,  7164,  6908,  6652,  6396,  6140,
           5884,  5628,  5372,  5116,  4860,  4604,  4348,  4092,
           3900,  3772,  3644,  3516,  3388,  3260,  3132,  3004,
           2876,  2748,  2620,  2492,  2364,  2236,  2108,  1980,
           1884,  1820,  1756,  1692,  1628,  1564,  1500,  1436,
           1372,  1308,  1244,  1180,  1116,  1052,   988,   924,
            876,   844,   812,   780,   748,   716,   684,   652,
            620,   588,   556,   524,   492,   460,   428,   396,
            372,   356,   340,   324,   308,   292,   276,   260,
            244,   228,   212,   196,   180,   164,   148,   132,
            120,   112,   104,    96,    88,    80,    72,    64,
             56,    48,    40,    32,    24,    16,     8,     0
    };

    static short ALawDecompressTable[256] =
    {
         -5504, -5248, -6016, -5760, -4480, -4224, -4992, -4736,
         -7552, -7296, -8064, -7808, -6528, -6272, -7040, -6784,
         -2752, -2624, -3008, -2880, -2240, -2112, -2496, -2368,
         -3776, -3648, -4032, -3904, -3264, -3136, -3520, -3392,
        -22016,-20992,-24064,-23040,-17920,-16896,-19968,-18944,
        -30208,-29184,-32256,-31232,-26112,-25088,-28160,-27136,
        -11008,-10496,-12032,-11520, -8960, -8448, -9984, -9472,
        -15104,-14592,-16128,-15616,-13056,-12544,-14080,-13568,
          -344,  -328,  -376,  -360,  -280,  -264,  -312,  -296,
          -472,  -456,  -504,  -488,  -408,  -392,  -440,  -424,
           -88,   -72,  -120,  -104,   -24,    -8,   -56,   -40,
          -216,  -200,  -248,  -232,  -152,  -136,  -184,  -168,
         -1376, -1312, -1504, -1440, -1120, -1056, -1248, -1184,
         -1888, -1824, -2016, -1952, -1632, -1568, -1760, -1696,
          -688,  -656,  -752,  -720,  -560,  -528,  -624,  -592,
          -944,  -912, -1008,  -976,  -816,  -784,  -880,  -848,
          5504,  5248,  6016,  5760,  4480,  4224,  4992,  4736,
          7552,  7296,  8064,  7808,  6528,  6272,  7040,  6784,
          2752,  2624,  3008,  2880,  2240,  2112,  2496,  2368,
          3776,  3648,  4032,  3904,  3264,  3136,  3520,  3392,
         22016, 20992, 24064, 23040, 17920, 16896, 19968, 18944,
         30208, 29184, 32256, 31232, 26112, 25088, 28160, 27136,
         11008, 10496, 12032, 11520,  8960,  8448,  9984,  9472,
         15104, 14592, 16128, 15616, 13056, 12544, 14080, 13568,
           344,   328,   376,   360,   280,   264,   312,   296,
           472,   456,   504,   488,   408,   392,   440,   424,
            88,    72,   120,   104,    24,     8,    56,    40,
           216,   200,   248,   232,   152,   136,   184,   168,
          1376,  1312,  1504,  1440,  1120,  1056,  1248,  1184,
          1888,  1824,  2016,  1952,  1632,  1568,  1760,  1696,
           688,   656,   752,   720,   560,   528,   624,   592,
           944,   912,  1008,   976,   816,   784,   880,   848
    };
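
    For what it's worth, the decode loop itself is nothing more than a single table lookup per sample. The little sketch below is just one way to write it (the function name and buffer-based signature are mine for illustration, not part of the tables above); swap in ALawDecompressTable if your input is A-law.

    /* Illustrative decode loop: each compressed byte is an index into the
       table above, and the table entry is the decompressed 16-bit sample. */
    void DecompressBuffer(const unsigned char* input, short* output, int sampleCount)
    {
        for (int i = 0; i < sampleCount; i++)
        {
            output[i] = MuLawDecompressTable[input[i]];
        }
    }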
    
  • Some Not-So-Random Oregon Statistics

    Recently, I heard a fascinating interview with Larry Krasner, the DA for Philadelphia. It really challenged some of my thoughts on incarceration and inspired me to do a little research, locally. Here are some interesting statistics that I found for Oregon:

    • 2017-2019 Dept of Corrections Budget: $1.76B ($880m annually) Source
    • Average Daily Prison Population: 14,835 Source
    • Average cost per inmate: $59,319 ($880m / 14,835)
    So who is in our prison system? Here are some other things I found: Source
    • White 74.5% (vs 76.4% of normal state population)
    • Hispanic 12.1% (vs 12.8% of normal state population)
    • Black 9% (vs 2.1% of normal state population)
    • 62.7% are over the age of 30
    • Male 81%
    • Female 19%
    Top 5 incarceration rates by county: Source
    1. Sherman County, 2.83 / 1000
    2. Marion County, 2.14 / 1000
    3. Jefferson County, 1.96 / 1000
    4. Linn County, 1.89 / 1000
    5. Clatsop County, 1.86 / 1000
    And what did they do to get there? Source
    1. Drugs 20%
    2. Assault 13.4%
    3. Other 12.6%
    4. Theft 9.3%
    5. Burglary 8.9%
    This also made me curious relative to education in Oregon. Here are a few statistics I found:
    • Average Teacher Salary: $59,204 Source
    • Total Number of Teachers: 22,357 Source
    • Median Class Size: 25 Source
    So here are some of my random thoughts after doing this research:
    • It's fascinating that the cost of housing one inmate almost exactly equals a full-time teacher's salary. (These costs don't even include local and municipal jail costs.)
    • African American Oregonians are clearly incarcerated at a much higher rate than either Caucasian or Hispanic Oregonians.
    • It's ALARMING that 12.6% of the prison population is incarcerated for offenses labeled as "other". Take a look at the source for the incarceration cause breakdown and you can see how granular it gets which makes the "other" classification that much more troublesome. (Forgery weighs in at a whopping 0.4%)
    • The likelihood of incarceration has nothing to do with the population density of where you live. Only one of the top 5 counties in the per capita list would be considered urban.
    Does any of this mean anything? I'm not sure. I'll definitely be thinking about it for a while. In the alarmist era we currently find ourselves in, I find it helpful to have some actual data to fall back on. As such, I've included links to all of my sources in case anyone else feels so inspired to do some data spelunking. If you do, please share what you find!

     

     

  • A Coffee Retrospective

    Back in March, I was inspired by Manton’s Austin coffee quest and decided to try my own “30 Days of Coffee.” Starting on April 1 and continuing for 30 straight days, I would experience a new cafe or coffee shop that I had never visited before. The rules I set for myself were fairly simple:

    1. I must visit a coffee shop each day for 30 days.
    2. I can not have had coffee there previously.
    3. The coffee shop must reside within the boundaries of N/NE Portland. (I did this so that I could somewhat mimic Manton's challenge given the plethora of coffee shops in Portland vs. Austin.)
    4. In order to count as a "coffee shop" it must serve lattes (my drink of choice)
    I have now completed my 30 days, and I have to tell you that despite sounding easy, it's actually quite difficult to do ANYTHING for 30 days straight. There were more than a few times that Tiffany had to remind me that I hadn't made my daily trip (especially difficult since most non-chain coffee shops close at 4 p.m.)

    Along the way, I documented each visit on my microblog which you can find here. I also snapped a picture of each location that hopefully gave a small glimpse into the ambiance of each.

    This entire endeavor was an experiment in a number of areas, and I can honestly say that I learned a little bit about myself along the way. As I mentioned, I was microblogging the adventure and somehow ended up having a number of people who followed along. A few have even asked for a summary/recommendation list, to which I am more than happy to oblige.

    The following is my list of coffee shops that I would gladly return to again. The rankings are relative to me, which means that it’s based on a blend of coffee quality, ambiance, location and overall comfort (including my ability to work remotely). This is simply a ranked list out of the 30 that I visited, not my overall list for Portland. In fact, I think only one of these would crack my Portland top 10 list.

    So, with all of these caveats out of the way, here is my ordered list of places that I would gladly revisit:

    1. Blend Coffee - I cannot say enough good things about this place. From the cleanliness to the thoughtfulness of the seating to the ridiculous number of power outlets, everything about this place is well thought out. I only wish it was closer to my home. That being said, this is the only coffee shop on my list that I would go out of my way to visit. If you haven't been here, I absolutely recommend a visit.
    2. Bassotto - This place was an absolute gem of a find because it's actually a fantastic coffee shop disguised as a gelateria. It also doesn't hurt that it's located next door to the amazing Tamale Boy, but I think I'd come back even if it was located elsewhere. Finding Bassotto was one of the reasons that I did this challenge. It allowed me to find an awesome place that I normally wouldn't have tried on my own.
    3. Prince Coffee - This place is SMALL. It's also very new and in the beautiful Kenton neighborhood. I will revisit this place if for nothing else than their homemade stroopwafels.
    4. Miss Zumstein - This location is very comfortable, and their pastries are wonderful. The staff is friendly, and they have great coffee, but man are those pastries good.
    5. Locale Coffee - This is another of the new-wave Portland coffee shops located in the Mississippi neighborhood. I would definitely go back to it again, but there's nothing particularly special about it. If you're looking for a decent place to meet someone for coffee and you need to be near Mississippi, you can't really go wrong here.
    6. Saint Simon - This is yet another stereotypical new-wave Portland coffee shop. It has good coffee, is nice inside, has a good location, but is a little too "trendy" for my taste. Think I'm exaggerating? Everything from the moose head on the wall to the forced industrial look to the "wood block" seating just screams Portlandia. I'd definitely take coffee from here again, but I probably wouldn't stay for long.
    7. Seven Virtues - I didn't want to like Seven Virtues because I find the entire Zipper building to be somewhat pretentious and annoying, but I have to admit that it was pretty nice inside and they had good coffee to boot. I've heard from at least one person that they went here right after it opened and were very unhappy with their experience. Perhaps they had some initial issues getting going?
    8. Posie's Cafe - I'm not sure if I liked Posie's based on its own merits or because it's located in the Kenton neighborhood. Regardless, I found it to be very charming and a nice place to pop into if you're looking for a quick caffeine pick me up. They had a lot of seating and had some pretty good looking pastries as well.
    And that is it. Out of 30, I would revisit 8. Of the 8, only one of them would crack my top Portland Coffee Shops list. (I'll try to put that together soon as well.)

    As I mentioned, the challenge was more difficult than I expected given how often I already go out for coffee. In fact, I’ve started thinking differently about 30 day challenges in general and I have a few more I might try in the next few months. If you’d like to see the entire list of 30, I’ve included it below. You can also find my posts on Twitter or Facebook with the hashtag #pdxcoffeehunt.

    1. Cathedral Coffee - 7530 N Willamette Blvd
    2. TwentySix Cafe - NE 7th Ave
    3. Miss Zumstein - 5027 NE 42nd Ave.
    4. Saint Simon Coffee - 2005 NE Broadway
    5. Tiny’s Coffee - 2031 NE Martin Luther King Jr Blvd
    6. Fillmore Coffee - 7201 NE Glisan St
    7. Prince Coffee - 2030 N Willis Blvd
    8. Bison Coffee House - 3941 NE Cully Blvd
    9. Seven Virtues - 2705 NE Sandy Blvd
    10. Batter - 4425 NE Fremont St
    11. Kopi - 2327 E Burnside St
    12. Locale - 4330 N Mississippi Ave
    13. The Fresh Pot - 4001 N Mississippi Ave
    14. Goldrush - 2601 NE Martin Luther King Jr. Blvd
    15. Bassotto - 1760 NE Dekum
    16. Spielman Bagels - 2200 NE Broadway St
    17. Wholesome Blend - 4615 NE Sandy Blvd
    18. Cup & Bar - 118 NE Martin Luther King Jr. Blvd
    19. Heart Coffee Roasters - 2211 E Burnside St
    20. Blend Coffee Lounge - 2710 N Killingsworth St
    21. Cafe Eleven - 435 NE Rosa Parks Way
    22. The Arbor Lodge - 1507 N Rosa Parks Way
    23. Whole Foods Coffee Shop
    24. Extracto - 1465 NE Prescott St
    25. Coffee People - Portland Airport
    26. J Cafe - 533 NE Holladay St
    27. Coffee House Five - 740 N Killingsworth St
    28. Case Study - 1422 NE Alberta
    29. Posie’s Cafe - 8208 N Denver Ave
    30. Elevated Coffee - 5261 NE Martin Luther King Jr Blvd
     

     

  • The Trouble With Cross Posting

    Recently, I set up a microblog (http://jonhays.me) to supplement my interactions on Twitter, Facebook and Instagram. After several conversations with my friend Manton, I decided that I would set up a new WordPress site, use it as the content “repository” and use cross-posting tools to distribute out to Twitter, Facebook and Instagram.

    There are several reasons for doing this, but my primary motivations are:

    1. I want to spend less time on the individual social media platforms.
    2. I want to own my content and have better control over it.
    After a little bit of consternation over appropriate domain names and blog names, I set about assembling the site and my workflow.

    One of my high level requirements is that I wanted to be able to post to my microblog from my phone. Another of my requirements is that I wanted to be able to post entries both with and without images. Of course the third requirement that I mentioned above is that it needs to cross-post to Twitter, Facebook and Instagram.

    Naively, I just assumed I could use the WordPress app, set up a few IFTTT triggers and be done with it. As you can guess, it’s not quite as simple as that.

    Let’s tackle cross-posting of images first: what should be posted to Instagram for a blog entry with no image? With multiple images?

    Realizing that simply posting from WordPress wasn’t going to work, my first instinct was to modify my workflow so that if I wanted to post an image, I would post from Instagram and use an IFTTT action to then cross-post the image to my microblog which would then cross-post to Facebook and Twitter.

    Unfortunately, using this method is like playing a game of telephone with social media and the end result looks like this on Facebook:

    [Screenshot: the resulting Facebook cross-post]

    Several of my friends replied after seeing these, asking if my computer had been “hacked.”

    Images aren’t the only troublesome area, either. Take, for instance, cross-posting text posts longer than 140 characters to Twitter. It turns out that IFTTT is actually quite terrible about handling these as well:

    [Screenshot: the truncated tweet]

    Simply truncating the text without a link back to the original post is the worst kind of tease and also not acceptable. What a good cross-post tool should do is truncate at word boundaries and provide a link back to the original post. IFTTT is simply not up to the task for this.
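
    To be clear about what I mean, here is a rough sketch in C of the kind of logic I'd expect from a decent cross-posting tool. The function name, the fixed 140-character budget and the "... " separator are all just assumptions for the example, not how any actual tool is implemented.

    #include <stdio.h>
    #include <string.h>

    /* Rough sketch only: truncate a long post at a word boundary and append a
       link back to the original.  The name, the 140-character budget and the
       "... " separator are illustrative assumptions, not any real tool's API. */
    void BuildTweetText(const char* text, const char* permalink, char* out, size_t outSize)
    {
        const size_t limit = 140;

        /* Short posts pass through untouched. */
        if (strlen(text) <= limit)
        {
            snprintf(out, outSize, "%s", text);
            return;
        }

        /* Reserve room for "... " plus the permalink. */
        size_t reserved = strlen(permalink) + 4;
        size_t budget = (limit > reserved) ? (limit - reserved) : 0;

        char buffer[141];
        strncpy(buffer, text, budget);
        buffer[budget] = '\0';

        /* Back up to the last space so a word never gets cut in half. */
        char* lastSpace = strrchr(buffer, ' ');
        if (lastSpace != NULL)
        {
            *lastSpace = '\0';
        }

        snprintf(out, outSize, "%s... %s", buffer, permalink);
    }

    Feed it the full post text and the post's permalink and you at least get a readable teaser with a link, rather than an arbitrary mid-word chop.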

    In fact, as it turns out, IFTTT is actually quite terrible for cross-posting to just about every platform. I am investigating other alternatives at the moment but as of right now, I am still stuck with those terrible Facebook cross posts, and I have no way to post directly on the microblog and have the ones with images get cross-posted to Instagram.

    Fortunately, there are smart people working on these problems! I’m using a beta tool to do the cross-posting to Twitter. The beta tool actually works quite well and I’ve been trying to convince the author to expand to include Facebook as well, but he’s reluctant to add more features at the moment because he’s trying to launch.

    With so much out of control negativity and lack of author control on Twitter and Facebook, it feels like there is an opening for something like micro-blogs to augment existing platforms in a positive way. And while I don’t think Twitter or Facebook are going anywhere, I believe micro-blogs can help fill the gap for content creators that are conscientious about their craft.

    I’m still exploring other avenues for cross-posting and I’ll try to post updates as I find them either here or on the microblog, but it feels like this is a viable, mostly untapped market.

     

  • State of the Pines - 6 Months

    I cannot believe that it has been only 6 months since I took the leap of faith to try and turn SilverPine Software into something bigger than it had been! We have been so incredibly busy (in a good way) and sometimes I feel like my head is spinning with all the great things that are going on. Here are just a few of the highlights:

    • We have launched 8 iOS Apps, 3 Android Apps and 1 WindowsPhone App
    • We worked with a very talented designer to create a new and improved logo!
    • We continued to grow our open source Useful Utilities toolbox project as a gift back to the developer community
    • We have grown our team to include 9 amazingly gifted people!
    • One of our projects for a Fortune 100 company has been featured in depth by the New York Times and referred to as a "game changer"
    • We purchased Photos+ from Justin Williams and re-launched it with native integration of Dropbox
    • We have had our projects featured by Apple not once, but twice!
    Whew! That's quite a bit for only half of a year. To say that some days my hair feels like it's on fire is an understatement. That being said, I wouldn't trade it for the world. The work we do is creative, challenging, cutting edge and very rewarding. Our clients are all amazing people with great ideas and I feel honored that we are able to help them create such amazing products.

    So what’s ahead? I can’t quite tell you yet, but I can say that we have some awesome stuff in the pipeline. We can’t wait to share it with everyone.

    It’s been a wonderful ride so far, and I’m really looking forward to finding out what the second half of the year looks like for us. Feel free to drop me a note if you want to chat about any of this or if you have an idea or product you’d like to discuss. Also, if you happen to be at Çingleton this year, make sure to say hi. (I’ll be the guy with the @cheesemaker shirt.)

    -Jonathan Hays

  • A (Brief) Guide to Cease and Desists for Indie Developers

    Before we go any further, the lawyers are making me post this part first: The following post is from my experience as a developer and I am in no way trained as a lawyer. Do not construe any of the following as legal advice. If you are in need of legal advice, consult a lawyer.

    Ok, now to the post.

    I have been developing apps for the iOS App Store since 2008 and as a result, I have many battle scars to show for my efforts. Unfortunately, the worst of these scars tend to come from lawyers. One particular blunt instrument that lawyers like to use is something called a Cease and Desist. These are very scary messages that are usually delivered via email but can often come in snail mail.

    Over the years, I have received at least 10 different Cease and Desists (including one from the infamous Doodlegate debacle) and have learned quite a bit along the way. Some of what I have learned has been from actual lawyers, and some from the good old school of hard knocks. My intent here is to share a little of what I have learned because at the end of the day, this stuff sucks and we’d much rather be dealing with bugs than lawyers.

    To start with, here is an example of one that I received last year:

    I am legal counsel at [REDACTED] and represent the authorized of the rights infringed by the apps described.

    [REDACTED] is the registered owner of both the [REDACTED] (and its French equivalent, [REDACTED]) and [REDACTED] Design trade-marks in Canada.  As such, it has the exclusive right to their use. 

    When the trade-mark [REDACTED] is used as a search term in the Canadian iTunes store, not only does our App appear, but the Apps of a number of other individuals/companies. 

    We would ask that individuals be prevented from using the [REDACTED] trade-mark as a “key word”, as this constitutes trade-mark infringement and could be the reason why other Apps are appearing when the trade-mark [REDACTED] is used a search term.  

    [REDACTED] can not tolerate, these individuals/companies benefitting from the tremendous goodwill associated with these marks.

    Most Cease and Desists follow a form similar to this. The entity that has protection for their intellectual property sends a sternly written message informing you that you need to fix/remove/change something. However frightening this might sound, a Cease and Desist is not the same as a lawsuit. You are not being sued. You are simply being informed that you need to make a change in accordance with someone else’s real or perceived protection of their intellectual property. So, what should you do? Here are a few things that I have learned along the way:

    1. Don’t panic. Despite the fact that these messages intentionally sound scary, you don’t need to be afraid. In the example above, phrases like “infringement” and “can not tolerate” make it sound like these folks mean business and are prepared to bring down the hammer of justice. But if you look more closely, you will see that usually these messages are form letters. Notice that nothing mentions the name of my company or even my name. In fact, there is really nothing of substance in the email. (We’ll come back to that in a bit.)

    2. Be polite and professional in your communication, but do not apologize or acknowledge fault. Just because you receive a C&D, it is still the responsibility of the claimant to show that you are at fault. Yes, you ultimately may be required to make a change but there are several things that need to be established first. Being polite and professional will go a long way in these types of issues. Additionally, do not immediately remove your app for sale or whatever it is they are requesting that you do. Doing so at this point would be acting with incomplete information, which leads to the third point.

    3. Ask for additional information. There are a variety of reasons to do this. The first is to signal that you have received their request and are acting in good faith. This is also to flag the fact that their claim is incomplete. As I pointed out, the above C&D is almost completely devoid of meaningful information. Here was my response to the C&D above:

    Hello Ms. [REDACTED],

    Would you please send either a scanned copy of proof of your trademark or send via postal service a hardcopy that clearly shows when the trademark was issued and under what jurisdiction it applies and we will be happy to comply.

    Sincerely,

    Jonathan Hays

    An excellent action is to ask for actual documentation of the patent, trademark or copyright. A few times, I have asked for documentation only to find that what they sent had absolutely no application to my app or that they were claiming to own something that they did not. If they cannot provide proof then they have no claim. Additionally, if you received the C&D through Apple Legal, make sure to cc them on all of your discourse with the lawyer. This helps to both keep the lawyers honest but also will help keep you in good standing with Apple. (It also provides a fairly neutral third party with a paper trail).

    4. Once you receive the documentation, the next step is to actually read it. This can be fairly dry reading, but I assure you that it is worth it and that it is no less obtuse than technical documentation on the latest APIs. For example, many patents have multiple claims in them. A great thing to do is to ask for clarification regarding which claims they are actually citing against you. This is especially important if the C&D you received was a form letter because it means that your company was collected in some large data sweep without anyone actually taking the time to look at your App. As with any bulk data collection, there can be errors. At this point, you may or may not want to consult a lawyer on your side, however it is certainly fine to make the people that sent you a C&D actually do their jobs by asking for more information. Here is how I responded once I received the documentation:

    Hello Ms. [REDACTED],

    Thank you for your reply.  Apple has asked us to make sure to include them in all exchanges and you did not include them on this so I am re-adding them.  That being said, I have a few questions that you have not yet addressed:

    1. I am trying to make sure that I fully understand which of the services you are describing is in conflict to make sure that we are in full compliance.  To be clear, I am asking [REDACTED] to explain which of the wares and services that [REDACTED] falls within. The services that are listed include SMS, printed publications, business directories, and Internet websites.  [REDACTED] is none of those so I am seeking clarification of your claims. Also, as you acknowledged in your email below, at least one of the documents that was sent over do not apply so obviously there is some confusion for [REDACTED].  I am simply seeking verification that a mistake has not been made by [REDACTED].
    2. I need to understand your claim that [REDACTED] is infringing within the application description because that is not accurate as nowhere in the application description does the word [REDACTED] or [REDACTED] occur.
    3. I asked previously if you are claiming IP protection only for sale within Canada.  I have yet to receive a response.
    Thank you,

    -Jonathan Hays

    5. Verify jurisdiction. Make sure that you understand where in the world they have permission to enforce their claim. The App Store is a global marketplace and unless they have protection for their claim in every country that you sell, you are only compelled to comply in the corresponding markets.

    Good morning Jonathan, 

    Yes, we are solely claiming IP protection for sale within Canada. In Canada, [REDACTED] has a registered trade-mark for [REDACTED], and the [REDACTED] & Design.  Under Canadian legislation this affords [REDACTED] with the sole and exclusive right to make use of the trade-marks in Canada and prevents any third party from making use of it in any context without [REDACTED]’s explicit permission even if the wares and services description is different.
     
    In making use of [REDACTED]’s trade-marks in the description and logos of your app, you are creating an association between our respective entities that will confuse consumers and lead them to believe we are somehow related.  Unfortunately, this is contrary to Canadian Trade-mark legislation.  As such, we would ask that you cease making use of the trade-marks in the description and logos of your apps, or cease distribution of the apps in the Canadian iTunes store.
     
    Best regards
    So in this particular case, the IP owner only had protection for their claims in Canada and therefore only sales in the Canadian App Store were in question. Ultimately, I resolved the issue by simply removing it for sale in Canada and the app continues to garner downloads in all of the other App Stores. If I had not asked their lawyer to clarify the claimed jurisdiction, I might have lost out on continued revenue in the other countries for absolutely no reason.

    As developers we generally avoid conflict. All things being equal, we prefer to make things. However, when we make things that we sell commercially, we often have to deal with lawyers and Cease and Desist requests. Always remember that these are requests, not legally binding demands. If/when you receive a C&D, do your due diligence. Be calm. Take measured steps. If all else fails, keep in mind that every time you send a request back to the lawyer on the other side of the C&D, you are incurring billable hours for whoever is requesting the Cease and Desist. It only seems fair that if you're going to lose time and money, they should be willing to do the work to back it up.
  • First Musings on WWDC 2014

    I’m writing this sitting in the San Francisco airport waiting for my return flight after a great week at Apple’s annual developer conference. It’s been an amazing week and I’m still processing a lot of what I experienced. First off, let me say that I believe that WWDC 2014 will be considered a turning point when viewed in retrospect. As a developer, I am absolutely giddy with all of the possibilities that Apple has opened up with their announcements. There is a stark contrast between last year’s developer conference and this year’s conference. At the end of last year, I was thinking about all of the things that I HAD to do with iOS 7. With iOS 8, I am absolutely bubbling with ideas of things that I GET to do. iOS 8 is going to be a big deal. From HealthKit to HomeKit to Extensions to CloudKit, Apple has paved the way for some amazing things to be built.

    And then there is Swift. Swift has taken my excitement about the new frameworks and cranked it to 11. Not only has Apple provided a sorely needed modern language to the platform, they have delivered it complete with a fantastic toolset. Clearly this has been in the works for a very long time. Swift development reminds me of the early days when I was first learning to program. Back in those ancient days, you simply turned on your computer and started typing. Swift very much has the same feel to it. In fact, I actually plan to have my 9 year old son sit down with me and learn it together.

    This is big stuff. I am still processing a lot of what I have seen and learned and will post more as I unravel it. I think this is going to be a great year to be an iOS and Mac developer!

  • SilverPine Software and Photos+

    "Leap and the net will appear."

    -John Burroughs

    When I was growing up, my father was a serial entrepreneur. I watched him go from business to business, sometimes with success, but often without. Among my memories of his many businesses are not one, but two Oregon perfume companies ("The Oregon Perfume Company" and "Oregon Scents"). Though I have always been both fascinated by and frightened of owning my own business, I think I've always known that I've had it in my blood.

    With that as the backdrop, I am thrilled to publicly announce the launch of my company SilverPine Software. Based in beautiful Portland, Oregon, SilverPine is primarily a consulting business focused on helping companies bring their mobile software to life.  It hasn’t been easy getting to this point, and I would be lying if I said I wasn’t worried about where we’ll be after a year or two. However, we have worked very hard to bootstrap this business and feel like the time is right to take the wraps off.

    In addition to consulting, we intend to slowly grow a portfolio of software. To that end, we are announcing today that we have purchased Photos+ from Second Gear Software. We have quite a bit of expertise with photo Apps (see Sunlit, among others) and when Justin Williams approached me about purchasing it from him, it felt like a great fit. We have big plans for Photos+ and have already put into motion the first phase of those plans: native Dropbox integration! Photos+ 1.1 is live on the App Store now so go check it out. As we roll out the next phases of the Photos+ roadmap, you will be glad that you got in early!

    New company, new software, new hopes and fears. In the end though, I’m pretty excited about what’s happening. Stay tuned for more news as the leap towards the net continues!

  • Free Code!

    For Sunlit 1.1 we decided to expand the import capabilities to include both Flickr and Instagram photos. One of my challenges in developing the interfaces was to simplify down the various functions available from Flickr and Instagram’s APIs and to try and provide a somewhat consistent interface across the two services for the app despite their differences. Writing this code was no mean feat for me as I had to implement OAuth not once, but twice because Flickr and Instagram use slightly different variations so I couldn’t simply reuse the code across the two classes.

    By the time I was done, I was relatively happy with the interface I had created, and in the spirit of openness and sharing with the community in which I develop, I have decided to publish the source for querying and requesting data from both of these services in my open source toolbox code. Affectionately named UUFlickr and UUInstagram, the classes are relatively simple to use and should get you up and running quickly. The only moderately tricky part is updating your app to handle the URL callback mechanisms required by the OAuth implementations. I had fun developing this code, and hopefully someone will find it useful. If you’re interested in using it in your project but need a little help, feel free to ping me and I’ll see if we can’t figure it out.

    Enjoy!

  • What the Flappy Bird Knows

    The strange case of Flappy Bird seems to be all over the Internet right now. If you’re unfamiliar with the app, take a moment to educate yourself. Reading comments from its creator, it’s clear that the developer did nothing to promote his app and obviously had no idea that the wave of success was coming his way. Anyone with a discerning eye who plays the game will also come away confused because, quite bluntly, there’s nothing noteworthy here. Instead, what you see is a bizarre case of mob mentality charging through the App Store, and ultimately, I feel sorry for him. However, what concerns me most about this situation is that it highlights the things that are wrong with the App Store, the most troubling being that these two things have now become the accepted norm:

    1. The App Store is viewed as developer hostile
    2. Success on the App Store is more or less a lottery ticket.
    The App Store is Apple's ace-in-the-hole advantage in the Smart Phone platform wars and it should do everything it can to protect that advantage. As a case in point, I recently tried carrying a WindowsPhone for a week (more about that in a different post) and I found the device quite pleasing overall, but ultimately I couldn't get past the lack of essential third party apps. This is Apple's huge advantage, and as a platform, it is in Apple's best interests to treat the App Store as a meritocracy where the best of the best rise to the top. Right now, that isn't happening. I really hope that someone at Apple is paying attention.
  • Apple's Next Love

    Apple’s next product is going to be a smart band. Not a smart watch; a smart band. The difference is subtle, but significant: a smart watch implies that the device’s input is chiefly on its face and that its primary job is to display information to the wearer. As a long-time smart band wearer, I can tell you that a smart band very rarely displays information and is much more important as an information collector, and this is where it gets interesting: Apple’s next product is going to convince you to put a device in direct contact with your skin. If they succeed in convincing you, the things that can be done are limited only by the complexity of its sensors: heart rate sensors, respiration sensors, temperature sensors, vasculature visualization, non-invasive glucose monitoring, and more. This also falls in line with some of Apple’s recent hires: Roy J.E.M. Raymann, Michael O’Reilly M.D., Nancy Dougherty, Ravi Narashiman, etc. This has the potential to radically change healthcare in our world. Apple’s new device could revolutionize how we think about detection and prevention, and if you ask any medical expert, they will tell you that prevention and detection are orders of magnitude more important than treatment.

    Manton Reece thinks that Apple needs to fall in love with their next product category, and I think he’s right, but I actually think that Apple has fallen in love with its next product which is exactly why it hasn’t launched yet. One of the problems with being in love with something that you’re developing is knowing when to ship 1.0. It needs to be right. It needs to be perfect. I have a feeling, though, that they’re getting close, and when they do finally announce it, we’re going to be amazed.

  • The Schemes of Sunlit

    One of the things that we established early on as a core principle for Sunlit was that we wanted to make sure we were focused on story telling and the sharing of stories and not on things on the periphery. Many times during development, we were tempted to create an über-camera or a whiz-bang photo editor, however the App Store is full of other apps that do these things and do them well. We do include a fairly simple camera in the app as well as some beautiful filters that can be applied, but what we really envisioned was people using either the built in camera app or a great third party app (such as Favd) to take their photos and then create a story with Sunlit using the pictures they had already taken.

    To us, the true beauty of Sunlit is how it pulls together the value created on other platforms (App.net, Dropbox, Ohai, etc.) and combines it all into something that is somehow more moving than any of them alone. To that end, it was important that we provide a way for Sunlit to fit into this evolving macro-system because we recognize that the best apps and the best user experiences come when things work cohesively. We are always re-evaluating how we can do this best, but I am very happy that Sunlit launched with 1.0 support for a number of URL schemes, including support for the x-callback specification. We also have a number of actions that can be invoked externally to allow other apps to extend support for Sunlit into the activities that they do well. The URL schemes are documented here and are updated as we add support for other actions. If you are interested in adding support for any of these and need assistance, feel free to shoot me a message on ADN (@cheesemaker), post your question in the Sunlit Glassboard Forum (invite code SUNLIT) or just send an email to support@riverfold.com.

    Now go build something great!

  • Competitive Disappointment

    As we approach the impending Super Bowl featuring the Pacific Northwest’s very own Seattle Seahawks, I felt it appropriate to explain why I am not at all excited about anything other than the commercials. You see, if you grew up in the Pacific Northwest and you are a sports fan of any kind, then you are familiar with a pattern of sports teams that excel in the regular season only to fall apart when it matters. I’m sure that somehow this has seeped into my psyche in ways that I don’t recognize. Lest you think I exaggerate, I leave the following for your consideration:

    • 1984 - Portland Trailblazers use #2 draft pick to select Sam Bowie, passing over a rookie Michael Jordan
    • 1989 - Gary Payton and OSU appear on the cover of Sports Illustrated as the #1 team in the country. They stay in the top 10 all year, but fall to Ball State in the first round of the NCAA tournament. (BALL STATE! I don't even know where Ball State is!!!)
    • 1990 - Portland Trailblazers lose to the Detroit Pistons in the NBA championship.
    • 1992 - Portland Trailblazers lose to the Chicago Bulls in the championship despite being up by 15 in the fourth quarter with Michael Jordan on the bench. (See here)
    • 1994 - Seattle Supersonics hold the league's best regular season record at 63-19, but lose in the first round to the Denver Nuggets
    • 1996 - Seattle Supersonics set franchise record with 64 wins and advance to the NBA championship series, only to lose to the Chicago Bulls in six games.
    • 2000 - Portland Trailblazers are up by 15 in the fourth quarter of game 7 of the Western Conference finals but collapse and give the Los Angeles Lakers the series. The Lakers go on to win the championship in six games. (See here)
    • 2001 - Seattle Mariners set single season win record with 116 wins, yet fall to the New York Yankees in the American League Championship, 4 games to 1.
    • 2002-2006 - University of Oregon makes five straight bowl games and loses all of them.
    • 2003 - University of Oregon men's basketball team wins the Pac-10 championship and enter the NCAA tournament with a #8 seed, only to lose in the first round to Utah.
    • 2005 - Seattle Seahawks go 13-3 in the regular season, and appear in Super Bowl XL as heavy favorites. They go on to lose 21-10 to the #6 seed, wildcard Pittsburgh Steelers.
    • 2007 - University of Oregon football team is ranked #2 in the nation, but quarterback Dennis Dixon tears his ACL with only 3 games to go ending hopes of a national championship.
    • 2007 - Amazingly, the Portland Trailblazers repeat their gaffe from 1984 and draft Greg Oden instead of Kevin Durant. Oden misses his entire first season due to injuries and is eventually waived.
    Also for your consideration...

    The 10 best athletes to never win a championship in the Pacific Northwest:

    1. Clyde Drexler (After a hall of fame career in Portland, won a championship only after being traded to the Houston Rockets)
    2. Steve Largent (Played 13 seasons for Seahawks, held almost every record a receiver could hold)
    3. Randy Johnson - (Never won with the Mariners in 10 seasons, but won with the Arizona Diamondbacks after being traded.)
    4. Rasheed Wallace (Won championship in Detroit the year after being traded from Portland)
    5. Gary Payton (12 seasons with the Supersonics, won championship after being traded to the Miami Heat)
    6. Scottie Pippen (Won six championships in Chicago, none in Portland despite making the Western Conference finals in 2000)
    7. Ray Allen (Five seasons with the Sonics. Won a championship with the Boston Celtics the year after being traded.)
    8. Ken Griffey Jr. (13 time all star, #6 all time home runs leader, no championships)
    9. Alex Rodriguez (6 years with the Mariners, #5 all time home runs leader, 14 time all star)
    10. Walter Jones (9-time Pro Bowler, NFL 2000s All-Decade Team, considered one of the best linemen of all time)
    Honorable Mention: Cortez Kennedy (Seahawks), Shawn Kemp (Sonics), Shaun Alexander (Seahawks), Ichiro Suzuki (Mariners)

    So there you have it. The Pacific Northwest has the capacity to produce fantastic sports teams and franchises, but they leave a 30-year legacy of ultimate disappointment. Am I proud of the Trailblazers and the Seahawks and the Mariners? Sure. They’re my teams. But you’ll have to excuse me if I don’t hold my breath on this whole Super Bowl excitement.

  • Sharing About Overshare

    As Manton explained on his blog, we have been working on Sunlit for a long time. During that time, I can tell you that Sunlit changed and evolved from what we originally had envisioned. However, if you listen to Core Intuition, you know that we’ve been beta’ing the app for quite a while. So, when Jared Sinclair and Justin Williams announced the Overshare Kit open source replacement for iOS UIActivityViewControllers, we had a hard decision to make. We had to decide if it was worth adding new functionality at such a late point in our development or if we could live without it.

    Now, I don’t want to get into a UI/UX design war with anyone. I know that there are various, differing opinions on the look and feel of iOS 7 and I am but a humble developer. However, if you have ever implemented a custom UIActivity in iOS it is undeniable that the monochrome icons that you are forced to use look inexcusably ugly. In fact, they are so disappointingly ugly that Manton and I felt that we had no choice but to use Overshare Kit.


    I am proud of many things that we’ve built into Sunlit and our support for Overshare Kit is one of them. I love the iOS development community and Jared is a great guy so being able to support a project that he owns and cares about is a good feeling. We also had the opportunity to contribute directly to the project and a number of changes that we made for Sunlit made it back into the master branch. To me, this is the heart and soul of iOS development. It’s doing things with other people that share your value of quality.

    If you are a developer, and you haven’t checked out Overshare Kit, you really should. It allows you to create beautiful sharing activity views. It comes with some of the most common ones out of the box (Facebook, Twitter, etc) and is fairly easy to extend. Jared is also very responsive with suggestions/fixes for things, and while it’s only been around for a few months, I can’t recommend it highly enough. Oh, and if you’re having trouble integrating it, feel free to hit me up. I’d love to see more apps using it.

  • Sunlit Is For Me

    If you don’t like my new app Sunlit, I will still be happy. Don’t get me wrong. I hope you like Sunlit and it helps you do things with your photos and memories that you couldn’t do before, but if you don’t like it, I’ll still be happy. I’ll be happy because Sunlit is for me.

    Sunlit is an app that I’ve wanted for years and I’m thrilled to finally have on my phone. One of my absolute favorite uses for Sunlit is for capturing memories from smaller events. These are events that while important, don’t tend to have the significance of a wedding or Disneyland trip so the photos often never see the light of day. With Sunlit, however, I can now do something meaningful with these photos.

    Earlier this year, while Sunlit was in beta, I went to a birthday party for my wife’s mother. There were about 15 people present and the celebration went on for around 2 hours. During that time, I snapped close to 60 photos of the party. Some of these photos were blurry, some of them were decent, but a few were quite good. On our drive home, I picked the best ones and created a “Nana’s Birthday” story. I then used Sunlit to publish “Nana’s Birthday” and suddenly I had a story captured on a web page that I could share with my wife’s family quickly via an email or SMS message.

    This is an important thing to me. Before Sunlit, these photos were just photos. The effort to take them was almost more work than it was worth because they would rarely be seen by me, let alone anyone else. With Sunlit, photos turn into memories. They turn into something beautiful that I can share and that can be appreciated and loved by the people I care about.

    So, if you don’t like Sunlit, that’s ok. Sunlit is for me, but Sunlit is also for sharing so I really do hope you like it too.

  • Thanks for the Memory

    How much memory can an app allocate on an iPad 1? It seems like a trivial question. The original iPad has been in circulation for over 3 years now and developers have written thousands of apps, many of which are memory intensive. Given this, one would expect that this limit has been well documented and should show up easily in search results. As I found out recently, this is not the case.

    A few weeks ago, I needed to revisit memory consumption on an app running on an iPad 1, and I became very curious about the answer to this question. After searching various sites I was mostly coming up empty. To my surprise, it was quite difficult to find this documented anywhere. Apple's own documentation is of course great reading for any developer but remains mum on memory budgets. Perhaps the best documentation I could find was this Stack Overflow post, but it didn't seem to be definitive and as with all Stack Overflow posts, caveat emptor.

    One thing that I did find sprinkled around the various posts on the topic was a link to a tool that Jan Ilavsky wrote to measure, on a device, the points at which an app receives memory warnings and then ultimately crashes due to insufficient memory. Here's a shot of it in action. Using Jan's tool, I decided that perhaps I should help contribute to the collective information of the Internet by running some tests and documenting them.
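
    If you're curious what a tool like this boils down to, the core idea is simple. The following is just a bare-bones sketch of the general approach (it is not Jan's code): allocate memory in fixed-size chunks, touch every page so the allocation actually gets committed, and log a running total until the system steps in.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Bare-bones sketch of the measurement idea (not Jan's tool): allocate
       1 MB at a time, touch the pages so they are really committed, and log
       the running total.  On iOS the process eventually gets killed by the
       system, so the last value logged approximates the allocation ceiling. */
    int main(void)
    {
        const size_t chunkSize = 1024 * 1024;
        size_t totalMB = 0;

        for (;;)
        {
            void* chunk = malloc(chunkSize);
            if (chunk == NULL)
            {
                printf("malloc failed after %zu MB\n", totalMB);
                break;
            }

            memset(chunk, 0xAB, chunkSize);   /* force the pages to be resident */
            totalMB++;
            printf("allocated %zu MB\n", totalMB);
        }

        return 0;
    }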

    So as not to boil the ocean, I decided to analyze only the iPad family of devices. My test devices included a first generation iPad, a second generation iPad, a third generation iPad and an iPad Mini. All of the devices were upgraded to the latest version of iOS that supported them*. My procedure was to force quit all apps on the device before running the test app and then to run the test a minimum of 10 times on each device. I would then throw out the low and the high and graph the results, which I found to be both interesting and somewhat predictable:

    [Graphs: First Generation iPad (iOS 5.1.1), Second Generation iPad (iOS 6.1.3), iPad 3 (iOS 6.1.3), iPad Mini (iOS 6.1.3)]

    One of the more interesting things that jumped out at me is that Apple seems to take memory optimization seriously, and that their approach is not a one-size-fits-all method for the different devices. Notice how on the second generation iPad and the iPad Mini the OS continues to optimize in order to allow the app more room in which to operate. Contrast that with the first and third generation iPads, which appear to have a flat, non-optimizing algorithm. If memory optimization does indeed have a certain amount of device specificity to it, it would appear that Apple put less time into optimizing these iPads.

    This of course makes sense, as the first iPad created a previously non-existent category, and the third generation was the first iPad with a Retina display. You have to imagine that it was a pretty massive undertaking to introduce the Retina concept at an OS, toolchain and device level. I don't think any engineer alive would be surprised if the schedule became tight on these projects. The point here, however, is that Apple does indeed pay attention to the details, and that permeates all the way down to device-specific memory optimizations. Developers should of course never become reliant on the presence of these optimizing algorithms, but the fact that Apple puts that much attention into it is impressive.

    So, back to my original question of how much memory you can allocate on an iPad 1.  Drumroll please. Based on my results, I would say that for the iPad family of devices the following are the maximum allocations that can be performed by an app:

    First Generation iPad: 160 MB

    Second Generation iPad: 250 MB**

    Third Generation iPad: 515 MB

    iPad Mini: 275 MB***


    Now, I wouldn't be doing my civic duty if I didn't point out that these numbers include things that are entirely out of your control, such as Core Graphics objects. The test application is the most plain-vanilla of apps, so the minute you start building anything interesting, this ceiling will effectively shrink. In the end, there is simply no replacement for good old-fashioned testing and optimizing, so keep that in mind as you're setting out to make the next Angry Birds. I do find this helpful, however, because it is useful to know what my overall ceiling is so that I can spend time optimizing the things that need optimizing and spend the rest of the time building features.

    * iOS 5.1.1 on the iPad 1 and iOS 6.1.3 on the other devices
    ** The 2nd gen iPad appears to be able to go as high as 295MB
    *** The iPad Mini appears to be able to go as high as 315 MB
