Regarding the Freedom of Information “hack”

By evan on Apr 12, 2018


There is now a legal defence GoFundMe started by one of the CanSecWest organizers. Please donate what you can. This is a very important case; the government can’t be allowed to get away with this.

CBC has granted the teenager anonymity, but Jack Julian has a very good report on what happened from the teenager’s point of view.


Nova Scotia’s FOIPOP web service, much to the chagrin of reporters, has been unavailable for the better part of a week. Ironically, not much information has been provided on why. Today HRP and the Minister of Internal Affairs announced the web service had been “compromised” and a suspect was in custody. I’ll leave coverage of the subsequent political posturing to the news media, and instead focus on the actual attack and the implications this case has for security research in general.

The FOIPOP Webservice

Before I get into the details, I should explain what the provincial FOIPOP web service is and how it works.

It is a government-owned, subcontractor-run portal to pay for and receive FOIPOP reports. As a citizen or a reporter, you can pay a $5 fee and get access to government documents, from the normal course of their business, that are by and large considered to be public. In fact, there is a law, the Freedom of Information and Protection of Privacy Act, that ensures those records are available to the public, with some restrictions. Those restrictions largely surround personal information. For example, I can request information about a project, but not about a person, unless that person is me (or has given permission).

Let’s get an idea of the scope of this breach. According to Global News,

On April 6, Unisys informed the province that between March 3 and March 5 more than 7,000 documents  were accessed and downloaded by a “non-authorized person.”

The province says that 250 of the documents contain highly sensitive personal information such as birth dates, addresses and social insurance numbers.

This implies there were 6750 documents that did not contain “highly sensitive” personal information and 250 that did.

As Tim Bousquet at the Halifax Examiner reported:

Part of my routine for writing Morning File is to daily check various government websites for new activity — provincial and federal tender offers, orders in council, and the Freedom of Information Office’s disclosure log.

That last is a bit of reporting theft — we reporters can see what each other has been working on, as the FOI office posts the disclosures given to other reporters two weeks after they’ve been released. More importantly, citizens can use the site to easily make their own Freedom of Information requests, pay the $5 application fee, track their requests, get an electronic record when the information is released, and like reporters do, look at other releases.

Considering 6750 of the documents did not contain “highly sensitive” personal information, and were therefore literally published publicly by the government, that would imply to me that the actual scope of the breach is limited to 250 records.

The Attack

An unnamed 19-year-old man from Halifax (I’m calling him Mr. Big) was arrested and interrogated yesterday in relation to a “breach of a provincial government network,” and subsequently charged with “Unauthorized Use of Computer,” which carries a penalty of up to 10 years. As Deputy Minister Jeff Conrad told Global News:

“There’s no question, this was not someone just playing around”

It would appear the government is not “playing around” either, considering this charge carries the same maximum sentence as both rape and creating child pornography.

We’ve established that 250 records were “highly sensitive”; the question is how did Mr. Big retrieve them? Surely the provincial government does its best to protect “highly sensitive” documents from hackers. Right?

The Exploit

I wish I could say the exploit was advanced; that it was complicated, that it was novel or new; that the province simply had no chance against this bastion of elite hacker skills. The problem is I can’t even call it an exploit with a straight face. Ernie and Bert probably explain it best.

The way the documents are stored is simple. They’re available at a specific URL, which David Fraser, a Halifax-based privacy lawyer, was happy to provide:

Document number 1235 is stored at

Guess where document 1236 is stored? This is not a new problem. In fact, it was recognized over a decade ago as one of the top ten issues affecting web application security. All Mr. Big had to do was add one.

The software is manufactured by a company called CSDC Systems. As CBC reports:

“This is an isolated incident and no other CSDC products or customers have been impacted,”

I was able to find several American cities using the same software, and they all work the same. That would imply the system is working as designed. I believe them when they say the issue is isolated to Nova Scotia, because this is not an issue with the software but with how it’s used by the province.

These two sites are very interesting, because they use the same software, but are in a subfolder called “PublicPortal.” We’ll get back to that.

You can find them yourself: simply google “inurl:attachmentRSN”. Try it out, and you’ll notice the first few results are from none other than the province’s own portal.

I later found the same URL on the NS NDP website. The link doesn’t currently work, as the province took the system down. That being said, Google was able to index and cache several FOIPOP requests. This document specifically, number 7433, appears to have all contact information redacted, which implies it’s one of the ones explicitly posted for public consumption and representative of 6750 of the 7000 files.

To be crystal clear, Google was able to access, and continues to host, several of the same documents Mr. Big is facing charges over.

The Charges

What are the actual charges? From the Canadian Criminal Code (emphasis mine):

Unauthorized use of computer
342.1 (1) Everyone is guilty of an indictable offence and liable to imprisonment for a term of not more than 10 years, or is guilty of an offence punishable on summary conviction who, fraudulently and without colour of right,
(a) obtains, directly or indirectly, any computer service;
(b) by means of an electro-magnetic, acoustic, mechanical or other device, intercepts or causes to be intercepted, directly or indirectly, any function of a computer system;
(c) uses or causes to be used, directly or indirectly, a computer system with intent to commit an offence under paragraph (a) or (b) or under section 430 in relation to computer data or a computer system; or
(d) uses, possesses, traffics in or permits another person to have access to a computer password that would enable a person to commit an offence under paragraph (a), (b) or (c).

In order to secure a conviction, the crown would have to prove beyond a reasonable doubt that the access was fraudulent.

Just as this isn’t a new problem, it’s not the first time it’s been before the courts. There are two very high profile cases.

The first, Aaron Swartz, a co-creator of RSS, downloaded millions of academic journal articles from a server at MIT.

“Aaron’s death is not simply a personal tragedy. It is the product of a criminal justice system rife with intimidation and prosecutorial overreach. Decisions made by officials in the Massachusetts U.S. Attorney’s office and at MIT contributed to his death,” his family said.

Sadly, he killed himself while being railroaded by the US justice system.

The second, Andrew Auernheimer, was not only charged but convicted of an offence under the US Computer Fraud and Abuse Act. His exploit was almost identical to the FOIPOP issue at hand.

After being sentenced to 3 years in prison, and serving part of it, Auernheimer’s case took an interesting turn: the conviction was overturned by the US Court of Appeals.

It gets even more interesting, because according to the EFF (emphasis mine)

 Although it did not directly address whether accessing information on a publicly available website violates the CFAA, the court suggested that there may have been no CFAA violation, since no code-based restrictions to access had been circumvented.

The Defense

The question remains, was the access fraudulent?

Remember what I said about the other installations being called “PublicPortal”? And how 6750 of the 7000 records were public anyway, and how this system is literally designed for facilitating “access to information”? Looking at it further, there are no authentication mechanisms, no password protection, no access restrictions. It’s very clear that the software is intended to serve as a public repository of documents.

It’s also very clear that there were at least 250 documents improperly stored there by the province. Documents that the province had a responsibility to protect, and failed to.

Mr. Big asked for a document, and the server returned it, as it’s supposed to. Then he asked for them all, and unluckily for him, 250 of the 7000 were “confidential.” He didn’t even try to hide, apparently having been traced by his IP address.

Was that access fraudulent? It’s for the courts to decide, but I would argue no.

Had this system been audited, or looked at by any reasonably competent security professional, this would have been fixed before it became national news and an embarrassment to the province.

An interesting question to consider; was Mr. Big even the only one to discover the flaw? From Global News:

“The employee was involved in doing some research on the site and inadvertently made an entry to a line on the site – made a typing error and identified that they were seeing documents they should not have seen,” Deputy Minister Jeff Conrad told a technical briefing.

The government’s official position is that the flaw just happened to be rediscovered last week by a miscellaneous staffer. Apparently, when they raised the issue, the technical team discovered Mr. Big in the logs from a month prior.

They haven’t announced charges against the staffer, so presumably, they don’t consider that manipulation to be “fraudulent.”

Disclosure Theory

I have personally disclosed a vulnerability to the Province of Nova Scotia before, about 2 days before CBC picked up the story of a Russian website broadcasting webcam videos of children in a public school. It was surprisingly difficult to find someone to disclose it to. No one was willing to talk about it, or knew who should handle it. I eventually, via a friend at shared services, got in touch with someone who would take the report. They took it very seriously once the news broke.

To be clear, this is speculation, but it isn’t an unreasonable theory that Mr. Big disclosed the vulnerability to the province. Clumsily, maybe, but I honestly believe he tried. I don’t buy the story that the province conveniently happened to discover the breach because someone else noticed the exploit a few weeks later. The system had been in place for over a year and a half, so the timing is suspect at best.

I believe the province failed in their responsibility to protect the data and is now railroading Mr. Big to cover it up.

Since the system is literally designed to serve public documents, the solution to this problem is likely to be costly. It’s easier for the department to blame someone than take responsibility.

In Conclusion

The use of the “Unauthorized Use of Computer” statute, given the events that appear to have occurred, is appalling. The province’s strategy so far has been to cover this up, and when they couldn’t keep it under wraps, bust down some kid’s door, interrogate him, and seize his computers. The charges grossly outweigh the alleged offence, and arguably there was no offence.

I’m disgusted with both HRP and with the crown prosecutor’s office, for this display of Americanized justice.

If this kid broke the law, so did Google, to say nothing of the giant issue this creates for the information security industry. If discovering a vulnerability can open you up to the same legal liability as manufacturing child pornography, suffice it to say that nothing will ever get disclosed again. Most people aren’t about to risk 10 years in prison to let the province, or anyone else, know something’s broken. This is generally recognized as a bad thing, weakening security across the board.

Putting confidential documents on a server designed to serve said documents to the public shows a clear lack of judgement, training, and understanding of the software and processes at hand. I think it’s abundantly clear that the blame lies at the feet of the province.



Halifax: we have a problem

By evan on Aug 27, 2017

I was curious about something the other day. Little did I know how far the rabbit hole would go. I was mostly wondering about Halifax vs. “other places I’d consider living,” so I left out Calgary, Vancouver, and a few others. They still came in above Halifax.

To be clear, I’m not looking at leaving immediately. That being said, I have no doubt that the next time I’m looking for a job, it won’t be here.

Using data from Numbeo and Glassdoor I compared “Senior Software Developer” salaries vs cost of living for a bunch of major cities in Canada. We’re number two from the bottom.

It turns out one of the best places to live as a software dev is Sydney. It may seem surprising, but they have a ridiculously cheap cost of living (you can literally buy a house for under 50k; see here, here, and here.) Though their salaries are amongst the lowest… proportionally it’s on par with Kitchener-Waterloo. Also they have FTTH. If you’re working from home you’re one of the most well situated devs in the country.

The best overall by a wide margin is Ottawa. I haven’t dug into why that’s the case yet. I assume government jobs, but I know they do have their share of tech businesses (Shopify, etc.).

The equivalent salary column is the most damning. You’d have to make slightly more in Toronto and St. John’s to come out even. It’s an instant raise to move to any other city.

Sure, Toronto’s rent is higher. But when we’re paying more in taxes, in utilities, on food, and everything else… it’s a 5% difference overall. It’s actually cheaper to live in our nation’s capital. Even taking into account the cost of flying the family home every few months, it’s a massive difference in disposable income (on average) to move. I have to assume this is causing massive damage to the local tech sector.
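The “equivalent salary” idea is just a cost-of-living adjustment. A minimal sketch, using made-up index numbers rather than the actual Numbeo figures:

```python
# Hypothetical cost-of-living indices -- NOT the real Numbeo data.
def equivalent_salary(salary_a, col_index_a, col_index_b):
    """Salary needed in city B to match salary_a's purchasing power in city A."""
    return salary_a * (col_index_b / col_index_a)

# e.g. $80,000 in a city with index 100, moving to a city with index 95:
print(equivalent_salary(80_000, 100, 95))  # 76000.0 -- an effective raise
```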

To be fair, tech salaries have come up in Halifax over the past few years. I know one person who specifically cited that as a reason for staying. I’m sure there are a lot of reasons for the overall increase, I tend to attribute a lot of it to an implied cap on tech startup salaries that was dealt with a couple years ago.

Had I run these numbers 10 years ago, I’d have moved to Ontario in a heartbeat. Had I run them 5 years ago, I’d have stayed there. How many people already did? We hear about people moving “out west” to the oil fields. The new buzzphrase is “Data is the new oil” and I have a feeling nothing’s changed.

I used to joke that our largest export is young people. It’s not a joke anymore. I hear about the “tech labour shortage” all the time. No one can find people. Recruitment is impossible. Well, I think the issue is clear. There are two ways to solve it: drastically cut taxes and utility costs (ha!), or increase salaries by 10 to 20%. The latter happens to be a solution to at least 1/4 of the issues raised by the Ivany Report.

I ran the same numbers for the “average permanent hourly wages” from StatsCan for the same list of cities. That’s a reasonable proxy for “an average full-time job.” Halifax is actually at the very bottom. The rest of the list doesn’t change that much, other than Sydney going to the top. That’s the cheap housing again.

Let me know if you have comments or better sources for the data I used. I’m just presenting it as is, if there’s better data I’d love to use it.

Assigning Blame Accurately

By evan on May 09, 2017

As a follow-up to the last articles, CBC has today published a new take on the security camera incident at a Cape Breton school last week.

“We are actually going to be sending letters and reaching out the manufacturers in the very near future,” said Jennifer Rees-Jones, a senior advisor at the Office of the Privacy Commissioner of Canada.

The office wants all manufacturers to make devices that require users to change the default password when they plug the surveillance camera in. It also said the boxes the cameras come in should have strongly worded warnings about the privacy risks if the device is not secure.

These simple steps would make Canada a world leader in IoT security. They’re not without precedent though; in March of this year, a California Senator introduced a cyber-security bill. 

As WCSR reported just last month, the bill would require manufacturers to design devices in such a way that they will

– … indicate to the consumer when it is collecting information
– obtain consumer consent (presumably through some form of user interface) before the device collects or transmits information

CBC spoke with experts again:

“Some of them have very strong security. Some have no security at all. Some have very weak and hackable security settings,” said Robert Currie, director of the Law and Technology Institute at the Schulich School of Law at Dalhousie University.

Tom Redford of Wilson’s Security in Dartmouth said … “If it’s just left at factory default, you’re leaving yourself susceptible to being hacked,” he said.

The default is to transmit, with no password and no authentication. It’s working as designed. To call it a “hack” implies those viewing the public feed are at fault.

Redford suspects a lack of passwords may be to blame.

The lack of passwords is an issue, and was certainly relevant. The question is why were there no passwords? The user manual for the device in question recommends setting a password and protecting the video feed.

If the device had defaulted to password protected, as the Office of the Privacy Commissioner of Canada requested in 2015, this may not have been an issue.

Nova Scotia’s privacy commissioner and the Cape Breton-Victoria Regional School Board have launched investigations into how the security camera was left open to the internet.

School officials have not revealed the results of their inquiry, but are calling it an “isolated incident.”

From discussions with another school board, it appears likely that a hole was explicitly opened in the school’s firewall to allow it through. That would imply there was a conscious decision to make the cameras available publicly.

I strongly recommend reporters dig a bit deeper on this issue. For example:

  • Who requested the cameras?
  • For what purpose?
  • Who requested they be available publicly?
  • Did the IT department read the manual, and make appropriate recommendations?
  • If they did, were they overruled, and if so, by whom?
  • Whose responsibility is the security of the devices attached to the network?

The Russians are Coming

By evan on May 08, 2017

Last week several webcams at a Cape Breton school were discovered to be broadcasting publicly, indexed by a search engine. The camera was taken down after the incident was reported, but CBC’s coverage by Susan Bradley and Jack Julian leaves much to be desired.

(Privacy Commissioner) Tully said passwords need to be encrypted and the length of time images are kept should be limited so they are less likely to be accessed.

The recommended practice is to hash passwords, not encrypt them. That being said, this advice is inapplicable to the issue at hand, and doesn’t address the fact that the device was using the default password, as evidenced by the screenshot that says “change password.”
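For reference, “hashing” here means a salted, deliberately slow one-way function, something like this minimal stdlib sketch (the iteration count is illustrative):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=100_000):
    """Store a random salt and a slow PBKDF2 digest -- never the password."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password, salt, digest, iterations=100_000):
    """Recompute the digest and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)
```

Unlike encryption, there is no key that decrypts the stored value back into the password.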

Charlene Chaisson, a parent of two children at the school, spoke to CBC:

“All I can add is that although it’s my son in the image and it’s alarming, I don’t blame anybody for it happening. Things get hacked all the time and hopefully now the cameras are secure.”

Nothing was “hacked.” One can find default passwords for most devices online in aggregated password lists, or in the user manual for the device. In fact, the manual for the camera in question details how to password protect the camera feed, and explicitly says to set a password for security reasons.

CBC did talk with a cybersecurity expert:

Daniel Tobok, a cybersecurity expert in Toronto, said the problem of webcam images being streamed around the world is common…

… He blames the way the webcams are connected directly to the internet.

With the ever-impending migration to IPv6, more and more devices will be connected to the internet. The issue is not connecting devices to the internet. IP security cameras are designed to connect to the internet; restricting one with a firewall would have resolved this issue only by limiting the exposure to the school’s own network at best, or rendering the device inoperable at worst.

Not once in the article did the CBC or the cyber-security expert mention ‘changing the default password’ on devices, which in 2014 Infoworld ranked one of the Top 10 Colossal Security Mistakes. The site’s own FAQ does tell users both how to fix their camera, and provides information on removing their camera from the site.

Q: How to remove my camera from this site
A: If you want to leave your surveillance camera public accessible but want to remove it from this site send the URL of your camera to email from contacts section. But remember that your camera still will be available to all internet users that use surveillance camera search software and similar sites. The only solution to make your camera private is to set up a password!

As of May 4, it was no longer linking to the Cape Breton camera.

Kris Klein, a privacy lawyer in Toronto, had this to say:

“You don’t know who was looking at them,” Klein said. “It’s not to say that they were necessarily doing anything wrong, it’s just the fact that they had their own personal image broadcast and made available to the public at large via these shady characters.”

Appeal to emotion aside, the feed was available only because the school board’s own IT staff didn’t read the manual. With accurate reporting of the issue, one realises the school board employees that set up the camera are the “shady characters” who made the broadcast public.

Trusting people on the internet.

By evan on May 29, 2016

An issue.

Who do you trust on the internet? It’s a simple question, with a horrendously complex answer.

Some of the key underpinnings of the internet, like DNS and Certificate Authorities, are trust based. You believe a site isn’t impersonating the real Google because GeoTrust vouched for it. GeoTrust believes it’s Google because someone submitted a request from the right domain (yes, that’s a bit of an oversimplification, but accurate in most cases).

You can’t really trust DNS, at least not without something like DNSSEC. It also leaks a lot of information, like a list of every single website you’ve ever visited. Worst case scenario, someone could man-in-the-middle a Certificate Authority’s DNS provider and get certs for anything by ‘proving’ that they control the domain. In short, yes, the CAs that you trust to vouch for people themselves trust DNS, something with no encryption, validation, or verification in any way, shape, or form.


This would be fine if all the big players were trustworthy. You can trust the CA because a vendor vouched for them (or their friends); And of course, vendors are always trustworthy. Yes, that is sarcasm.

Faith might be a better word than trust.

After the BlueCoat news, I started thinking about how the internet would look without faith. That is, instead of the implicit belief that any given agent/actor/person is trustworthy, simply have them prove whatever it is you want to know. There’s also the pesky issue of expiry dates being abused for revenue generation, but I digress.

A good place to start thinking about this is Tor hidden services. It’s an entire encrypted network, without any external DNS provider, without any external CA, but it still allows you to prove you’re talking to someone, and that you’re talking to the same someone every time. The problem is that Tor is anonymous by design, which is awesome if you want to buy drugs on the internet (that’s a fair statement, if incomplete), but for any tangible transactions I need to *prove* who the other person is, and hidden services explicitly prevent that (unless you’re at https://facebookcorewwwi.onion, which is a whole other issue). Tor also has root servers which, though very resilient, are still root servers.

The end goal is really having verifiable evidence: proof that a given domain is served from a given address, and proof that the domain actually belongs to, say, ‘Alphabet, Inc.’ In the current faith-based model, DNSSEC provides the former, and Certificate Authorities provide the latter.

A solution.

The question becomes, how do we do that? The answer is blockchains, and it’s very simple. Every block is a DNS record at a point in time, contains a public key for https (this would be akin to HPKP), and is signed by the organization with the same keypair. When you look up a DNS record (a block), you search from most recent to oldest and stop when you get a hit. Every DNS record contains a hash of the one before it, making it infeasible to falsify any records.
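The lookup and verification logic described above can be sketched in a few functions. The field names and chain layout here are illustrative assumptions, and the per-organization signature is omitted for brevity:

```python
import hashlib
import json

def record_hash(record):
    """Stable hash of a record's contents."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def append_record(chain, name, address, https_pubkey):
    """Each new record commits to the hash of the previous one."""
    prev = record_hash(chain[-1]) if chain else None
    chain.append({"name": name, "addr": address, "key": https_pubkey, "prev": prev})

def lookup(chain, name):
    """Search newest-to-oldest and stop at the first hit."""
    for record in reversed(chain):
        if record["name"] == name:
            return record
    return None

def verify_chain(chain):
    """Tampering with an old record breaks every later 'prev' link."""
    return all(
        chain[i]["prev"] == record_hash(chain[i - 1]) for i in range(1, len(chain))
    )
```

Updating a domain is just appending a newer record; lookups see the newest entry, and rewriting history would require re-forging every subsequent hash.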

In short, the bastard lovechild of DANE and namecoin.

The only real downside of this is losing vanity domains. I’m not sure that’s a real problem, given the recent explosion of TLDs and the massive abuse by domain resellers. Browsers are starting to remove the URL from the address bar anyways. The obvious loss is writing domains down on business cards, but that’s a solved problem with QR codes. It has no bearing at all on links or bookmarks. There’s always the potential for adding an alias, but that becomes subject to cybersquatting. But then, that’s no worse than what we have now.

In addition to the privacy afforded by local DNS lookups, we also effectively have local X.509 certificates. A client doesn’t need to ask for a cert in the clear; it already has one. (Certificate requests in the clear are actually a thing, and have been used to track down hidden service operators on the public internet when they share a keypair.) We can skip the first two parts of the SSL handshake and go right to key exchange.

This becomes practically impossible to spoof. There are simply no records over the wire to MITM, every record is verifiable, and the client simply can’t connect to the server without the right record. Zooko’s triangle aside, this is nearly a holy grail.

This is also impossible to tamper with, at least, without the right keypair. Politically driven domain seizures would be a thing of the past.

The only element of trust left is on a site-by-site basis; that is, do you trust the operator? That’s simply not a technical issue.

Skipping the SSL handshake breaks SNI, so we need to address that as well. SNI has other issues too, like the unintended side effect of allowing a third party to see what website you’re requesting. Since this system values authenticity AND privacy, that needs to be dealt with. The best method I’ve come up with (and this could also be applied to the internet as it exists now) is to store an SNI public key in the DNS record. The client then encrypts the domain it’s requesting with that key, and a salt. The salt is important; otherwise there’s still a 1:1 relationship between the raw domain and its encrypted equivalent. This solves the privacy issues with SNI while also ensuring compatibility with alternative DNS schemes.
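The salt’s role can be shown in a few lines. A real implementation would encrypt the name with the public key from the DNS record; a hash stands in here purely to illustrate the deterministic-mapping problem:

```python
import hashlib
import secrets

def blind_sni_unsalted(domain):
    # Deterministic: the same domain always maps to the same value.
    return hashlib.sha256(domain.encode()).hexdigest()

def blind_sni_salted(domain):
    # A fresh random salt makes every request look different on the wire.
    salt = secrets.token_bytes(16)
    return salt, hashlib.sha256(salt + domain.encode()).hexdigest()

# Without a salt, an observer can precompute a dictionary of
# domain -> blinded value and reverse the mapping:
assert blind_sni_unsalted("example.org") == blind_sni_unsalted("example.org")

# With a salt, two requests for the same domain are unlinkable:
s1, c1 = blind_sni_salted("example.org")
s2, c2 = blind_sni_salted("example.org")
assert c1 != c2
```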

Open to comments or thoughts, this would be a fairly radical change to the basic underpinnings of the internet, but a necessary one in my opinion.


Makerspace Survey Results

By evan on Mar 18, 2016

I put together a survey and sent it to the HMS mailing list, Reddit, LinkedIn, etc.

As of writing there are 81 responses, one of which is an obvious troll. The raw data is available here.

TL;DR: a space will never work for *everyone* but it can work for a lot of people.

Some key points:

  • Tech is far more popular than fine arts
  • 70% want there to be about 1000sqft, 80% want 1500sqft. The other 20% want more.
  • Dartmouth requires fewer people (~40) than Halifax or Burnside do (~53) to be sustainable
  • 70% of respondents have a car
  • 83% of respondents are over 22 years old (most students are <22)
  • There’s a clear willingness to pay extra for perks (like a locker)
  • People are interested in drop-in fees, like a gym
  • 9-5 Monday to Friday is not entirely necessary (and thus creates opportunities for shared space with business)
  • 20% of respondents want studio space, but want it at unsustainable rates.

The Actual Space

The first few questions were designed to find out what people wanted to use a space for, and how much space they think is needed. There’s a reason I didn’t specifically tie size to location, which I’ll get back to in a bit. What people think is needed and what is actually needed are two entirely different things. The former I can find out with a survey. The latter depends on what people want to use it for.

Planned use
72% of 81 respondents want 3D printing and/or CNC. We’ll call that ‘Automated Making’ for lack of a better phrase.
65% want Soldering and Electronics.
58% want woodworking
44% Welding / Metalworking

on the other end of the scale,
17% want fine art and textiles,
8% want ‘other’; a few of these included sewing.

How Often?
38% would use a space weekly,
33% biweekly,
23% monthly.

Just a few would use it daily.

Time of day
90.1% want access on the weekend during the day,
81.5% want access through the week in the evenings.

Compare that with
34.6% who want access 9-5 Monday to Friday.

This implies a business partnership or colocation could work.

Studio Space
One question that a board member had me add was ‘would you pay for studio space?’

21% of the 43 people that responded said yes. Of those, about half said what they would pay and for what, and answers varied wildly, but were around $0.50 to $1 per square foot (or about 5-10% of actual commercial rents)


Lockers
50% of respondents would pay $10 a month for a locker;
16% would pay $20;
8.6% would expect one to be included.

Membership
43.2% of respondents would pay $25/month
33.3% would pay $50/month
6.2% would pay $100/month

Many of the write-in responses were $10/visit. Based on these numbers, and on the use numbers, tiered access is feasible. I would say modelling it after gym memberships isn’t a horrible idea.

Assuming people were charged exactly that, we can consider the average base income per person to be $41.90.
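That figure can be reproduced, to within a few cents given the rounded percentages, as the expected revenue per member across the membership tiers plus locker add-ons:

```python
# Expected monthly revenue per member, from the survey percentages above.
membership = 0.432 * 25 + 0.333 * 50 + 0.062 * 100  # tiered dues
lockers = 0.50 * 10 + 0.16 * 20                     # locker add-ons
average_revenue = membership + lockers
print(round(average_revenue, 2))  # 41.85 -- vs. the $41.90 quoted above
```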


Demographics
These questions were added early on, but didn’t capture the first few respondents.

81.4% of respondents were male, 17.4% female, 1.4% were other.

Their age ranges are
15% 16-22,
55% 22-35,
28% 35-65,
0% 65+

This is entirely unsurprising as most of the 22-35 crowd are (statistically) living in apartments and have no space to work on projects or hobbies.

Only 10% of respondents have children that would participate. Of those, 7.5% are in elementary school.

50% of respondents have a university degree.
25% have a college or trades diploma
19.7% have some post secondary.



74% of respondents have a car.
18% of respondents rely on the bus.

The location question was asked as ‘where are you willing to go?’

81% would travel to the peninsula
58% would travel to Dartmouth
53% would travel to Burnside.
49% would travel to Clayton Park
46% would travel to Bayers Lake

Sackville, Spryfield, and Cole Harbour are all around 20%.

It definitely seems to follow commute patterns, and is clustered. If someone will go to Cole Harbour they’ll almost always go to Dartmouth, and if someone will go to Bayers Lake they’ll almost always go to Clayton Park. People without cars generally choose the peninsula and Dartmouth. Nobody without a car chose Burnside.

Looking at only those who rely on transit, 65% are willing to go to Dartmouth.

I will admit, the data does show a measurable preference for the peninsula. Now the question is, is it feasible?

I’ll look at the numbers needed to break even on gross space costs here; suffice to say that’s only part of the story and doesn’t include internet, insurance, power, etc. This chart makes many assumptions but is a good starting point. It’s based on the number of people who *want* a makerspace, weighted by who would travel to each location: if 100 people want one and 19 of those wouldn’t go to Halifax, I weight the Halifax results to 81%. As before, the average revenue per person is $41.90.

In short, each cell is the bare minimum threshold at that size and location for a viable makerspace.

Sizes (and the share of respondents wanting that size): 500 sqft (5%) · 750 sqft (29%) · 1000 sqft (69%) · 1500 sqft (80%) · 2000 sqft (100%)
Each cell below is monthly rent / people needed to break even.

Peninsula (~$22/sqft/yr; 81% would go): $916 / 27 · $1375 / 41 · $1833 / 54 · $2750 / 81 · $3666 / 108
Dartmouth (~$15; 58%): $625 / 25 · $937 / 30 · $1250 / 41 · $1875 / 62 · $2500 / 82
Burnside (~$14; 53%): $583 / 26 · $875 / 40 · $1167 / 52 · $1750 / 79 · $2333 / 105
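Most cells in the chart can be reproduced with a short calculation (my reconstruction of the chart’s assumptions: rent is quoted per square foot per year, and each member’s revenue is discounted by the fraction willing to travel to that location):

```python
AVG_REVENUE = 41.90  # average monthly revenue per member, from the survey above

def break_even(rent_per_sqft_year, sqft, willing):
    """Monthly gross rent, and the number of interested people needed to
    cover it when only a fraction of them are willing to travel there."""
    monthly_rent = rent_per_sqft_year * sqft // 12
    members = round(monthly_rent / (AVG_REVENUE * willing))
    return monthly_rent, members

print(break_even(22, 500, 0.81))   # Peninsula, 500 sqft  → (916, 27)
print(break_even(14, 2000, 0.53))  # Burnside, 2000 sqft → (2333, 105)
```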


Interestingly enough, even after weighting for ‘who would go there’, Dartmouth is feasible with fewer people.

Now, no one wanted a 500sqft or a 750sqft makerspace. Based on the intended usage, 1000sqft is probably fine, and more is better as it grows.

Based on these numbers, the same space would need about 80% as many people to be sustainable in Dartmouth as it would on the peninsula or in Burnside. Realistically, many of those willing to travel to Burnside are probably also willing to go to north end Dartmouth, especially along Windmill Road and near the new bridge. To be fair, the Halifax peninsula is more desirable, but only once we’re over about 80 paying members, and only if a perfect spot were to come up, with parking for the ~75% with cars and substantial floor space.



The other part of this survey was about retail sales, looking at what people would be willing to spend on components / as markup.

Having quality parts nearby was more important than price; in fact, people are willing to accept a reasonable markup, from as high as 100% on smaller items down to 25% on more expensive items. There is definitely an opportunity for a space to derive revenue from component and supply sales. 94% of people were willing to go pick up parts, be it at a makerspace or elsewhere. More people buy things locally than order overseas, and nearly as many as order domestically. 23% of people make mail-order purchases.

A question that *should* have been asked, “how much do you generally spend on components and raw materials each month”, unfortunately wasn’t, so I can’t derive an estimated revenue.

RIP Twitter; 2006 – 2016

By evan on Feb 10, 2016

It has been reported that ~~Oceania~~ Twitter has appointed a ~~Ministry of Truth~~ trust and safety council.

As if that’s not ~~ungood~~ bad enough, it’s spearheaded by none other than Anita Sarkeesian, of Feminist Frequency, well known for her ~~fair and impartial views~~ utter bullshit.

Jack must be insane.

Twitter will be tumblr within the year. A psychotic echo chamber where trigger alerts and perceived offense trump reality. Where ~~mental disorders~~ foxkin are celebrated. Where objectivity and reason are ~~unallowed~~ strongly discouraged.

At the risk of invoking Godwin’s law, I’ll leave it on this note.


Factoring large semi-primes

By evan on Jan 31, 2016

tl;dr: a friend of mine mentioned this problem over beers, and neglected to tell me that it’s considered impossible (thanks). Me being me, I gave it a shot, and appear to have devised a legitimate attack on large semi-primes that is not dependent on their size.

Long story short: it’s easy to multiply x * y, but much harder to get x and y back out of the answer. At its most basic level, 7 * 5 = 35. Conversely, 35 = x * y; solve for y. There can only be one valid answer if both factors are primes, and when both factors are primes the product is called a ‘semiprime’.

It’s the asymptotic complexity. In short, it sucks to brute force: easy for a 2 digit number, not so much for a 200 digit number. That’s why no one uses 2-bit RSA keys. This mathematical asymmetry is the entire basis of RSA encryption.

Before we continue, let’s declare a few variables: p is the first prime, q is the second, and pq is their product (the semi-prime).

The most obvious speed-up is limiting the search space.

  • p and q will never be even numbers.
  • q will always be greater than the square root of pq, and p will always be smaller (taking p ≤ q).

At this point the search space is smaller, but still huge. Taking some time to refactor the formula (I skipped a step or two; I’ve lost my original notes, though I’ll try to recreate them at some point):

x = sqrt(pq+(n^2))
p = x – n
q = x + n 

We can simply increment n until x comes out to an integer, then confirm that p*q == pq.

A few examples;

x = sqrt(35+(1^2)) = 6
p = 6 – 1 = 5
q = 6 + 1 =  7
Then confirm the result: 5*7 == 35, and we’re done!

x = sqrt(77 + (1^2)) // not an integer; try the next n.
x = sqrt(77+(2^2)) = 9
p = 9 – 2 = 7
q = 9 + 2 =  11
Then confirm the result: 7*11 == 77, and we’re done!
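The loop described above fits in a few lines of Python (my own rendering of the formulas; `math.isqrt` is used to test for perfect squares):

```python
import math

def factor(pq):
    """Increment n until pq + n^2 is a perfect square x^2;
    then p = x - n and q = x + n, since pq == x^2 - n^2."""
    n = 0
    while True:
        x = math.isqrt(pq + n * n)
        if x * x == pq + n * n:
            return x - n, x + n
        n += 1

print(factor(35))  # → (5, 7)
print(factor(77))  # → (7, 11)
```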

We’re at this point effectively looping over half the difference of the primes, which is a much smaller space than every number ever. This algorithm can factor a 50 digit semiprime in milliseconds on a netbook, and it is fastest when the two primes are closest together.

Since we know both x and n are integers, we can run the search the other way. Because x changes more slowly than n, reversing the search gives a speedup.

We can rearrange this as

x = ceil(sqrt(pq))
n = sqrt(x^2 - pq)

For pq = 35:

x = ceil(sqrt(35)) = 6
n = sqrt(6^2 - 35) = sqrt(1) = 1

p = 6 - 1 = 5, q = 6 + 1 = 7.
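The reversed search, again as my own sketch: start x at ceil(sqrt(pq)) and climb until x² − pq is a perfect square.

```python
import math

def factor_reversed(pq):
    """Increment x from ceil(sqrt(pq)) until x^2 - pq is a perfect
    square n^2; the factors are then x - n and x + n."""
    x = math.isqrt(pq)
    if x * x < pq:
        x += 1  # round up to ceil(sqrt(pq))
    while True:
        n = math.isqrt(x * x - pq)
        if n * n == x * x - pq:
            return x - n, x + n
        x += 1

print(factor_reversed(35))  # → (5, 7)
```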


Obviously, the bigger the numbers are, the greater the difference between them can be. Ironically, it’s recommended to use values of p and q which are relatively close together, so this may be a valid attack against an RSA key.

It’s non-deterministic, and can be parallelized (give each machine a 100,000,000,000,000,000 digit range to search).

The only further speedup I see is if there’s a way to solve for the next integer value of n. The limiting factor here is that it’s effectively approaching a linear search as x approaches infinity. It’s extremely fast until the slope straightens out.

I’ve yet to come up with something, so I’m throwing this algorithm out there to see if anyone else can take it that last step.


UPDATE: Looking into it more, it’s a massive speedup in a small field but not a large one, as it approaches a linear search over the difference between the primes as x approaches infinity. That’s fine for a 50 digit semiprime, but not a 200 digit one.

Tor Rate Limiting

By evan on Jan 31, 2016

If you know much about Tor, you know that all connections to a hidden service come from localhost. Even though it’s old news (I first heard about this a year ago), it has come up in the news recently.

It reminded me of a proof of concept I wrote for rate limiting hidden services, or alternatively any service where you can’t distinguish users. Basically, you have clients prove they did some amount of work (and therefore spent a certain amount of time) between requests.

Factoring a semiprime, for example. It’s slow, which is why it is the basis of RSA encryption. More on that in the near future 😉
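The linked source isn’t reproduced here, but the idea can be sketched as a challenge–response: the server hands out a semiprime sized to cost the client some CPU time, and won’t serve the next request until the client returns the factors. All names and sizes below are my own illustrative choices, not the actual PoC.

```python
import secrets

def _sieve(limit):
    """All primes below limit (Sieve of Eratosthenes)."""
    is_p = bytearray([1]) * limit
    is_p[0:2] = b"\x00\x00"
    for i in range(2, int(limit ** 0.5) + 1):
        if is_p[i]:
            is_p[i * i :: i] = bytes(len(range(i * i, limit, i)))
    return [i for i in range(limit) if is_p[i]]

# Demo-sized primes; a real deployment would tune these to the
# desired delay between requests.
PRIMES = [p for p in _sieve(20000) if p > 10000]

def make_challenge():
    """Server: issue a semiprime the client must factor before its next request."""
    return secrets.choice(PRIMES) * secrets.choice(PRIMES)

def solve(n):
    """Client: brute-force the smallest factor -- this is the enforced work."""
    f = 3
    while n % f:
        f += 2
    return f, n // f

def verify(challenge, p, q):
    """Server: checking an answer is cheap, unlike finding it."""
    return 1 < p < challenge and p * q == challenge
```

Checking is a single multiplication, so the server stays cheap while every anonymous client pays the factoring cost per request.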

Full source at github

Update (Feb 15): There’s now another version of this concept available, which operates more similarly to bitcoin.


By evan on Mar 13, 2015

Was trying out a few XMPP servers recently. Prosody was the easiest to set up by far.

Five minutes, start to finish, using the same certs as my Apache server.