Hacker News | gigantor's comments

The existence of this useful and educational tool highlights how ultra-saturated today's venture capital model has become: it almost comically suggests that anyone on the street can, and should, get in on it.

VCs and cryptocurrency feel as if they have now completed their convergence into nothing more than the latest sophisticated, legal form of Ponzi scheme, with those at the bottom of the pyramid left holding the bag. This isn't to knock the usefulness of this article and its accompanying tool; it just shows how far we've strayed from putting the focus on the builders and visionaries these funds must latch onto.


The state's problem with prosecuting Ponzi schemes has been publicly enabling everyone else to know exactly which pieces were and weren't illegal:

Which disclosure paragraph was missing in the 80 pages of disclosures

Which notice wasn't filed with a securities regulator

And then simply doing the exact same actions compliantly

There is only so much the state can do for investors that want to move money


> highlights how ultra-saturated today's venture capital model has become: it almost comically suggests that anyone on the street can, and should, get in on it.

If you use twitter (don't recommend) you'll see this opinion is widely held. People self-describe as "VCs" with funds of less than a million and/or who provide "funding" of less than 20K. Crazy times!


This is likely an unpopular opinion, but Venture Capital needs to disappear and be replaced with a far better model.

While VC served its purpose well back in the heyday of FAANG and other groundbreaking companies, in today's era it has become little more than a legal Ponzi scheme designed to leave series-D and IPO stock purchasers holding the bag after companies have been artificially grown at unsustainable rates. Companies that grow more slowly and have a chance to become sustainable would benefit even the most capitalist of societies, instead of hundreds of billions dumped into ongoing pump-and-dump cycles, which (as the article shows) inevitably lead to a crash.


You meant Felix Dennis, but to your original point, "How to Get Rich" (and its follow-up "The Narrow Road") drive home the same point: maintain full control at all times. That includes term sheets with clauses saying investors will bring on an "independent" board member to settle performance-related issues (but really, that member represents the investors). Fight tooth and nail for every equity point you're giving out. For a good case study, see Mark Zuckerberg, who would have been voted out long ago had he not aggressively maintained control.


There is no doubting the legacy of Objective-C (especially given the high likelihood you are reading this post on a mobile device, using an app written in Objective-C), but to truly appreciate Brad's legacy, I'm curious about the appeal of using Objective-C.

Having developed only one small iOS app in Objective-C, I was mostly turned off by its overall verbosity, particularly the NS prefixes. Hence, I ask this question on behalf of myself and others who did not appreciate the language and did not give it a proper chance: what did I miss, and what are its top appeals?

Nevertheless, Rest In Peace to a pioneer.


In the context of the time, C++ didn't exist yet. Objective-C was actually introduced just prior to C++, and both languages were effectively solving the same problem in different ways: C was the dominant language, and both language designers were trying to graft the OOP paradigm onto it.

Objective-C is a thin layer on top of C, adding Smalltalk-inspired object support. That's pretty much all there is to it: C, with some new syntax for objects. In the context of a world where C is the norm, that's pretty appealing. This is before Java existed, too.

The "NSWhatever" stuff, as far as I'm aware, isn't part of the language. That's all in the frameworks Apple/NeXT developed for Objective-C. (Note that the base object is called Object, not NSObject, and the integer class is Integer.) NSString is probably named that way because Objective-C doesn't include a string class (nor does C, as a string is just an array of bytes until you write a wrapper to get fancy) and NeXT made one. They were just namespacing the NeXTSTEP classes.


> Note that the base object is called Object, not NSObject, and the integer class is Integer.

Objective-C actually doesn't require a base object (although these days it essentially does), but Object and NSObject are both examples of root objects. IIRC, reference counting is not in Object and was a NeXT invention.


I'm curious--what happened to Objective-C in that fight with C++? Why didn't people go for its simplicity?


As usual, platform languages win.

C++ was born at Bell Labs and quickly integrated into their workflows as C with Classes started to get adopters.

This raised the interest of the C compiler vendors, so by the early 90's, all major C compiler vendors were bundling a C++ compiler with them.

Additionally, Bjarne became convinced that C++ should follow the same path as C and be managed by ISO, so the C++ ARM (Annotated Reference Manual) was written, which is basically the first unofficial standard of the language.

So C++ had ISO, the same birthplace as C, and the love of the C compiler vendors, while Objective-C was initially the work of a small company and later on owned by NeXT.

So, naturally Apple, Microsoft, IBM decided to go with C++, and everyone else followed.

Here is an anecdote for Apple fans: classic Mac OS was written in Object Pascal plus assembly. When market pressure came to adopt C and C++, MPW was born, and eventually a C++ framework that mimicked the Object Pascal one (there is a longer story here though, ending with the PowerPlant framework).

Copland was based on a C++ framework, and Dylan team eventually lost the internal competition to the C++ team regarding the Newton OS.

Apple was one of the major OS vendors that never cared much about C for OS development; only the NeXT acquisition ended up changing that. And even then they weren't sure about C and Objective-C, hence the Java Bridge in the first versions.


> Apple was one of the major OS vendors that never cared much about C for OS development

It's true that MacOS Classic kept providing Pascal headers for most of its APIs for a long time (I don't recall whether they ever stopped), but internally they started switching to C by the late 1980s (as an external developer, I could tell by one bug which would never have made it through a Pascal compiler, but was typical of the kind of bugs that wouldn't get caught by a K&R C compiler), and by the late 1990s it was all C and C++, just with Pascal calling conventions for all public-facing APIs. In my time at Apple, I never encountered a single line of Pascal code.


I bet it was actually C++ with extern "C", which was my point, especially given the MPW and PowerPlant frameworks.

I never knew anyone doing bare bones C on classic Mac.


There was a lot of extern "C" (and there still is a lot of that), but there also was a lot of extern "Pascal" back then, I seem to recall.

There was quite a bit of regular C in classic MacOS, though there was also a good deal of C++. You're right that MacApp (which I think is what you're referring to with "MPW", which was an IDE) and PowerPlant were written in C++, but I'm not talking about the clients of the MacOS APIs; I'm talking about the implementations of those APIs.


Fair enough, but are you sure it wasn't C++'s C subset?


In 68K times, the two compilers were quite different; the C++ compiler was CFront at one point, and insanely slow. It would have taken a real masochist to compile C with a C++ compiler. I can't guarantee that nobody at Apple ever did that, but the file suffixes were distinct. In that situation, IDEs usually make the distinction automatically, and it's not hard to write Makefiles to invoke the right compiler.


Ok, thanks for the overview.


I tried out both Objective C and C++ in 1988, when neither were popular though C++ was more talked about.

What I remember was that with Objective C you needed to track all intermediate values and release them, so you couldn’t write an expression like [[objectA someMessage] anotherMessage] - you had to capture the intermediate in a variable so you could release it at the end.

So this was annoying and I didn’t like Objective C at the time. (25 years later I wrote several iOS apps in it)

C++ let you manage memory and temporary values through constructors and destructors, which was much more appealing, though pre-templates it was quite constrained.


Part of it was licensing. Probably more of it was the personalities involved at e.g. Microsoft or SGI.


Objective-C loses to C++ on performance if you really start exploiting OOP a lot. The fact that you can swizzle methods in ObjC says a lot about the "weight" of the underlying implementation ("it's all messages") compared to C++.


The fact that you can swizzle methods also says a lot about its power and flexibility. When Steve Jobs was at NeXT, he was quoted numerous times bashing C++ as having 'dead objects' while pointing out that ObjC objects were 'alive'. One seldom needs to make use of swizzling, but when you do need it, it's an awesome capability.

As prabhatjha pointed out in another comment in this thread, swizzling was used to automatically capture networking calls just by adding our framework to your app and initializing it. You could then log into our app's web-based dashboard and see stats about networking calls (counts, latency, errors, etc.). This simple and elegant solution would not have been possible with C++. We also supported Android at the time (Java), and the developer was required to change his code to call our networking wrapper calls to get the same tracking for their Android apps.


Absolutely. I've used swizzling myself to fix issues with audio plugins' GUIs (to limit how fast they are allowed to redraw themselves). It's very clever and sometimes very useful.

But the ability to do that comes with certain costs, and performance is one of them. The fact that these "methods" are dynamically dispatched sometimes matters, and you can't change that any more than you can swizzle in C++.


> This simple and elegant solution would not have been possible with C++

Actually, it would, but not via any feature of the language. You can use features of the linker to accomplish this (particularly LD_PRELOAD). It's not the same thing, really, but it felt worth mentioning for the record.


In our case, the LD_PRELOAD approach would not have worked because this was on iOS devices where you can't set that variable. However, I do appreciate you mentioning it because it too is a powerful mechanism that enables some creative and non-invasive solutions in some cases.



> Having developed only one small iOS app with Objective-C code, I was mostly turned off by its overall verbosity in the context of NS prefixes.

This is actually a blessing, because NS-style name prefixes are a simple approach to naming that keeps you humble. If you let programmers have namespacing, they will invent enterprise software frameworks where every class is six layers deep in a namespace of random tech buzzwords they thought up.

> Hence, I ask the question on behalf myself and others who did not appreciate the language and did not give it a proper chance... what did I miss and what are its top appeals?

It implements message-based programming, which is "real" OOP and more powerful than something like C++, where OOP just means function calls where the first parameter goes to the left of the function name instead of the right.

In particular it implements this pattern: https://wiki.c2.com/?AlternateHardAndSoftLayers which is great for UI programming and lets you define the UI in data rather than code. Although iOS programmers seem to like doing it in code anyway.


This plague of object wiring in code is pervasive in the Java world as well. The joy of declarative late binding and decoupled objects at compile time seems to be very lost on the vast majority of programmers.


> It implements message-based programming, which is "real" OOP

No, it's message-based programming, which is a very powerful and useful tool. It's not the one true inheritor of the fundamental OOP concept.

OOP wasn't defined by "you send messages to objects", it was defined by the idea that objects had their own semantics which in turn constrained/defined the things you could do with them. Some OOP languages implemented "doing something to an object" as "send it a message"; some didn't.

ObjC is in the former group; C++ is in the latter.


Well, considering that Alan Kay coined the term...



Alan Kay didn't invent object oriented programming though he was instrumental in expanding its scope and usage. Simula 67 was a big influence on Kay's work, and as he has noted, the "revelation" that "it's all messages" was a huge one.

But Smalltalk is just one OOP language, not the only one and not even the original one (though we could argue about how much Simula was or was not OOP, and I'd rather not).

The message from Kay you cite is strictly about Smalltalk & Squeak:

> The big idea is "messaging" - that is what the kernal of Smalltalk/Squeak is all about

He doesn't say "what OOP is all about".


Actually he did invent Object Oriented programming.

Simula was an inspiration, but was never considered object oriented. After Kay came up with the concept, Simula was identified as part of the historical background.

“I invented the term object oriented, and I can tell you that C++ wasn't what I had in mind.” —Alan Kay.


Dahl & Nygaard would not agree with you:

https://www.sciencedirect.com/science/article/pii/S089054011...

http://kristennygaard.org/FORSKNINGSDOK_MAPPE/F_OO_start.htm...

Kay came up with the term "object oriented programming", but he has made it very clear that what he had in mind has little relationship to what most people mean by that term today.

If you want to give Kay veto power over the correct application of the term, be my guest but please be consistent across all other cases where a word or phrase changes its meaning over time.

Contemporary OOP pays only lip service to Kay's ideas (something he would be the first to say), and is only tangentially influenced by Smalltalk at this point (Objective C probably being one of the few widely used counterpoints to that).


Those links don’t support your claim about Dahl and Nygaard.

They are retrospectives written by other people talking about their work, not papers by Dahl and Nygaard themselves.

Just because you can find some people who are making the same retrospective mistake you are, doesn’t change the history.

OOP was defined by message passing.

What you are calling ‘contemporary OOP’ is a cargo cult based on a failure to appreciate that. The problems with this are increasingly acknowledged.

If you want to say OOP is class based programming, be my guest, but your statements about the history that I responded to are simply false.


The second link is titled:

> How Object-Oriented Programming Started
>
> by Ole-Johan Dahl and Kristen Nygaard, Dept. of Informatics, University of Oslo

The first link is "based on an invited talk" given by the author at the Dept. of Informatics, University of Oslo, with colleagues of the (then-deceased) Dahl and Nygaard in attendance. That doesn't guarantee anything in particular, but it makes it likely that they are not making things up out of thin air.

If OOP was defined by message passing, why was it necessary for Kay to note in 1998 that people had apparently lost sight of this (his) definition?

If you want to say that C++ programming is not OOP, be my guest, but your statements about the history of OOP are not part of some canon or bible.


Ok - I accept that the first piece was by Dahl and Nygaard.

Nevertheless it is just a retrospective application of a term that they didn’t invent, and doesn’t change the history or substantiate the false claim that they invented the term.

OOP was defined by message passing and as you say people lost sight of the idea, largely due to C++

Kay did invent the term. It was about message passing.

Later people used it to describe something else which has little resemblance to the original idea.

It’s fair to say that it’s not part of some canon and of course people are free to miss the point of something and cause a second definition of a word to enter circulation.

Irrespective of canon or multiple definitions, your statements about the history itself in your earlier comments are just false.


Objective-C is a very simple, clean language, very much unlike its other "object-oriented C" competitor, C++. Unlike C++, it's a 100% superset of C, and it takes its cues from Smalltalk, where objects send messages to each other rather than statically calling each other's procedures. To support this, there is a very rich runtime that allows all sorts of reflection and metaprogramming atypical in a compiled language.


Tastes may differ. To me, C++ looks like an organic extension of C syntax, while Objective C looks like an alien graft on top of C.

Same with semantics: In C++ there is a continuum from POD structs to adding non-virtual methods to adding virtual methods. In Objective C there is a gaping chasm between C types and Objective C types, and weirdness occurs when you mix the two (e.g. pass a method taking an (int) to a place expecting a method taking an (NSNumber *)).

Containers (arrays and dictionaries) in Objective C, I find particularly ugly, especially in earlier (pre-2010 or so) versions of Objective C. They can contain only Objective C objects, not C objects, but can wildly mix and match objects of different types (this has been helped by Objective C generics by now). Access to elements is very verbose (this has been helped by syntactic sugar by now).

Just recently, I had to review Objective C code using a multidimensional numeric array. Even in modern syntax, it was no joy to read, and I wept for the senselessly murdered memory and CPU time. But if it had been written in pre-2010 Objective C, I might have lost my will to live for weeks.


I don't see how you can call ObjC any more of a superset of C than C++.

Object-related syntax in ObjC is completely alien to C. Object-related syntax in C++ (mostly) extends C structure syntax.

Yes, ObjC takes its cues from Smalltalk. C++ does not. And so... ?

[EDIT: ok, so people want to interpret "superset" as meaning "every valid C program is a valid Objective-C program too". This is, with very few exceptions, true of C++ as well.]


Because ObjC is a strict superset of C in the technical sense. That is: every valid C program is also a valid Objective-C program.

Of course idiomatic ObjC is heavily tilted toward the non-C parts of the language (OOP features), but that doesn’t mean it’s not a true superset of C.


"Yes! C++ is nearly exactly a superset of Standard C95 (C90 and the 1995 Amendment 1). With very few exceptions, every valid C95 program is also a valid C++ program with the same meaning."

https://isocpp.org/wiki/faq/c


Objective-C has no exceptions.


Whaddya mean?

    @try {
        // do something that might throw an exception
    }
    @catch (NSException *exception) {
        // deal with the exception
    }
    @finally {
        // optional block of clean-up code,
        // executed whether or not an exception occurred
    }
https://developer.apple.com/library/archive/documentation/Co...

(I'll see myself out. For at least two reasons.)


We are on C17 nowadays.


ObjC doesn't change any existing C syntax; it only adds messages. C++ is an entirely different language with a different spec that merely looks like C.


"Yes! C++ is nearly exactly a superset of Standard C95 (C90 and the 1995 Amendment 1). With very few exceptions, every valid C95 program is also a valid C++ program with the same meaning."

https://isocpp.org/wiki/faq/c


The implicit casting rules are different, it doesn't allow VLAs, you can implicitly create static constructors instead of having your program rejected for non-constants at the top level, more keywords are unavailable as variable names…


VLAs have been optional since ISO C11; clang and gcc are probably the only C compilers that care to support them.

C17 also has its share of keywords, and C2X plans to replace some of the _Keyword spellings with the plain keyword versions, as enough time has passed since their introduction.


Clang and GCC being the two largest implementations.


True, but not the only ones, so good luck making that VLA code work outside BSD/Linux clones or the few OEM vendors that have forked them.

Also, Google has sponsored the work to clean the Linux kernel of VLAs.


Yeah, because VLAs mostly suck.


I have some perfectly safe code using them that crashes Intel icc. Useful if you ever need to, I don't know, crash icc I guess.

Among other things, this means icc doesn't run other compilers' testsuites, because I reported the same bug in the first release of clang and clattner fixed it right away.


And given that C11 made them optional, Intel can just close such bug reports as won't-fix, citing the ISO standard as justification.


Not if it claims GCC compatibility, which it does. Though I believe the frontend is licensed from EDG anyway.


It's not a modern language, so appreciating it has to be in its original context. I think it does an admirable job of augmenting C with object-oriented capabilities. It's certainly easier to master than C++.

I'm not an expert on this, but I suspect that the main reasons it was chosen for iOS were:

- The technical limitations of the original iPhone meant that you needed to use a low-level language.

- The legacy of NeXT at Apple.


Mostly the latter, I would assume. Apple didn't really use anything other than Objective-C for its application frameworks (and still generally does not, for the most part).


I read in multiple sources, usually the kind of comment that is only possible to validate with inside info, that to this day not all business units are sold on Swift.


Sold or not, it's easy to see that most of the code being written is still in Objective-C just by looking at the code that Apple ships publicly.


Objective-C is verbose not just because of the NS prefixes. Everything is verbose (by today's standards, anyway). ObjC is a "child" of the 1980s, when verbosity was considered a merit and a norm in programming.

Two things that I used to like about it:

- The combination of static typing with pretty high-level dynamic typing: it was practically possible to call any method on any object, right or wrong, just like in dynamic languages. For performance-critical parts you could always resort to C. Later, as a little bonus, it was also possible to resort to... C++. There was such a beast as Objective-C++.

- The method calling syntax. Quite unusual but neat. I liked it a lot.

However, Swift ruined it for me. Now that I'm a total Swift convert and I feel a 2x or even 3x boost in productivity I can't even look at Objective-C code anymore.


> ObjC is a "child" of the 1980's when verbosity was considered a merit and a norm in programming.

It's still considered a merit by some.


I agree with this. Verbose code is code you can come back to and understand years down the road.

The easiest projects for me to pick back up are the ones I wrote in Objective-C, hands down.


>Verbose code is code you can come back to an understand years down the road.

For ObjC, "verbose code" means "code you can come back to years down the road and hope there's still a manual to translate those message argument names into whatever current programming terminology uses".


Thankfully, in computer science terms like "array" and "string" still mean what they did many years ago.


It's not about "array" or "string".

    [[NSNotificationCenter defaultCenter] addObserver:self
                                             selector:@selector(appDidBecomeActive:)
                                                 name:NSApplicationDidBecomeActiveNotification
                                               object:[NSApplication sharedApplication]];
Can you explain to me what any of those terms mean without looking at a fairly extensive reference manual?


At that level you're talking about framework APIs (in this case NeXTSTEP and derivatives), not the language itself. Drop me into some Haskell or Java or OCaml or Python or Ruby framework and I'm going to have to reach for a reference as well.

It's not uncommon to conflate Objective-C conversationally with its most common use case, but Objective-C is not NeXTSTEP and the other Apple-ecosystem frameworks.


Entirely fair.


Sure, putting on my layman hat: "Add the observer 'self' to some centralized place get a notification of some kind for when the app becomes active. Something to do with a 'shared application' and a 'selector', maybe they are some additional context?" Of course, the latter two are things you'd know if you have used Objective-C even a little bit, with the former being a fairly standard nomenclature for a global and the latter being a crucial part of the language.


Yep.

AppKit has this concept called a Notification Center, which, duh, sends notifications. You want to observe the notification called NSApplicationDidBecomeActiveNotification. The way you want to observe this notification is by being sent the message appDidBecomeActive:. The "object:" parameter tends to be nil, so I did actually have to look that up: it means I only want to receive this particular notification when sent by that object. It is almost certainly redundant in this case, because nobody else has any business sending NSApplicationDidBecomeActiveNotification, and it is usually precisely what you do *not* want, hence it is usually nil.

Anyway:

   [prefix stringByAppendingString:suffix];
   [dictionary objectForKey:key];

In the old NeXTstep days, before our editors had code completion and other conveniences, you could very often just type a phrase describing the operation you wanted and magically the code would compile and do what you expected. Hard to both describe and probably believe if you haven't experienced it yourself.


In addition, the semantic consistency of the frameworks combined with the verbosity encouraged by the language makes it easy to jump into a new-to-you part of a 25+ year old codebase and start making useful changes very quickly.

This is much more difficult when you have to be careful about what every single operator dispatches to, or when more than just the receiver’s type determines the method that’s called. You can look at code in isolation and get a pretty good idea of its intent and the routes that its implementation will take, both of which are necessary to start making changes.


I loved the idea that the OOP world and the C worlds were syntactically different. It made the language significantly more elegant than C++, which doesn't even take into account the beauty of its Smalltalk message passing semantics.


That verbosity is exactly why I love it.

It's easy to write and easy to read (especially years later). It's just such a joy to work with.


Shameless plug: I didn't invent peanut butter, but did invent the first half-calorie peanut butter.

www.ownyourhunger.com

AMA.


Successful crowdfunding campaigns require substantial marketing efforts outside the crowdfunding platform, such as email lists, a community or customer base familiar with the brand/product/team, social media ads, etc. Exceptions exist, such as getting funded solely by being featured on the homepage or having an extraordinary product featured by the media, but those are relatively rare. Thus a campaign's success is mostly decoupled from the platform itself.

This is a great idea for avoiding substantial platform usage fees paid in exchange for the ever-shrinking value proposition that Kickstarter and other centralized crowdfunding platforms provide.


Kickstarter's search is useless. Not sure what value they provide other than collecting money and a page to describe the proposal.


The Chips Helmet Audio has lasted me over 8 seasons and is still going strong. I don't maintain the batteries (I believe they should be stored with a 75% charge during the off season) and still get enough charge for 2 full days of riding.


Might have to get some. Thanks!


For a paid app, the lack of basic core features (e.g. simply viewing comments) indicates a lot of work remains to be done. The existing free readers out there are a bit more capable.

I ended up buying it since I'm still looking for a good HN reader out there that does all of: indenting comment threads, gives you the choice of browsing directly to the story without going through comments, and displaying key story information beside the story, such as the number of votes.


Hey there, I'm the developer of the app.

Firstly, thanks for buying the app! I wasn't personally expecting this much response; it was meant to be a read-only app, but I ended up missing comments as well. I understand your concern. I have good news though: we've started work on implementing comments :)


I just bought it, because I like where you are going with the design.

However, I would echo the need for comment support.

Check out how http://hn.premii.com/ does comments, I find that quite readable compared to news:yc.

The ability to log in and reply to comments would round out the use case that I used to use other apps for.



May just be a load balancing issue; as @beecr001 mentioned, if you reload the page continuously you can access the site and all chapters.


I have a new appreciation for fiat currencies: they're designed to circulate with a steady rate of inflation. There seems to be a hesitation to spend bitcoins knowing that if you just wait a day they will go up, so bitcoin is being treated like a precious metal rather than a new way of paying for things.

Edit: Thanks for the correction, meant to say fiat currencies tend to 'inflate', not deflate.


Yes, but fiat currencies are designed to lose value over time, which makes it difficult for people to save effectively. Governments and large banks get the newly-inflated money first, which gives them the first bite at the existing value of money, with money essentially made from nothing.

Encouraging people to spend money for the sake of it sounds like a good idea when people have created the concept of 'hoarding' - which is just saving with a scary name. But future productivity has to come through capital appreciation, which has to come through saving. By working against this, wrong investment choices are made because the time horizon is altered.

The concept of fiat currency, in the sense of being able to expand the money supply, is superior to a fixed money supply, but the way it is implemented works directly against capital formation and feeds directly into misdirected investment and speculation, by allowing money creation to run ahead of sensible investment. This was aptly experienced with the excessive amounts of capital diverted into residential real estate, caused by excessive amounts of new money. Without the easy money, the level of investment in real estate would have been much lower, and the subsequent crash much less destructive.

Fiat currencies have been around for 250 years or so, and not a single one of them has survived that long.


"Encouraging people to spend money for the sake of it sounds like a good idea when people have created the concept of 'hoarding' - which is just saving with a scary name."

But to be clear: there's a fundamental difference between "saving" and "investment".

1. Saving/Hoarding: Keeping money/cash under the mattress - nobody else has the ability to "spend" the money in the mean time. Also called "sinking funds" by Keynes. This is money kept in a bank deposit. The important point is that you can, at any time, choose to "stop saving" the money and spend it. i.e. you keep the right to spend the money at any time. Nobody else can make use of it. It effectively is out of circulation until you choose to spend it.

2. Investment: Lending the money to someone else for a fixed term - you can't ask for the money back before the end of the fixed term. They can spend it on goods/services for that period of time after which they have to pay it back. The money stays "in circulation".

Absent fractional reserve banking, #2 is the only thing that can actually generate a real return. i.e. real, profitable, economic activity that makes people better off. Without FRB, a checking account cannot pay interest, because #1 cannot be used in any risk-free way to generate value.

For economic productivity, #2 is a good thing, #1 is a bad thing. The fact that everyone is trying to do #1 right now with US dollars and the like is what is considered to be the source of our current economic malaise (according to the economists that I agree with anyway). Fractional reserve banking, QE and the like to some extent lets money that is in category #1 be used for "economic good" in category #2 - effectively fooling the hoarders into "investing" their money.

Of course, bitcoin doesn't have fractional reserve banking - and pretty much by design seems to make it impossible for things in category 1 to be used as category 2. This is why the "hoarding" of bitcoins is considered to be deflationary.


>Saving/Hoarding: Keeping money/cash under the mattress - nobody else has the ability to "spend" the money in the mean time. Also called "sinking funds" by Keynes. This is money kept in a bank deposit. The important point is that you can, at any time, choose to "stop saving" the money and spend it. i.e. you keep the right to spend the money at any time. Nobody else can make use of it. It effectively is out of circulation until you choose to spend it.

One of the big problems with Keynes.

Money stuffed in a mattress = hoarding.

Money in a bank deposit = still in circulation, able to be lent by the bank.

There is a massive difference. Other people can make use of funds deposited in accounts. This is why banks take in deposits, to lend it out at a higher rate and pocket the spread.

I would also quibble with your definition of investment. Investment should be classified as spending in the expectation of a financial return (ie, not the joy of owning a new shirt, but actual cash returned on cash outlaid). You can say 'a fixed term' but that is a nebulous concept. 24 hours is a fixed term, so is a week, so is a year, so is 30 years. Lending someone money overnight so they can arbitrage some goods by moving them physically from one location to another one is just as much investment as sinking the money into a toll road for 50 years. The definition has to be on the intention rather than the time horizon, otherwise you're just being arbitrary to support an argument.

>(according to the economists that I agree with anyway)

Highly likely you agree with Krugman. I think he speaks out of his hat. We'll leave it at that.


> There is a massive difference. Other people can make use of funds deposited in accounts. This is why banks take in deposits, to lend it out at a higher rate and pocket the spread.

Only because of fractional reserve banking. Absent fractional reserve banking, if I 'lend' something to another party but reserve the right to demand it back at any time, then there is simply no way that they can "use" what I have lent them. They can't use the money while still honouring my right to demand the money back at any time.

So "money" which can be demanded back at any time has the same status as money kept under the mattress - it can't be profitably "used" by anyone.

The important point about "investment" is that the money truly is "tied up" - if only for a day. The shorter the term, the more like "cash" it is.

> Highly likely you agree with Krugman. I think he speaks out of his hat. We'll leave it at that.

You're right I do agree with Krugman (for the most part). I've been reading his blog since 2007 and you know what, he's been right in his predictions pretty much the whole time!

If you think Krugman is speaking out of his hat, who do you recommend instead?


I believe you're speaking of CPI-style inflation, or in the case of Bitcoin, deflation.

The rate of money supply inflation of Bitcoin is around 12% at the moment[1]. Until 2025 or so, the supply inflation rate of Bitcoin will be greater than 1%.

I fully expect the exchange rate to fluctuate wildly until a much larger supply of fiat and Bitcoin sits on both sides of the exchange order book. As it stands, a moderately-capitalized trader could throw the exchange rate around at a whim.

[1] https://bitcointalk.org/index.php?topic=130619.0
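The ~12% figure can be sanity-checked against the protocol's published issuance schedule. A minimal sketch (the constants are Bitcoin's block-reward rules; the block height of 220,000 is an approximation for early 2013):

```python
# Approximate Bitcoin's annual supply-inflation rate at a given block height.
BLOCKS_PER_YEAR = 6 * 24 * 365        # one block every ~10 minutes
HALVING_INTERVAL = 210_000            # blocks between reward halvings

def annual_supply_inflation(height):
    """New coins issued over the next year divided by coins already mined."""
    reward, supply, h = 50.0, 0.0, 0
    while h < height:
        # advance to the next halving boundary or the target height
        step = min(HALVING_INTERVAL - h % HALVING_INTERVAL, height - h)
        supply += step * reward
        h += step
        if h % HALVING_INTERVAL == 0:
            reward /= 2
    new_coins = reward * BLOCKS_PER_YEAR  # ignores a halving mid-year
    return new_coins / supply

print(round(annual_supply_inflation(220_000) * 100, 1))  # roughly 12%
```

With a 25 BTC reward and about 10.75 million coins mined, the next year's issuance of ~1.31 million coins is indeed around 12% of the outstanding supply.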


>"It seems there's a hesitation of spending bitcoins knowing if you just wait a day it will go up"

So this argument seems to be rather popular, and on the surface it does seem to make sense. However, it glosses over an important consideration.

Are you buying goods in USD or BTC?

Now if it's the latter, then yes there may be stronger psychological pressure (even though rationally there is not much difference).

However, increasingly goods are being traded in USD using BTC as a backing, in which case it would make little difference if you spend in USD from a bank account, or USD with a bitcoin wallet. Because it is possible to trade USD for BTC almost instantly. (Let's ignore the issue of wire transfer delays for now, because that doesn't change the overall argument).

Consider a person holding, say, 200 bitcoins today. If the transaction is a small amount (say a cup of coffee at $3, which at 180 USD/BTC is about 0.017 BTC at today's rate), I'd have no problem spending that.

In fact psychologically it may be more likely that people trade with their BTC "winnings", because like a casino it has been shown that that is treated as more disposable than "real money".

Thus if one is spending a fraction of a bitcoin priced in USD, and this amounts to a small percentage of your overall position, it's unlikely to endure as a significant purchasing disincentive.
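The arithmetic behind the coffee example is easy to check. A quick sketch (the $3 price, 180 USD/BTC rate, and 200 BTC position are the figures from the comment):

```python
# Price a small USD purchase in BTC and relate it to the holder's position.
usd_price = 3.00           # cup of coffee
usd_per_btc = 180.0        # exchange rate quoted in the comment
btc_cost = usd_price / usd_per_btc
print(round(btc_cost, 4))  # ~0.0167 BTC

holding = 200.0            # bitcoins held in the example
share = btc_cost / holding
print(round(share * 100, 4))  # well under 0.01% of the position
```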


I've spent many years studying economics, but I'm also a programmer. One thing that annoys me about the discussion that tends to crop up on Hacker News is that you have too many of the latter issuing too many uninformed opinions on the former. Currencies that are doomed to deflate are doomed to enter liquidity traps. There is nothing special about Bitcoin that prevents this from happening, regardless of its position against other currencies. There are probably ingenious ways to implement distributed digital currencies, but I'm fairly sure that in the long term Bitcoin is not one of them.


I was wondering why no one ever buys or sells houses. But your post makes it perfectly clear: Since no more land is created, real estate is deflationary, so obviously no one would ever sell a house.

</sarcasm>

Also, I'm happy you're patting yourself on the back for all your training and experience. But you still need to make a compelling argument instead of just saying "Things are just so."


But the housing industry is barely recovering from a "liquidity trap" in 2008! People weren't selling houses because they expected home prices to constantly go up. You had people flipping homes and adding no value to them. Eventually, the market crashes after too many people buy homes that they didn't need...

Note, it's not that people "don't sell homes"; it's that homes are prone to rampant speculation that can bring down the entire industry.

His claim is that a fiat currency (ie: Dollar), can repel the liquidity trap with monetary policy. IE: Carefully controlled inflation or deflation.


OK, you made some good points I will have to think about more. I would think the type of deflation we're discussing had only a small role to play in that crisis, but I admit it probably played some.

Of course the irony is that monetary policy causing unreasonably low interest rates (i.e. controlled inflation) was a large factor of that crisis as well.


Got an argument to back that up?

Why don't you go through the charts. Find me the year that the Fed caused too much inflation, and then tell me how much the dollar was inflated that year.

I doubt you can, because during the housing crisis, the dollar experienced deflation. The Fed acted swiftly, although not swiftly enough! The dollar failed to hit inflation targets in 2008-2009 as we experienced -0.4% inflation.

For 2009 to 2010, we only experienced 1.4% inflation. Both years, we missed the inflation target of 3%. Worse, the dollar deflated in value in one of those years.

Every other year, inflation has been the same as always: ~3% since 1990.

Economic data does not match your words. The US hasn't had inflation over 4% for the last 22 years. There is no inflation problem.

If the goal of ~3% inflation is a poor goal, then tell me why.


"The housing bubble was fundamentally engendered by the decline in real long-term interest rates"- Alan Greenspan


I'm happy you're patting yourself on the back for your knowledge of a single quote by a single economist. But you still need to make a compelling argument instead of just saying "Things are just so."

See how this game is played?

EDIT: Well, that was probably too mean. So lemme ask you this.

Since you agree with Alan Greenspan so much, please tell me what caused the long-term decline of interest rates. HINT: it wasn't the fed, according to Greenspan. After all, the Fed raised interest rates in 2004 and 2005.


The difference between a normal currency and bitcoin with regards to deflation is that bitcoin is almost infinitely divisible, whereas traditional currencies are not.

Divisibility acts in opposition to deflation to create liquidity.

The idea is that in the future you don't trade bitcoins per se, but microbits, or picobits, etc. (or whatever they will be called).


Economies get into a liquidity trap much faster than division becomes a problem. The difference is so stark that almost nobody even talked about divisibility before the Bitcoin people.

Anyway, I'm not sure the expression "liquidity trap" means anything when talking about bitcoins.


Satoshis are the smallest denomination of bitcoin, at 0.00000001 BTC.
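For reference, amounts on the network are integer multiples of this base unit. A minimal sketch of the conversion (the helper name is mine):

```python
# Bitcoin amounts expressed as integer satoshis, the protocol's base unit.
SATOSHIS_PER_BTC = 100_000_000  # 1 BTC = 10^8 satoshis

def btc_to_satoshi(btc):
    """Convert a BTC amount to whole satoshis."""
    return round(btc * SATOSHIS_PER_BTC)

print(btc_to_satoshi(0.00000001))  # 1 - the smallest representable amount
print(btc_to_satoshi(0.016))       # 1600000
```

Working in integer satoshis also sidesteps floating-point rounding, which is why wallet software typically stores balances this way.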


The reason people aren't paying for things is that it's hard to do, not deflationary concerns. If I were confident that the purchasing power of bitcoin was going to continue increasing relative to the USD, and all vendors accepted BTC, I'd immediately move over all of my USD to BTC and spend my BTC on a daily basis.


I don't believe the argument that a deflationary currency, by itself, will make people not be willing to buy things. Consider a savings account - why would anyone take money out of their savings account to buy things? If all you need to do is keep it in the account, it will make more money, so why spend it?


It's a question of degree. Your savings account with $100,000 in it will probably be worth about $100,002 tomorrow at current rates. The equivalent amount in Bitcoin might be worth much, much more at the rate it's been climbing.
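A rough sketch of that comparison (the 0.73% APY is an assumed rate chosen to yield the ~$2/day figure above, and the 5% daily BTC move is an illustrative swing, not a prediction):

```python
# One day's growth on $100,000: savings interest vs. a large BTC move.
balance = 100_000.0

apy = 0.0073                          # hypothetical savings rate, ~$2/day
after_one_day = balance * (1 + apy / 365)
print(round(after_one_day, 2))        # about 100002.0

btc_daily_move = 0.05                 # assumed 5% daily appreciation
print(round(balance * (1 + btc_daily_move), 2))  # 105000.0
```

The asymmetry is three orders of magnitude, which is the "question of degree" being made here; of course the same magnitude applies on the way down.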


Or much much less, and it has been known to fall precipitously as well.


It doesn't matter if you keep converting your income to BTC as it comes in, because if you spend your USD income instead of buying BTC then you've effectively done the same thing.


That bit of economics common knowledge was also developed before our modern, dynamic, fast, globally interconnected economy. I'm sure there's still some technical truth to it, but I wonder if it's as true now to the degree it was back in, say, the Depression era.

It would be interesting to see how a currency with a set rate of deflation instead of inflation worked now. Say your money gained 3% per year purchasing power instead of lost it, everyone would be incentivized to spend or invest it in ways that returned either ROI or utility worth at least 3% per year, or otherwise hoard it. Spending and investment (or at least malinvestment) would slow, but capital formation would increase.

Too bad there's no way to test such a thing and see how it works out in practice. BTC unfortunately does not seem to provide a stable rate of deflation, at least for the foreseeable future.
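The thought experiment is at least easy to simulate. A sketch, assuming the steady 3% annual deflation proposed above (the function name is mine):

```python
# Purchasing power of idle cash under a fixed annual deflation rate.
def purchasing_power(cash, years, deflation=0.03):
    """What the same nominal sum buys after compounding deflation."""
    return cash * (1 + deflation) ** years

print(round(purchasing_power(100, 10), 2))   # 134.39 after a decade idle
```

Under this assumption, any spending or investment returning less than 3% a year in real terms is dominated by simply holding the cash, which is exactly the hurdle-rate effect described.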


You mean... the Japanese Yen?

Nintendo and Sony posting record losses as the Yen continues to get stronger and stronger. The $200 Wii console sold the best in 2010 and 2011, but because the Yen appreciated so much, Nintendo lost money on the USD -> Yen conversion and overall didn't do so well.


Savings accounts don't beat inflation. No guaranteed and insured investment does to my knowledge(if you find one that's not a ponzi scheme, let me know). If savings accounts were paying out double digit % point gains, you can bet your ass people would be shoveling money into the accounts and not cashing out.


They aren't beating inflation right now, but historically they have. ~5 years ago interest rates on savings accounts were roughly 5% and inflation was 3-4%. Empirically, people actually saved less during that time (although there were many other confounding factors).


Point taken, but we're still arguing apples and oranges. Savings accounts historically maybe earned a percent or two above inflation. There's very little incentive to just let money sit in a savings account at those rates. Even extremely low risk investments are better.


Where does an investment get its value from? If saving gives me 3% but investing 5%, how does the deflationary nature of the currency change these two numbers? Surely investing gives a greater return as value is created, regardless of whether the economy is inflationary or deflationary. You know, we have only had pure fiat money for the past 40 years. Before that, we advanced from the dark ages to the 21st century with a deflationary system.


Most savings accounts pay under the rate of inflation, and rely on someone (usually a bank) being willing to pay you that rate to get you to give them your money. Presumably the utility they derive from this makes it profitable for them.

BTC on the other hand, if we reach a steady deflationary state, beats the rate of deflation by definition, and just by you sitting on it. Slightly different situation.

--edit-- Also see here - http://eprint.iacr.org/2012/584.pdf It seems that ~80% of bitcoins are long-term dormant, so people are just holding on to them, regardless of the reason.


Yeah, the parent would have been more precise referring to the low volatility of fiat money rather than its inflation.


According to that argument you shouldn't spend any fiat currency either, because it's more profitable to convert all of it to bitcoins.


Isn't that spending it on bitcoins?


The problem with bitcoins is that as they become more popular, demand is rising faster than bitcoins are mined. Technically speaking, if demand stayed constant, the supply would slowly rise and the currency would inflate (until all the coins are mined).


This would solve the issue of people buying Bitcoin for investment purposes, though I wonder whether people would be able to spend Bitcoin fast enough to avoid negatively affecting the people and exchanges that act as banks - or could the two be differentiated easily?


I think you mean inflation, where the longer you hold on to it , the more worthless it becomes.


Thanks and you're correct, I was thinking how deflationary bitcoin was while writing. Fixed.

