Five Nights at Freddy’s is a video game franchise created by game developer Scott Cawthon, and the series now spans more than nine games.
Description of the game 📑
Five Nights at Freddy’s is one solid scare. The horror genre is ancient and can feel unimaginably dull to seasoned connoisseurs, yet it remains in demand among the “uninitiated”. In the first game, the player controls a Freddy Fazbear’s Pizza night guard named Mike Schmidt, who must use CCTV cameras and other tools to survive against the animatronic characters. The animatronics are bloodthirsty and dangerous, and they instill fear in everyone. These are the characters that FNAF Mod adds to Minecraft PE. The animatronics can run and move quickly; they need those abilities to catch up with you and bite you. They have been waiting a long time to crawl up from deep underground, grab you, and kill you. Your one salvation from their deadly bites is to flee.
You also have to hold out for five nights. Go for it: download our FNAF Mod and immerse yourself completely in the world of Five Nights at Freddy’s.
FNAF Mod includes all three worlds of Fnaf 1, Fnaf 2 and Fnaf 3, each with its own mechanics and new animatronics. The security guard’s office has no doors, but there are controllable shock buttons.
Animatronics can only be seen through the cameras. If you play as the night guard, make sure every other player chooses which animatronic they will play at night; otherwise they will not appear at night. 🧸
By the way, did you know that the Five Nights at Freddy’s franchise entered the Guinness Book of Records as the series of games with the largest number of sequels in a year?
At the moment, the first four parts are available on the official website:
Fnaf 1 (2014),
Fnaf 2 (2014),
Fnaf 3 (2015) and
Fnaf 4 (2015), as well as
Sister Location (2016),
Freddy Fazbear’s Pizzeria Simulator (2017),
Ultimate Custom Night (2018),
Five Nights at Freddy’s: Help Wanted (2019),
Five Nights at Freddy’s: Security Breach (2021),
FNaF World (2016), Special Delivery (2019),
Freddy in Space 2 (2019),
Security Breach: Fury’s Rage (2021) and many others.
This application is not an official Minecraft product and is not approved by or associated with Mojang.
View Poll Results: What's the best distro for new linux users without much computing experience?
Other (Please Specify)
Originally Posted by oosterhouse
ps - Why couldn't you get pclinuxos working? Not compatible with your hardware or something?
It worked but was slow as thick molasses. It had something ...
- 01-29-2006 #11
Originally Posted by oosterhouse
This happened all of the four or five different times I've tried it in the past, so I don't try it anymore. I'm sure it's a good distro, but it didn't like my hardware for whatever reason.
That one and Yoper are the two distros that seem to never want to work on my machines, for some reason.
- 01-29-2006 #12
After putting it off for years, I finally took the plunge and started trying out various distros.
So far I've installed (and formatted) Suse, Mandriva, Zenwalk, Ubuntu, DSL, Slackware and PCLinuxOS. I've briefly tried others, but that's all I remember off the top of my head. I plumped for Mandriva 2006 with KDE in the end, simply for ease of use and user friendliness.
Korean food is great - it's the dog's bollocks!
Linux user number 406572.
- 01-29-2006 #13
- Join Date
- Jan 2006
I'll give a nod to Xandros.....and yes I know that the "purists" don't like a corporation distro....but....I'm the kind of person that just needs it to "work"...I lead a very busy lifestyle and HAVE to have a computer that does what I need when I turn it on every time....no ifs ands or buts.....
But....actually I am hewing to the GNU ideal....I used nothing but the OCE edition for the last 7 months...and after buying a Brother printer (another thread) I am now Windoze free....except for a dual boot I have to keep for the j/c where I teach...it will only recognize IE....yes Firefox can get IN....but shows nothing but text....
Anyhow...I've installed Xandros with NO problems first time, every time on everything from 486 machines to P4 1.3 GHz....now yes....the 486 runs SLOOOOWWW....and I didn't donate it, it is still in the garage....but PIs run pretty slow, PIIs are ok but won't run Tux Racer (that is my kind of "benchmark")...a PIII 450 MHz is getting there...Tux is still jiggy....but at PIII 750 MHz Tux runs fine...
I put it on old Creatives and Rages...HP's below the 700 series are just found, 128 mb of ram is fine, but 192 makes things quite zippy...normally use 3.2 gig HDs for donation machines, but have been down in the twos but swap files are small...so they are slower....
Little ol' ladies...... a man with Alzheimer's that just wanted to play pinochle online (that one was strange)....kids doing school work....it just works.....four clicks...one to tell it whether to take over the HD or dual boot, one for time/date....one for the printer (accept it, XS has already found it...just wants approval), and whether you use your right or left hand for your mouse (I guess they had to ask SOMETHING)...and that's it....it doesn't ask you about finding the net....it just does it and sits waiting with a nice blue screen...
Now....I could have stayed OCE...but I paid a premium membership so that I could just click other apps I need rather than getting them with apt-get.....which I can now comfortably learn instead of HAVING to learn it...
But.....that's why I clicked "other"
EDIT: PPS.....and because XS was so easy, I was able to come here....and serendipitously find a thread on "home theater system" which gave a link to Atomic Linux.....and a friend of mine is going to try it and I have enough confidence to also try it myself!
ain't xandros great!
- 01-29-2006 #14
This thread is probably going to be locked, as there have been countless polls and questions referring to the most "newbie friendly distro". I would say any of the distros suggested previously in this thread are fine.
- 01-29-2006 #15
- 01-29-2006 #16
1. I agree with what ~tux~ said...this thread is likely going to be locked.
2. While I am here, I would have to say all of the options listed above.
Last edited by bryansmith; 01-29-2006 at 04:20 AM.
Looking for a distro? Look here.
"There can be no doubt that all our knowledge begins with experience." - Immanuel Kant (Critique of Pure Reason)
Queen's University - Arts and Science 2008 (Sociology)
Registered Linux User #386147.
- 01-29-2006 #17
Originally Posted by Dapper Dan
- 01-29-2006 #18
Alright Super Moderators, go ahead and lock it. I was just looking for an opinion on what I thought was a different situation than the other threads described. I've got something installed on it, so lock away!
M: Ask HN: Love Coding, Hate Marketing - bluedevil2k
I suspect I'm like most users on HN in this regard, but there's nothing I enjoy more than coming up with a good idea and sitting at my computer for a good hard week(s) of coding it up and turning it into a good/great web application (in my opinion at least). I like the coding, I like the design aspect of the page, I like testing it, I like seeing it all come together nicely in the end and turning it into a nice product.

However, when it comes time to market and sell the web application, I hate it. I hate finding people to e-mail about it, I hate trying to convince leading industry people to look at it. I hate cold calling people trying to sell them on the site. I hate trying to figure out Google AdSense to get my ads into a winning strategy.

The bad part is, I like making money too. So, I can't do that at all without loving both aspects. What should I do? Try to find someone who will handle all the selling/marketing? I'm sure there's lots of people that love that part of a business. Pack it in and call it quits because I don't have "it"? Any recommendations?
R: martingordon
I'm the same way. The App Store helped me (although it hurts in the long run)
because I am selling despite not doing one bit of marketing (aside from a few
tweets here and there to my 100 or so followers).
R: jaspalsawhney
How about I help you with whatever needs to be done to sell? I'm a jack of all
trades: I understand design/coding/UX and also like the marketing side.
contact me at
R: bobds
Same here. I might be interested in helping someone sell his product.
Contact information in my profile.
R: aspir
I'd recommend finding a code-savvy cofounder who is interested in
marketing/has marketed before, and bringing them on as a minority partner. Not
all marketers are tech morons -- the ones that aren't know a good product when
they see it, get excited, and want to tell the world about it.
It's one of those innate desires. When you see a great application, you may
want to get deep down into the inner workings and understand/improve it (I'm
assuming). The marketer you're looking for should be interested in how it
works and how to improve it, of course, but he should also clearly see your
current product and future iterations on everyone's desk innately. That way
the two of you can chase your respective visions: you with the better product,
the partner with getting the product out into the world.
R: kineticac
What if what you really hate is getting other opinions on how good it is? You
say "in my opinion at least," and you hate trying to convince people, and
finding a winning strategy. What if instead of trying to force yourself in,
you figure out why these people aren't responding and how you can make them
come to you? Maybe your idea only sounds good to you. Marketing, selling, etc.
is not just trying to get something through to someone else, it's learning
what they actually think is good and cool, or what is bad and needs to change.
Your mentality and approach could be wrong. Not sure if this is the case, just
a possible scenario.
R: moilanen
I would suggest looking at a few of the "Match.com" for startups sites:
<http://www.startupwithme.com/> <http://www.techcofounder.com/>
The good news is that since you're technical, you're one of the women on
"Match.com". Everyone will come to you.
R: ashleyreddy
Even marketers don't like marketing. It's a tedious pain in the ass. You
shouldn't absolve yourself of all marketing responsibilities. Read some
marketing books and blogs. At present no one will know your product better
than you.
R: DocuMaker
"Try to find someone who will handle all the selling/marketing?" YES!!
For all users of Atmosphir (http://www.atmosphir.com), the beta desktop client has been released, providing much better performance, fast loading times and smooth editing, all because it runs from your desktop!
Grab a copy at http://www.atmosphir.com today, and keep making those great levels.
The educational module I am developing is nearing completion; starting next Monday it will be tested with a group of pupils for the first time. If you haven't had a look at it yet, head over to http://www.learnlab.nl/JCU/index.html to get a peek at what's in store!
As my activities are expanding, the need to move this website became evident.
I have now moved to Deziweb (http://www.deziweb.com) for bigger storage, better performance and more bandwidth, while saving a few euros.
So if you found this blog offline the last few days, that's the reason.
To those few that actually read my blog, here's a new post.
It has been quiet the past few months, mainly due to my new job. I started working at Utrecht University in the Faculty of Science. I am based at the Junior College (JCU for short), where I am developing an educational module, in Dutch, on the subject of gaming technology.
This module will use Atmosphir as the game development environment, which is a bit like Minecraft, but cooler. A preview of the module (in Dutch) is available at www.learnlab.nl/JCU/index.html; feedback can be sent using the contact option here.
So please let me know what you think of it, and try Atmosphir for yourself at www.atmosphir.com!
Expect some more minor and major posts in the coming few weeks, stay tuned!
For now I wish you a very Merry Christmas and a Happy New Year
The Ichthus College in Veenendaal invites you to attend a symposium on Computer Science and Faith. For more information, visit www.informaticaengeloof.nl
The symposium will be held on 26 October, and admission is free after registration.
Hey (to whoever is reading this),
Recently there has not been a lot of activity on my blog, mainly due to changes in work, vacation, etc.
For the vacation, I'd like to share the following video of us going through the Cheddar Gorge by bike, for a small impression.
For clarification, we did the End-to-End fully self-supported over 1406 miles, in about 4 weeks; overall the weather was not too bad, although we got the usual England/Scotland rain.
For the near future, expect some updates: Ebox 2.0 is out, and I'll be developing a module in gaming technology at the JCU (which is also my new employer). Till then, it's me /overandout!
Just a quick update. Ebox is still rocking at a small-to-medium office I provide services for. There have been some minor issues with OpenLDAP (twice) since I first wrote about it, and some manual labor in restoring a backup, but in general it has been doing great.
For all those that use Ebox themselves, the interface is also available in mobile browsers, tested on Opera 4 and 5, BOLT (for BlackBerry) and Safari on iPhone.
Stay tuned for the release of their new (2.0) version, based on Ubuntu 10.04LTS which will be reviewed here once released and tested.
Recently I discovered a novelty (for me) on the hunt for groupware solutions, called ebox or ebox-platform for full (http://www.ebox-platform.com).
This solution does not focus strictly on groupware or e-mail, but provides a fully customizable office package with DNS, DHCP, LDAP, mail, proxy and many more services, based on the LTS release of Ubuntu (8.04 LTS, that is).
The first installation of ebox 1.2 was a breeze, and in general performance and configuration is very easy. Setting up as a groupware server they decided to include egroupware, not really my choice. Fortunately in the 1.3 beta they now included a solution based on roundcube, which works a lot faster and looks a lot better.
Next step was the installation of Ebox as an LDC (Local Domain Controller) with some Windows XP/Vista/W7/Ubuntu clients, different types of users, a Brother laser printer and shared folders. The setup was very easy, and after creating the users everything was a walk in the park. Performance is definitely at a level where it can compete with Windows Server 2003/2008-based AD solutions, and the additional web management makes remote management a delight.
For the true demanding users, it might be a good idea to wait for the Ebox 1.4 release, which will (hopefully) have Ubuntu 10.4 LTS as a base. You can get your own copy at http://www.ebox-platform.com at no cost.
Since Matrox only releases drivers over the (less than ideal) FTP protocol, I thought it might be a good idea to create a Matrox Downloader Utility.
This utility will have no problems with the FTP protocol and will let you save the files to any chosen directory.
I already finished the first half of the project, the only thing remaining is the actual download from the Matrox Server in a multithreaded way.
Let me know if you want this utility, and what improvements you would like to see. The application is developed in VB.NET (Framework 3.5) and works on any Windows version from Windows XP upward.
At this moment the first edition of the downloader is finished.
Please report back any problems you encounter, through PM (here) or mail: info [at] davidbezemer [dot] nl
The installer can be downloaded at:
The new pyscalix made by Rene Hadler (http://tih.dynalias.net/python/scalix_installer) is now updated at my mirror location.
Mirror location can be found here: http://www.davidbezemer.nl/downloads/pyscalix.zip
A short description of the installer:
Scalix Installer version 0.11.4.5 for Scalix 11.x.x
* Support for version 11.4.5 added.
* Fixes for hostname/FQDN check.
* Python interpreter (Version 2.4 or 2.5 tested)
* Scalix package 11.x.x (you have to copy the scalix-package into the same location as the installer)
* Debian based system (Debian 4.0 "Etch" or Ubuntu 8.04 "Hardy" tested, all 32bit)
* Ubuntu versions older than 8.04 will not be supported anymore but code is still present
* Important: Please use, if possible, the latest scalix version as code updates will only be present on the latest version for the installer.
* Hint: Working on Debian 5.0 / Ubuntu 8.10 releases, but there seem to be a lot of broken packages, so I will promise nothing at the moment. Maybe the next Scalix version will be 'optimized' for the new OS releases.
* Automatic check of required tools and libs
* Automatic installation and configuration of Scalix-packages and their dependencies
* Only a few important things will be prompted
* Installation will take approximately 10 minutes (depends on the speed of your computer and internet)
* German and English installation language
<?php
// Note: in the original snippet the attribute class was named Cart while
// the code used NotEmpty; renaming it NotEmpty makes the example runnable.
#[Attribute]
class NotEmpty
{
}

class Checkout
{
    #[NotEmpty]
    public ?string $item;
}

// Validate every property of an object via reflection
function validate(object $object)
{
    $class = new ReflectionClass($object);
    $properties = $class->getProperties();
    foreach ($properties as $property) {
        validateNotEmpty($property, $object);
    }
}

// Check a single property for the NotEmpty attribute
function validateNotEmpty(ReflectionProperty $property, object $object)
{
    $attributes = $property->getAttributes(NotEmpty::class);
    if (count($attributes) > 0) {
        if (!$property->isInitialized($object)) {
            throw new Exception("Property $property->name is not initialized");
        }
        // Loose comparison also rejects the empty string ""
        if ($property->getValue($object) == null) {
            throw new Exception("Property $property->name is empty");
        }
    }
}

$request = new Checkout();
// $request->item = "";   // would throw: Property item is empty
$request->item = "1";
validate($request);       // passes
[gnutls-devel] Speedup idea...
n.mavrogiannopoulos at gmail.com
Fri Aug 5 14:30:52 CEST 2016
On Fri, Aug 5, 2016 at 2:04 PM, Tim Ruehsen <tim.ruehsen at gmx.de> wrote:
> On Wednesday, August 3, 2016 10:19:54 AM CEST Tim Ruehsen wrote:
>> My goal is to only load that CA cert(s) that really have to be checked
>> against. I need to create a hash from the server certs which 'point' to the
>> CA cert files on disk, like OpenSSL already does. Well, we talked about
>> that in the past and you pointed me to p11kit... but in fact, I so far do
>> not really have a 'big picture' - the p11kit docs are mostly technical
>> details, no understandable explanation what 's it all about.
> Hi Nikos,
> maybe you can help me.
> I found no OpenSSL-like subject hashing in p11kit, so I looked at the source -
> and it *basically* does a sha1 sum of the certificate subject.
There is p11_openssl_symlink() which does some magic there, including
md5 hashes. This may be out of date though, as this bug indicates.
> Doing the same in GnuTLS certtool fails (but I am close:).
> The 'subject' in OpenSSL (same cert) has 95 bytes and looks slightly different
> than what GnuTLS gives me (97 bytes).
Did you try using gnutls_x509_crt_get_raw_dn() or the issuer equivalent?
> The hexdump of OpenSSL's subject:
> The hexdump of GnuTLS's subject:
> With GnuTLS, I used
> asn1_der_coding(cert->cert, "tbsCertificate.subject", ...)
> Well, is there some kind of 'ASN.1 normalization', or how can I retrieve the
> same bytes that OpenSSL shows ?
It seems the latter includes the SEQUENCE bytes of RDNSequence, while
the former has these removed. It seems (without having fully checked
it) that p11_openssl_canon_name_der() in p11-kit's trust module does
something similar. The comment: "Yes the OpenSSL canon strangeness, is
of all the RelativeDistinguishedName DER encodings, without an outside
wrapper." implies that.
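To illustrate the difference discussed above, here is a small Python sketch: it strips the outer SEQUENCE tag/length from an RDNSequence (accounting for the 97-byte vs 95-byte gap) and computes an OpenSSL-style hashed-directory name. This is an illustration, not GnuTLS or p11-kit code; note that modern OpenSSL also canonicalizes the attribute values (case and whitespace folding) before hashing, which this sketch does not attempt.

```python
import hashlib
import struct

def strip_outer_sequence(der: bytes) -> bytes:
    """Drop the outer SEQUENCE tag/length of an RDNSequence, leaving the
    concatenated RelativeDistinguishedName encodings (the 'unwrapped'
    OpenSSL-style subject)."""
    if not der or der[0] != 0x30:
        raise ValueError("not a DER SEQUENCE")
    length = der[1]
    header = 2
    if length & 0x80:              # long-form length: next (length & 0x7F) bytes
        header += length & 0x7F
    return der[header:]

def openssl_style_hash(subject_der: bytes) -> str:
    """First 4 bytes of the SHA-1 of the subject encoding, read as a
    little-endian integer and printed as 8 hex digits, as used for
    OpenSSL's hashed CA-directory file names."""
    digest = hashlib.sha1(subject_der).digest()
    return "%08x" % struct.unpack("<I", digest[:4])[0]

# A 97-byte GnuTLS DN with a 2-byte SEQUENCE header becomes
# the 95-byte OpenSSL form:
wrapped = b"\x30\x5f" + bytes(95)
assert len(strip_outer_sequence(wrapped)) == 95
```

With such a hash, a lookup can open only the CA file(s) whose name matches the server certificate's issuer, rather than loading the whole trust store.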
Ubuntu: xset Unable To Open Display

The error "xset: unable to open display" means the shell running xset cannot reach the X server: either the DISPLAY environment variable is not set (set | grep DISPLAY returns nothing), or the session is not authorized to connect (often reported as "No protocol specified").

Suggestions from the thread:
- From a shell logged in at the desktop, run "xhost +"; it will still work afterwards. Be aware this is insecure, since any other graphical (X11) client could then sniff data from the machine or take screenshots.
- Over ssh, use X11 forwarding; X11Forwarding was enabled by default here. The client then runs on the remote machine and receives the graphical output, and the DISPLAY variable should be visible in the session.
- To blank the monitor remotely: export DISPLAY=<machine name>:0; xset dpms force off. Note that localhost and 127.0.0.1 are treated as equivalent here.
- It is also possible to authorize a root session (started via "sudo -i") from the desktop user's session with xhost, as above.
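The DISPLAY fix above can be wrapped in a small helper. A minimal Python sketch (a hypothetical helper, not from the thread; the XAUTHORITY parameter is an assumption for running as a different user, e.g. root):

```python
import os

def xset_command(display=":0", xauthority=None):
    """Build the argv and environment for `xset dpms force off`.

    xset fails with "unable to open display" from an ssh or `sudo -i`
    shell unless DISPLAY (and, when running as another user,
    XAUTHORITY) point at the running X server.
    """
    env = dict(os.environ)
    env["DISPLAY"] = display
    if xauthority is not None:
        env["XAUTHORITY"] = xauthority
    return ["xset", "dpms", "force", "off"], env
```

Usage would be something like: cmd, env = xset_command(":0"), then subprocess.run(cmd, env=env, check=True).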
will start playing around with it soon!
I decided to try this program out to edit PlLgWh.dat but it didn't load all the textures. I think it only left out these textures: eye closed, eye half closed, eye hurt, eye looking left, and eye looking right.

This happened to me with PlZdRe.dat; it didn't load all of the eye textures.
you know... resizing the DAT isn't all that hard :/
you just have to change the pointers that link to data after the affected (resized) portion of the DAT.
you can easily use the pointer table to figure out the locations of every valid pointer and where they point to, so you change them appropriately

I know. I just haven't gotten to implementing it yet.
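The pointer fix-up described above can be sketched roughly as follows. This is a hypothetical illustration, not Melee Toolkit code; it assumes big-endian 32-bit pointer values and that the offsets come from the DAT's pointer/relocation table:

```python
import struct

def fixup_pointers(data: bytearray, pointer_offsets, resize_at: int, delta: int) -> None:
    """Shift every pointer that targets data after a resized region.

    data            -- the DAT's data section as a mutable buffer
    pointer_offsets -- offsets where pointer values are stored (from the table)
    resize_at       -- offset where bytes were inserted or removed
    delta           -- how many bytes the region grew (negative if it shrank)
    """
    for off in pointer_offsets:
        (target,) = struct.unpack_from(">I", data, off)
        if target >= resize_at:
            struct.pack_into(">I", data, off, target + delta)
```

For example, after inserting 8 bytes at offset 0x20, a pointer to 0x40 becomes 0x48 while a pointer to 0x10 is left alone.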
How do you edit move properties? Like the power of a move, like BKB and KBG?

Melee Toolkit v1.03 Alpha
NOTE: This program is in alpha. Please back up any files before modifying!
* You can now edit certain values in .dat files using the node value table.
* CToolsWii was re-introduced because LibWiiSharp cannot convert to CMPR. It is used only for converting to CMPR and nothing else.
* Editable node values are dynamic and displayed in bold. However, editing these values still does nothing.
* Fully migrated from CToolsWii to LibWiiSharp. All texture formats are now supported for extract/replace. Replacing an indexed texture is limited to as many colors as was used in the original texture.
* Bug fixes
* Initial release
Introducing a new program: Melee Toolkit. This program will provide easy and automatic .dat and .iso editing.
Features (for first release):
Functionality is somewhat limited for the time being. You can export/replace files from a disc image and export/replace character textures, and that's about it. Not all texture formats are yet supported.
- Export and replace textures in .png format
- Load and save directly to .iso, plus export files
- Edit .dat node values on the fly
Patch .iso between v1.00 and v1.02(future release)
Please report bugs!
I'm providing an early release so I can hopefully catch some bugs early before going crazy with features.
How do you edit move properties? Like the power of a move like BKB and KBG

This program is currently not capable of doing that, as most node information for character .dat files has not been added. If somebody would like to add this functionality, I'm willing to help (though this is a backburner project so I won't be adding it myself anytime soon).
This program is currently not capable of doing that, as most node information for character .dat files has not been added.

Is there a program that can do that?
I believe the program known as Master Hand will show you many move properties as well as the file offset, but you must edit them yourself in a hex editor.
Where do you find this program? I searched around for it before and couldn't find it

http://www.smashboards.com/threads/tool-master-hand-v1-20-melee-character-file-viewer.313930/
Is there a reason audio replacement is so low on the list? 'cause I do it more often than anything else and it takes forever.

Mainly because the format is not fully understood, so it would take a ton of work for a flawless implementation. I highly doubt it will happen unless someone else decides to contribute to this project.
Hmm, I just found the link to this thread. I'll check out the source tonight and see what can be done about maybe getting an editor for move stat values up and running. Caveat: I've never worked in C# before (only Python/C/Matlab) so it will take me a while to figure out how anything is done.
@ Dan Salvato Could you please create a git repo of the source for the Melee Toolkit?

The repo is currently public on Google Code. The link is in the OP. Unless you were specifically asking for a git repo as opposed to the current one?
Hi, I use git, and for some reason I get "bad url" when trying to use git-svn with that repo link.

Okay, I imported the existing repo into a new BitBucket repo and we'll use that one from now on. I prefer BitBucket anyway, and maybe now you can branch/merge instead of fork, since I'm working on some stuff as well. If you PM me account info I'll add you to the project.
Scalare
A minimal (and elegant) Twitter client
download v1.0b61 (3.7mb)
What is it?
Scalare is a Twitter client built to be as minimal and unintrusive as possible. It shows your friends' tweets as Growl notifications. It runs on Mac OS X 10.5 and above.
Scalare is built with simplicity in mind. It is meant for the casual twitter user that doesn't want to be distracted too much. I chose a windowless UI because that's how I think Twitter should work; minimal messages you read and keep going. Feel like sending a tweet? Just press the easy key-combo, type what's in your mind, and you are back to work.
It allows you to send tweets without stopping your workflow. And because Scalare uses Growl, the customization options are endless. Go into System Preferences and set your custom display styles under the Growl preference pane.
How to use it?
Scalare resides in your menubar; it doesn't take any precious space in your Dock. Scalare will fetch new tweets every 90 seconds and display them through Growl. Click on a notification to reply to a tweet.
To send your tweets, just press CTRL + ↑ or your favorite keyboard combo and the Scalare input window will show up. Just type in your tweet, and press enter. Done!
To send a Direct Message, use another key combination, and the DM window will show up. You know what to do next.
Tips
Alt + click on a notification's "cross" closes all Growl notifications at once.
Lots of customization is available through Growl. Set different notification styles for different kind of updates (replies, DM, updates), including custom sounds. Go to the Growl Preference Panel under System Preferences to set custom behaviors for the Scalare Application.
Known Issues
Requires Growl to work properly. It's embedded in the application and will offer to install if not found.
*And remember, This is beta software! Scalare was first coded in under 24 hours. Don't expect extreme polish.
Is it free?
Yes! If you really want to, you can always contribute to my PayPal account. If you want to include it on any CD-ROM or other media for mass distribution, please e-mail me.
v1.0b61 - 4/1/2010
-Fixed some old references to the old app name in the UI.
-Reinterpret '<' and '>' to show properly on tweets.
-Fixed some typographic errors.
v1.0b6 - 3/1/2010
-Renamed to "Scalare" due to trademark issues.
-Fixed the whole ID overflow thing. No more missed tweets.
-Added visual feedback when replying to a tweet.
-Added OAuth authorization, got rid of login & password authorization.
-Scalare's windows fade in and out when opening and closing.
v1.0b53 - 12/9/2009
-Fixed IDs bug where no updates would be shown.
-Growl notifications are sorted by type (reply, DM, etc).
-Growl notifications have more appropriate titles.
-Less verbose.
-Updated to latest Growl Installer framework.
v1.0b52 - 31/8/2009
-Error messages are not set as sticky any more.
-Updated embedded Growl-WithInstaller Framework to 1.1.6
v1.0b51 - 21/6/2009
-Fixed problem related to ShortcutRecorder that caused a crash at launch on PPC machines
-Fixed bug where some DMs would not be shown
v1.0b5 - 21/6/2009
-Fixed Twitpocalypse bug by updating MGTwitterEngine.
-List and send Direct Messages
-Updated embedded Growl-WithInstaller Framework to 1.1.5
-Replies are really replies now (sorry!)
-Finally show replies (@) and Direct Messages
-Keyboard shortcuts are now configurable
-Added keyboard shortcut for Direct Messages
-Handle tweets with multiple URLs
-Input windows always on top (won't get lost anymore)
-Reply to a Direct Message by clicking on it
-More Growl categories to allow better customization
-Report when reaching Twitter limits
-Display order options (ascending, descending)
-Updates and replies shown in chronological order
-Option to set number of updates to fetch
-Standard defaults
-I suspect all this will require OS X Leopard (sorry!)
-Preferences affect menu looks (shortcuts and # of tweets)
v1.0b4 - 1/3/2009
-Notify of available Scalare updates through Growl.
-Request updates through last ID instead of last date.
-Added option to get the latest 15 tweets (even if they are old)
-Bugfix: no TinyURL conversion if URL is short enough (thanks Andy!)
-Bugfix: no masking needed on images with alpha channel
-Bugfix: no Core Animation layer needed >> less memory used.
-Bugfix: corrupt images from Twitter could make a notification become non-sticky.
v1.0b3 - 22/2/2009
-TinyURL support; pasted links are shortened.
-Input field allows more than 140 chars, but will cut at 140 when sending.
-Input field informs of current tweet length.
-Delayed notifications option; new tweets show one after another.
-Smaller memory and CPU use.
-Clicking on a notification with a link will open that link.
-Added tooltips to preferences window.
v1.0b2 - 21/2/2009
-Check for Scalare updates.
-Tweets made with Scalare should be stated as such.
-Don't show duplicated notifications when sending a tweet.
-Fetch updates every 90 seconds instead of 5 minutes.
-Due to a possible bug in Growl, you might not get notifications. Please make sure you start Growl from System Preferences.
v1.0b1 - 19/2/2009
-First public release.
In Hindu mythology, Lord Vishnu is said to sleep while floating on the cosmic waters on the serpent Shesha. In the Puranas, Shesha is said to hold all the planets of the Universe on his hoods and to constantly sing the glories of Vishnu from all his mouths. The term Nāga is used to refer to entities that take the form of large snakes in Hinduism and Buddhism; Brahmins associated the naga with Shiva and with Vishnu, who rested on a 100-headed naga coiled around Shiva's neck. Snakes are worshipped as gods even today, with many women pouring milk on snake pits (despite snakes' aversion to milk). Shape-shifting serpents are commonly known in Hindi as "Ichchhadhari" snakes. The Irulas generally catch snakes with the help of a simple stick; they are also known to eat some of the snakes they catch, and are very useful in rat extermination in the villages.

In Greek tradition, a snake held in a cage in the temple of Athena in Athens was believed to be the reincarnation of Erichthonius, an early king in ancient Greece. Both the Lernaean Hydra and Ladon were slain by Heracles. Python was the chthonic enemy of Apollo, who slew her and remade her former home as his own oracle, the most famous in Classical Greece. In fighting and killing the snake, the companions of the founder Cadmus all perished, leading to the term "Cadmean victory" (i.e. a victory involving one's own ruin). Snakes entwined the staffs both of Hermes (the caduceus) and of Asclepius, where a single snake entwined the rough staff; serpents are connected with venom and medicine.

The Celts associated snakes with wisdom, fertility and immortality, and tended to connect them with healing pools and water. In China, and especially in Indochina, the Indian serpent nāga was equated with the lóng or Chinese dragon. In one African creation myth, an ancient god created the sun, the moon and thereafter the earth, which he fashioned from a lump of clay; the god also created a set of twins, the primitive beings, called Nummo. In Dahomey mythology of Benin in West Africa, the serpent that supports everything on its many coils was named Dan. "The snake dance is a prayer to the spirits of the clouds, the thunder and the lightning, that the rain may fall on the growing crops."

In the early centuries AD, the ouroboros was adopted as a symbol by Gnostic Christians, and chapter 136 of the Pistis Sophia, an early Gnostic text, describes "a great dragon whose tail is in its mouth". In medieval alchemy, the ouroboros became a typical western dragon with wings, legs, and a tail. Imperial Japan was depicted as an evil snake in a WWII propaganda poster. The anthropologist Lynn Isbell has argued that, as primates, we carry the serpent as a symbol of death in our unconscious minds because of our evolutionary history; this connection depends in part on the experience that venomous snakes often deliver deadly defensive bites without giving prior notice or warning to their unwitting victims.

On the biology side: snakes use smell to track their prey, and the underside is very sensitive to vibration. Front limbs are nonexistent in all known snakes, and terrestrial lateral undulation is the most common mode of terrestrial locomotion for most snake species. All modern snakes are grouped within the suborder Serpentes in Linnean taxonomy, part of the order Squamata, though their precise placement within squamates remains controversial. A younger snake, still growing, may shed its skin up to four times a year. At the other end of the scale, the smallest extant snake is Leptotyphlops carlae, with a length of about 10.4 cm (4.1 in). Most snakes focus by moving the lens back and forth in relation to the retina, while in the other amniote groups the lens is stretched.
React JS variable and text field not updating
I am currently using Polaris components to build a Shopify app. Essentially, I want to update a field based on a previous value. I have a connection to Google Firebase that a textbox is bound to, and you are then able to edit the textbox - so the initial state of the text is whatever is in the DB. Then you press submit and update the DB.
The update is fine and I can get data from the database and load it into the box, but whenever I set the textbox to the retrieved data (with setEmail(loadedEmail);), I cannot edit the box. Whenever I delete, type, paste or whatever, it reverts back to the value found in the database after a split second. It is as if each time I type into the box, it re-loads from the database, and whatever it was initially cannot be changed.
This has completely stumped me. Here is the current state of the code; I think the problem is near the top, with the 'setEmail' part:
export default function FormOnSubmitExample() {
  let loadedEmail = "Enter Email"; // this is only seen if the database cannot find records (because it is the first time they have seen the screen)
  const db = firebase.firestore();
  const docRef = db.collection('example').doc('testuser1');

  docRef.get().then(function(doc) {
    if (doc.exists) {
      console.log("Document data:", doc.data().email);
      loadedEmail = doc.data().email; // set the initial textbox to this (but does not work - "Enter Email" from the declaration is seen in the textbox instead, as if it has not worked.)
      setEmail(loadedEmail); // When this is not commented out, I cannot edit the textbox and the data from the database cannot be found. Without this command, despite the 'state' variable being set to the same data, the textbox will not update.
    } else {
      console.log("No such document!");
    }
  }).catch(function(error) {
    console.log("Error getting document:", error);
  });

  const [email, setEmail] = useState(loadedEmail); // originally just '' as a blank field; now holds the value that was entered previously and stored in the db, but only if 'setEmail' is used at the top

  let updater = email; // this is used because I cannot pass 'email' from the textbox into the function otherwise

  const handleSubmit = useCallback((_event) => {
    setEmail('');
    const db = firebase.firestore();
    db.collection('example').doc('testuser1').update({
      email: updater // this is used because I cannot pass 'email' from the textbox into the function otherwise
    })
      .then(() => alert("Updated Database"))
      .catch(() => alert('Something Went wrong'));
  }, []);

  const handleEmailChange = useCallback((value) => setEmail(value), []);

  return (
    <Form onSubmit={handleSubmit}>
      <FormLayout>
        <TextField
          value={email}
          onChange={handleEmailChange}
          label="Email"
          type="email"
          helpText={
            <span>
              We’ll use this email address to inform you on future changes to
              Polaris.
            </span>
          }
        />
        <Button submit>Submit</Button>
      </FormLayout>
    </Form>
  );
}
I am quite new to React, but I have a few years of programming experience in other languages, so I am confused as to why the variable is not updating; it may be that I am missing something obvious! My guess is that when 'setEmail' is not used, the textbox loads first and the display is not altered again after that initial render - allowing me to edit it but not display the db info. When 'setEmail' is used, it updates many times, preventing me from changing it, or, for whatever reason, will not let me edit it.
I think it is somehow related to this answer/question -> https://stackoverflow.com/a/55266240/13964958
It's so annoying feeling like I am almost there but I can't both edit it and have the data!
I believe the issue here is that you have your call to the DB in the component body, which runs every time a re-render happens. So this is happening:
User changes the textarea (handleEmailChange is called)
State is updated, triggering a re-render (due to setEmail(value))
All your code is run again (including the db call)
Inside the db call you use setEmail again, which sets the textarea back to what is inside the DB
So what you could do here is use the useEffect hook, and only run the db call on mount to get the initial value.
useEffect(() => { db call }, [])
The empty array at the end lets React know you only want this to run when the component is initially rendered.
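The loop described above can be sketched without React or Firebase at all. In this toy simulation (the function names and string values are illustrative, not from the question's code), the "component body" is a function that gets re-run after every state change, just like React re-runs a function component:

```javascript
// Toy model of the render loop. If the DB fetch lives directly in the
// component body, its setEmail call re-fires on every re-render and
// overwrites whatever the user just typed.
function simulate(fetchInBody) {
  let state = "";
  const setEmail = (value) => { state = value; };

  // One "render": React re-executes the whole component body.
  const render = () => {
    if (fetchInBody) {
      setEmail("db@example.com"); // the .then() callback resetting state
    }
  };

  render();                 // initial mount: state holds the "DB" value
  setEmail("user typing");  // user edits the field -> state change -> re-render
  render();                 // body runs again, including the fetch
  return state;
}

console.log(simulate(true));  // "db@example.com" - the user's edit was clobbered
console.log(simulate(false)); // "user typing" - fetch ran only on mount,
                              // which is what useEffect(..., []) gives you
```

With `fetchInBody` true, the final render wins and the user's input is lost; moving the fetch out of the body (as `useEffect` with an empty dependency array does) leaves the user's edit intact.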
Here is a really simple codepen showing what I mean: https://codepen.io/serp002/pen/mdPgYma
You can see that every time you type something the console log appears, and in your case this is actually a function that is resetting the state back to the value in the DB.
If you throw a console log into your db call, you should be able to see it being called every time you change the value in the textarea.
Building a 'free' AWS site-to-site VPN using OpenVPN and EdgeRouter X
I’ve been trying to upskill myself on AWS, in anticipation of taking on a “Cloud Architect” role with my new employer.
I decided to setup a site-to-site VPN between my Virtual Private Cloud (VPC) in AWS, and my home network. The VPN will allow me to deploy a monitoring host in AWS, and use it to monitor all of the gear on my home network. I’ll also be sending flow and syslog data from devices into an Amazon Elasticsearch instance for analysis and visualisation.
My home router is a NZ$100 Ubiquiti EdgeRouter X, which I’ve found to be excellent value for money. I found an article by Ubiquiti (from December 2018) on exactly this configuration, so I got to work to start building it.
I followed along until, while setting up the site-to-site VPN, I checked the AWS site-to-site VPN pricing, and discovered that my little test VPN would cost me US$36/month.
Reasoning that there must be a dirty workaround since VPNs can run in software, I started searching for an OpenVPN solution, and soon found exactly what I hoped for.
OpenVPN have made an AWS AMI available for their “Access Server” product, which is free for up to 2 users. (Provided you’re eligible for the Free tier, which I think means you’ve been signed up for less than 12 months.) I installed mine on a t2.micro instance, since I only intend to connect a single client (my ERX), and I don’t foresee doing any major throughput.
I followed the instructions, including:
- Subscribing to the AMI
- Generating a key
- Assigning an Elastic IP to the instance
- Adding a DNS record pointing to the Elastic IP
- SSHing into the instance using the generated key
- Running the “first start” wizard
- Setting my password by running
sudo passwd openvpnas
- Logging into https://<dns-pointing-to-elb>:943/admin/ with user openvpnas and the above password
- Changing the “Hostname or IP address” value under Network Settings to match the DNS record I’d created
I created a new user under “User Permissions”, and I set this user up to …
- Be able to auto-login (necessary for a non-interactive/human client)
- Use routing for access control (172.31.0.0/16 is my entire VPC range)
- Act as a VPN gateway (192.168.29.0/24 is my home IP range)
Although I enabled auto-login, I still needed to setup a password for my “erx” user locally on the access server host, as follows:
openvpnas@openvpnas2:~$ sudo useradd erx
openvpnas@openvpnas2:~$ sudo passwd erx
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
openvpnas@openvpnas2:~$
Then I logged into https://<my elastic ip>:943 as the “erx” user (password set above), and downloaded the auto-login config, as a single .ovpn file.
I SCP’d the .ovpn file to /config/openvpn/ (I had to make this directory) on the ERX, and then applied the following in configure mode:
set interfaces openvpn vtun0 mode client
set interfaces openvpn vtun0 config-file /config/openvpn/awsvpn.ovpn
I watched the logs by running
run show log tail
and saw that my first attempt resulted in errors like this:
TLS_ERROR: BIO read tls_read_plaintext error: error:140760FC:SSL routines:SSL23_GET_CLIENT_HELLO:unknown protocol
To make OpenVPN play nicely with the ERX, I also had to change the TLS option under Configuration -> TLS Settings from the default of “TLS 1.2” back to “TLS 1.0”, for reasons explained here.
After changing the TLS setting, the VPN established on both sides, and my openvpn access server can ping a linux box at home behind the ERX :)
openvpnas@openvpnas2:~$ ping 192.168.29.3
PING 192.168.29.3 (192.168.29.3) 56(84) bytes of data.
64 bytes from 192.168.29.3: icmp_seq=1 ttl=63 time=44.0 ms
64 bytes from 192.168.29.3: icmp_seq=2 ttl=63 time=44.2 ms
64 bytes from 192.168.29.3: icmp_seq=3 ttl=63 time=44.3 ms
^C
--- 192.168.29.3 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 44.022/44.200/44.359/0.279 ms
openvpnas@openvpnas2:~$
I haven’t yet made the routing table changes in the VPC to forward traffic from my instances to my home network via the OpenVPN tunnel, but that’ll be the next step!
iOS 8: Learning Swift
Apple developers pride themselves on thinking differently. This isn’t always easy though, especially if it requires learning a new language that’s still in beta. What can help us stay (or get back) on track so we’re ready to begin using Swift in the future?
The hardest part of learning a new programming language isn’t the lack of documentation or learning resources, it’s maintaining the motivation needed to move forward and keep learning.
As you begin learning Swift, try to identify the things you want to get out of the experience. Learning a new language for the sake of learning, or just because it’s new, isn’t a good enough reason to stay motivated. With so much information available already, it can be difficult to focus on what you want. Don’t get hung up on the syntax, or on what other people have already learned. In the long run, it’s not the language that matters, it’s what you do with it. Explore it, but make it work for you. Don’t attribute to it more than it deserves, but also be willing to accept its limitations. In the end, Swift is only a collection of techniques used to do interesting things.
Stop reading, start programming
Don’t feel as though you need to wait until you’ve finished the Swift Programming Book or have watched all the WWDC videos to begin using it. By starting on small projects, you will begin to feed your curiosity. It’s important not to know everything before you begin. All the bits you don’t know will refuel the curiosity you’ll need to further dive into the language.
As you learn, you’ll experience different levels of success. The small milestone of being able to compile your first app, even if your code is horribly disfigured, should be something to be proud of.
By using Swift today, you’ll learn as you code. Small projects let you aim for achievable results. This is much more useful than learning the syntax now and then waiting for an opportunity to use it “someday”. Be specific and realistic about what you want to do and when you want to do it by. Nobody is perfect. There will always be some element you can improve on - accept this and just get started.
Focus on the things you want to learn about. Diving into the things you’re passionate about first will make other aspects of this new language much easier to learn when you come back to them.
Focus on paradigms over syntax. Learn about the ways Swift encourages you to think about problems. What core features make it like other languages? What makes it unique?
Change things up
If you find the Apple docs, or a series of blog posts aren’t helping you maintain your interest, perhaps a non-linear approach to learning would work for you. Don’t be afraid to chase tangents. As with learning anything new, you’ll eventually hit a wall. Don’t panic, this is normal. This is usually our brains telling us to try a different angle, or to step back and take a break.
Learn with others
Most of us learn better when we have feedback on what we’re learning about and on how well we’ve learned it. If, in a month, you’re having a difficult time sticking with Swift, try to find some other people who are learning it as well. They could be online, or part of a local group like Cocoaheads. Learning with others can give us the extra bit of motivation we need to keep going.
As you grow comfortable with the language, your learning approach will bend and flex based on your interests and the foundation of knowledge you’ve built. Try to keep things fresh. Make the language work for you, and look for opportunities to think differently. If you do, you’ll see results now and in the future.
A good Web Browser and File Manager can save you a lot of time. Today, I evaluated new programs for Windows in order to get a little bit more efficient.
I have dumped Internet Explorer (IE) the moment I installed Windows XP. Its user interface is really out-dated, so no more comments on this. The rendering engine, however, seems really fast, although from the view of a Web developer it is a nightmare to support.
My all-time favorite is the commercial NetCaptor (v7.5.1). It has the IE engine under the hood but provides very useful features, such as tabbed browsing, bookmark groups (called CaptorGroups), mouse gestures, URL aliases (e.g. “g” for google search; the search terms are inserted into the URL through the “%s” parameter in the alias definition). OK, this is now also all available in some other browsers (Opera, but not Mozilla and Firefox). But still, NetCaptor has it all simply done right – just the way I like it. And it has a lot of small, useful features everywhere you look. For example, you can
- move tabs to other positions
- have tabs in multiple lines
- close them by double clicking
- have them opened next to the current tab
Next comes Mozilla 1.7. I actually used only the mail client for IMAP. (I have dumped my beloved TheBat! a year ago since it didn’t support IMAP nicely then; now it does, but too late for me.) Compared with NetCaptor, Mozilla felt heavy-weight and missed a lot of my beloved features out-of-the-box. Thus in daily use I only start the web browser to verify sites I developed.
Two weeks ago I also dumped the Mozilla mail client and switched to Mozilla Thunderbird. It’s actually nearly the same, but differs in some details. (Tip: You can also copy your Mozilla profile, see the Mozilla FAQ.) But most importantly, it does not have the annoying bug I found in Mozilla mail: nested IMAP folders (2+ levels) are not displayed. Thunderbird loads quickly and is really nice and slick. One feature I miss, though, is the ability to minimize it into the desktop tray (something TheBat! always had). Another missing one is support for multiple signatures (e.g. depending on recipient properties), or even better, parameterized mail templates.
Mozilla Firefox 0.9 was released lately, so I gave it a try. Is it better than NetCaptor? Well, feature-wise they both are nearly equal, but I had to install and configure a lot of Extensions:
Additionally, the “Web Developer” extension is a must-have for all, well, Web developers. I also installed a new theme “Noia 2.0 (eXtreme)” and dumped the boring standard theme. But still, Firefox takes a little more time to start, and the Web search is simply not as good, since you have to select the engine manually. Compare this to NetCaptor (and Opera etc): here you simply type e.g. “g mozilla” in the location bar and voila. Another thing I don’t like is the slow speed of mouse wheel motion (press middle button and move up- or downwards). It takes seconds to get to the top or bottom. (I have searched but it seems there is no possibility to jump there by mouse gesture.)
I also installed Opera 7.5.1. It’s really good looking, has a great user interface and is very slick. I think it is actually equal to NetCaptor, out-of-the-box (ignoring additional tools like the download manager). Being equal I think I’ll stick to NetCaptor for a while and gradually move over to Opera. (One sidenote: the IMAP client in Opera is bad and misses a lot of features that Thunderbird/Mozilla etc had for years, so again, I would use only part of the software.)
Windows Explorer is what I used before. It’s OK, sometimes I wished for a better navigation, mouse gestures etc. So, yesterday I installed and tried a lot of the alternatives (free and commercial) including all *Commander (* = EF, AB, Speed, Total), ExplorerPlus etc.
The result: I’ll stick to Windows Explorer. The reason? All of the alternatives are either simply bloated, offer a stupid and fixed two-window layout, have no “undo” for file operations (wow, how can this be ignored?), are high-priced, or crash now and then. Two explorer alternatives were really nice, though, and I’ll watch their future development:
- FileAnt is a minimalistic explorer. What is really nice is that you can have two folder views side by side, with each side having as many tabs as you like (as in tabbed browsing, but with folders instead of pages). The third column is the desktop hierarchy. FileAnt has a minimal editor and multimedia preview, and also offers fast navigation through several means (right click, click in a blank area etc.). But… it’s shareware, has no “undo” for file operations, and has a very annoying bug: playing a movie with the built-in player can only be stopped by quitting FileAnt!
- xplorer2 is also very slick, and very powerful w.r.t. file operations, selections and searching. The downside is the price of $19 for the pro edition. I would buy it if it offered an “undo” for file operations.
I really don’t understand why “undo” is missing in nearly every explorer product. It is a lot more useful than a preview or a rename tool!
I’ve been working (my ass off and compromising my job and social life) on this game called Fez for a couple of months, and 5 hours from the Independent Games Festival entry submission deadline, it’s finally OVER!
Well, the demo. But it’s a full level, with dialogues, collectibles, sound, music and a pretty full demonstration of the game’s concept… which I can’t show too much of right now, especially the concept and what’s original about it, but here’s some in-game material that I can release. It’s a screenshot, taken directly from the game, no mocking-up here.
I’m really proud of the end result, especially how much it evolved in like, two weeks. The game was basically created in a one-month sprint, the 4-5 months before that were slow engine structure development, XNA exploration, error & trial and rewriting… But in the last month, it really took shape as a game. And it’s something I’ve never done before, an actual game. It feels… so much more constructive than anything I’ve done before.
Once the secrecy of it all disappears (so when the IGF entries have been announced), I’ll post a series of dated shots that demonstrate how much the game has grown in such little time. It’s amazing, and it’s been exhausting…
Because of err… technical limitations, it won’t be available right away for everyone. I didn’t have the time to make a codepath for older hardware so it runs only on SM3.0-compatible video cards, and was only tested on nVidia hardware. But I expect this other codepath will come in the next months.
8 thoughts on “Fez is finally ready.”
Looks Great Zak! I’ve been wondering what you were working on (because you haven’t been posting much in the usual places lately). Good luck at the IGF, can’t wait to see more of the game.
Platformers are the best, I’m also making one in 3d with my friends.
Looks excellent and simple!
Don’t know what’s IGF but I wish you good luck!
Since this is your latest post I am going to ask this question here and hope you will see it. I can’t compile a single sample demo. I have both TV3D SDK 6.3 and 6.5 installed, as well as Visual Studio C# 2005 Express and XNA. Whenever I try to compile any of your samples in VS2005, it stops and an error message pops up saying VS cannot start debugging because the debug target bin\Debug\whateversample.exe is missing: “Please build the project and retry, or set the output path and assembly name properties appropriately to point at the correct location for the target assembly.” By the way, I wrote whateversample.exe because it says the same thing for whatever the project is called. I have tried different ways to change the path of the file and still nothing. Can you please tell me what is wrong and how I can fix this?
@JayJay: I have not heard this one before, so I can’t really tell what’s wrong here. Make sure you remove the C++ project from the solution if there is one, and that the MTV3D65 reference is OK; you have to reset it to your DLL path. Otherwise, try to delete all the output files and do a full rebuild. I don’t really know, sorry!
I figured it out after all. I just forgot to add the TV3D media and engine references to VB! It works now, but thanks for trying to help!
Wow!! This looks brilliant! :D
What a fantastic idea to render from orthographic views and to flatten out the depth in your collision, and be able to switch between different axis-aligned views! Excellent! :D
That’s a really clever idea, why didn’t I think of it? It’s actually a higher-dimension universe. I’ll have to think about how many.
Really cool, that game :)
It's been almost a week already since we indulged ourselves in a 48-hour period of full-on NoSQLness. About time to report on that, no?
It all started last Sunday meeting up with Lars George, self-proclaimed EU HBase ambassador to talk shop about HBase and our decision to use HBase as the underlying foundation of our next-generation content store. After sharing a couple of beers and lamenting the rather silent state of the NoSQL movement in Europe, we felt we could be part of the solution rather than the problem so there's a good chance we'll try and organize a NoSQL meetup somewhere in Spring of next year, hopefully being able to share some more in-depth war stories from our own experiences.
After beers and a good night of rest, we spent a good part of the day discussing HBase and Lars' own experiences with it, and I must confess to being impressed with Lars' guts to go down the HBase route at a pretty premature stage of its infancy - good to hear it has mostly been living up to its promises so far. Here are some meeting notes from our meeting (PDF), verbatim.
In the late afternoon I was expected in Antwerp for my Devoxx "Tools in Action" talk on NoSQL (with some focus on HBase). It went OK, though it's hard to fit any kind of coherent story into a 30' talk (and my time management is obviously pretty bad). I gave the audience (about 200 people) a very short overview of our reasoning behind moving from My- to NoSQL for our CMS platform, presented "the classics" (CAP and BASE) and then gave a very short HBase intro to finish. I got some questions afterwards as well, which in my mind translates as "people found it interesting enough to want more". Oh, and I only went 5' behind schedule! :-)
After the talk, I went down to the BOF rooms to be greeted by Lars and my Outerthinkers setting up the BOF room. Evert taught us about the fishbowl discussion technique, which was pretty interesting and a cool way to "warm up" a room full of techies into actually having an interactive group discussion between strangers. I can hardly overstate the importance of Lars' presence during the BOF, as the planned generic NoSQL theme quickly converged into HBase-only chatter; however, many of the common challenges, problems and design constraints are pretty similar among the different NoSQL solutions.
We didn't mind the BOF becoming HBase-centric, if only because it showed that next year Devoxx should accommodate a full NoSQL track (or a set of BOFs), given the current interest in the subject. With 50 people attending a late-evening BOF session, I'm sure there's interest for more!
Related, don't forget to check out the Xebia blog report on NoSQL/Devoxx as well!
Foundational texts of Shatdarshanas
The six orthodox schools are called shatdarshanas and include Nyaya, Sankhya, Yoga, Vaisheshika, Purva Mimamsa, and Uttara Mimamsa (Vedanta philosophy). Most of these schools of thought believe in the theory of karma and rebirth.
What are the principal texts of each school of philosophy and interpretation here?
(Just like Upanishads and Vedas for Vedanta) - Their foundational texts?
As far as I know, I can recall only Yoga Sutras of Patanjali in Sankhya school of philosophy as a fundamental text.
The 6 orthodox (astika) schools of Indian philosophy and their main texts are:
Samkhya:
The key text is the Samkhyakarika by Ishvarakrishna.
Yoga:
The main texts are the Yogasutras by Patanjali and the Yogabhashya.
Nyaya:
The Nyayasutra by Aksapada Gautama and Nyayabhashya by Vatsyayana are the foundational texts.
Vaisheshika:
The Vaisheshika Sutra by Kanada contains the concepts.
Mimamsa:
The Purvamimamsa Sutras by Jaimini and Shabarabhashya are the key texts.
Vedanta:
There are 3 main Vedanta schools - Advaita (key text is Brahmasutra Bhashya by Adi Shankaracharya), Vishishtadvaita (Brahmasutra Bhashya by Ramanujacharya) and Dvaita (Brahmasutra Bhashya by Madhvacharya).
The main sutra texts summarize the core philosophical concepts, which are explained in detail through the bhashya commentaries on them by later authors.
Samkhyakarika is astika? From what I've read - the text is purely nastika (doesn't affirm existence of the god), neither ishvara being taken as supreme?
@AbhasKumarSinha Samkhya categorically rejects the existence of God. But it accepts the authority of the Vedas and the existence of souls and devas (Devas are not Ishvara, the creator; Devas are created beings).
@AmritenduMukhopadhyay Thank you so much. I'm a bit confused in case if you can help me. The answer here: https://hinduism.stackexchange.com/a/21976/29449 The first argument, Yoga seems to talk about brahman as god and Nyaya seems to talk about Ishvara (one who enforces the laws of karma) and they both seem different under the same heading of god? Right?
@AbhasKumarSinha The definition of God in the Yoga philosophy is the most interesting one. It is unique too. Yoga is based on Samkhya, so it accepts the entire doctrine of Samkhya. It is like an appendix to Samkhya. It added a few new things to the already existing structure of Samkhya. As Samkhya does not have a creator (the non-living Prakriti is the source of creation according to Samkhya. No intelligent being is involved.) Yoga also accepts that. According to Samkhya apart from Prakriti, there are many Purusha (consciousness). Some are Jivas born on this plane of existence. Some are Devas.
@AbhasKumarSinha Yoga says there is one Purusha who has infinite knowledge and never gets entangled in Prakriti. That is God. So the Yoga God is not Brahman and he is not the creator. Different Purushas have different amounts of knowledge. The "God" is infinitely knowledgeable, so knowledgeable that he never gets entangled in this material world like us (partially knowledgeable Purushas). You will find the definition of God in Yogasutra 1.25 or 1.24 if I am not wrong. The symbol of this God is Om. That is also found in another sutra, probably the next one.
@AbhasKumarSinha The Vedantic worldviews and Samkhya-Yoga worldviews are quite different. According to Vedanta, the world is an illusion and Brahman is real. According to Samkhya-Yoga, the world is real, and the intelligent creator is an illusion (does not exist).
@AmritenduMukhopadhyay That makes sense. Thank you for the answer. But who is the enforcer of karma here, then? I believe the second god, Ishvara, is the one who enforces the laws of karma in Advaita Vedanta. My best guess is that they don't believe in Ishvara, the one who enforces the laws of karma.
@AbhasKumarSinha that is an exciting question. I was also thinking about it. I have not encountered any verse in the Samkhya-Yoga system that directly talks about karma so far. You might write it as a question.
🤖 The Free Stable Diffusion Prompt Generator
Do you know what's even better than a super-intelligent text-to-image model that creates realistic images? A free tool that generates prompts for Stable Diffusion so you don't have to sweat it out!
You'll love these Stable Diffusion Prompts
Don't believe me? Check for yourself
With the rise of text-to-image technologies and automated image generation, the possibilities to generate images with just a few words are endless, thanks to technologies like Midjourney and Stable Diffusion.
But not everyone is good with words, and this is where learning prompt engineering comes in. But for those who don't have time to invest in learning prompt engineering, there's a stable diffusion prompt generator that is trained with the best practices in prompt engineering.
This stable diffusion prompt generator uses advanced natural language processing (NLP) algorithms to analyse your text and suggest the best keywords for generating images. It then gives you a stable diffusion prompt containing those words so that you can generate an image quickly and accurately.
But if you still want to learn Stable Diffusion Prompting, here's a guide for you -
What Is Text-To-Image Technology?
Text-to-image (or image synthesis) is a technology that uses text as input to generate images. It does this by using natural language processing techniques to understand the meaning of the words and then generate an image based on that understanding.
Several companies like Midjourney, Stable Diffusion, and Dall-E (by OpenAI) use the diffusion approach to generate images. This approach uses a "prompt" composed of keywords and phrases - generated by the user - to accurately describe an image that will be generated.
By diffusion, we mean that an image is created by applying a series of transformations to a base image. This will generate visually similar but different images, which, depending on the prompt, may look like something completely different from the original base image. For example, a prompt can be "a strawberry in the snow" and you will get an image that looks like a strawberry covered by snowflakes.
What Is a Stable Diffusion Prompt?
A stable diffusion prompt is a set of keywords and phrases that describes an image. This set of keywords is used as input for image synthesis algorithms to generate high-quality, realistic-looking images.
These prompts are detailed, specific, and usually include attributes like -
- Additional details
An example of a stable diffusion prompt is "a pink rose on a white background with delicate petals". A better prompt would be -
"A pink rose with delicate, round petals on a white background in the style of Monet, by Nikita Kravchenko"
How To Generate A Stable Diffusion Prompt?
Generating a stable diffusion prompt is relatively straightforward. All you need to do is provide some text as input, and the generator will suggest a series of words and phrases that it believes are best for generating an image.
For example, if you provide "a strawberry in the snow" as input, the generator might add keywords such as "snowfall", "cold", or "berry".
Once you have chosen the right keywords, all you need to do is put them all together to form a prompt. This is the most important part, since this is what will be used as input to generate your image.
Also, consider adding a negative prompt, just in case you need it. This means listing the things you don't want to see in the image. For example, you might add "red" to the negative prompt if you don't want your strawberry to be red.
How Does A Stable Diffusion Prompt Generator Work?
A stable diffusion prompt generator is an AI-based program that uses natural language processing algorithms to suggest a set of words and phrases for image synthesis.
The generator will take in some text as input, then use machine learning algorithms to analyse the text and generate a list of keywords that it believes are best for generating an image according to the input text.
The generator will also suggest a negative prompt, which helps reduce the chances of undesirable images coming out. For example, if you input "a strawberry in the snow", then the generator may suggest excluding things like "red" or "leaves".
Once all the words and phrases are chosen, they can be combined to form a stable diffusion prompt. This prompt can then be used as input to generate a high-quality image with the help of Stable Diffusion.
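As a rough illustration of that pipeline only (the real tool uses trained NLP models; the association table, names, and function below are invented for this sketch):

```python
import random

# Toy stand-in for the generator described above -- NOT the actual tool's
# algorithm. A hand-made association table substitutes for the learned
# keyword suggestions of a real NLP model.
ASSOCIATIONS = {
    "snow": ["snowfall", "cold", "soft light"],
    "strawberry": ["berry", "red", "macro photography"],
}
DEFAULT_NEGATIVE = ["blurry", "low quality", "watermark"]

def generate_prompt(text, seed=None):
    rng = random.Random(seed)
    # 1. analyse the input and collect candidate keywords
    extras = []
    for word in text.lower().split():
        extras.extend(ASSOCIATIONS.get(word, []))
    rng.shuffle(extras)
    # 2. assemble the positive prompt from the input plus a few keywords
    prompt = text
    if extras:
        prompt += ", " + ", ".join(extras[:3])
    # 3. suggest a negative prompt describing what we do NOT want
    negative = ", ".join(DEFAULT_NEGATIVE)
    return prompt, negative
```

Running `generate_prompt("a strawberry in the snow")` yields the original text enriched with a few suggested keywords, plus a generic negative prompt.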
June 25th, 2012, 08:17 AM
Backward button to go back to form from opened .exe program.
I am currently working on a program that has several buttons lined up in it. Each button opens a different application (.exe program). The program I am making is supposed to be a fullscreen program that won't let you see any of Windows while it's running. Therefore I want to make a "back" button that always stays on top of any application, and when I push it, it takes me back to my WindowsForm (the start page of the program). Any ideas?
I am also wondering if you can make it so each .exe program can only run ONCE. So if I press the button saying "open notepad", I don't want it to open Notepad again, leaving two Notepads open; it should just go back to the first Notepad opened. But if I close Notepad, I want to be able to open it again through one of my buttons.
Thank you for your time! // Kevin.
June 25th, 2012, 10:27 AM
You're going to need a lot of API to pull off all your requirements. I frankly don't know where to begin telling you to begin, except suggesting that a lower level language (C based) would be the better tool for this job.
“Today you are You, that is truer than true. There is no one alive who is Youer than You.” - Dr. Seuss
June 25th, 2012, 11:35 AM
I am not sure what you mean by the first part, but to ensure that an application only starts once, I use the following:
The code above is for one form (frmTest) and one module (module1). Notepad is not a very good example, because each Notepad window can have a different title depending on the file it is editing. But the example above will activate an "Untitled - Notepad" window and switch focus to it. If it already exists, focus is switched to it. If it is minimized, it is restored.
Declare Function FindWindow Lib "user32" Alias "FindWindowA" (ByVal lpClassName As String, ByVal lpWindowName As String) As Long
Declare Function ShowWindow Lib "user32" (ByVal hWnd As Long, ByVal nCmdShow As Long) As Long
Declare Function GetForegroundWindow Lib "user32" () As Long
Declare Function SetForegroundWindow Lib "user32" (ByVal hWnd As Long) As Long
Declare Function GetCurrentProcessId Lib "kernel32" () As Long
Declare Function WaitForInputIdle Lib "user32" (ByVal hProcess As Long, ByVal dwMilliseconds As Long) As Long
Declare Function GetWindowThreadProcessId Lib "user32" (ByVal hWnd As Long, lpdwProcessId As Long) As Long
Declare Function GetCurrentThreadId Lib "kernel32" () As Long
Declare Function AttachThreadInput Lib "user32" (ByVal idAttach As Long, ByVal idAttachTo As Long, ByVal fAttach As Long) As Long
Declare Function GetDesktopWindow Lib "user32" () As Long
Declare Function GetWindow Lib "user32" (ByVal hWnd As Long, ByVal wCmd As Long) As Long
Declare Function GetParent Lib "user32" (ByVal hWnd As Long) As Long
Private Sub cmdNotePad_Click()
    Dim Title$, A$
    Dim TaskID As Long
    Dim ErrorCode As Long
    On Error GoTo NotePadActErr
    Title$ = "Untitled - Notepad"
    TaskID = CheckUnique(Title$, 0)
    If TaskID = 0 Then                  'Not running yet, so launch it
        ' FlashBox.Show 0
        Screen.MousePointer = 11
        A$ = "\windows\system32" + "\NOTEPAD.EXE"
        If Len(Command$) > 0 Then
            A$ = A$ + " " + Command$
        End If
        TaskID = Shell(A$, 1)
        Screen.MousePointer = 0
    ElseIf TaskID < 0 Then              'Running, but an owned window has focus
        MsgBox "Program " + Title$ + " is active but not able to take focus.", 64
    End If
    Exit Sub
NotePadActErr:
    ErrorCode = Err
    If ErrorCode = 53 Then
        MsgBox "NotePad.exe Program could not be found!", 16
    End If
    Screen.MousePointer = 0
    ' Unload FlashBox
    ' Call LogError("^" + Str$(Err) + " Activate Acct")
    ' Call FatalError(ErrorCode)
End Sub
Function CheckUnique(ByVal FormName As String, hIgnore As Long) As Long
    'FormName is the caption of the desired form, hIgnore is the
    'window handle of the parent form to be ignored if already running.
    Dim hWnd As Long
    Dim ShowW As Long, SetF As Long, Pid As Long
    CheckUnique = 0
    hWnd = FindWindow(vbNullString, FormName)
    If hWnd = 0 Then Exit Function      'Window not found: not yet running
    ShowW = ShowWindow(hWnd, 9)         'Restore it in case it is minimized
    ShowW = hWnd                        'Save original handle
    SetF = GetForegroundWindow()        'Does not always return current app
    If hIgnore = frmTest.hWnd Then
        SetF = SetForegroundWindow(hWnd)
        SetF = GetCurrentProcessId()
        SetF = WaitForInputIdle(SetF, 10000)
        hWnd = GetForegroundWindow()
    Else
        hWnd = SetF
    End If
    SetF = GetOwnedWindow(hWnd)         'Get owned top level window
    If SetF = 0 Then                    'No owned Windows found
        CheckUnique = hWnd
        ShowW = SetForegroundWindow(ShowW)  'SetFocus to FormName
    Else
        CheckUnique = -SetF             'Return neg handle for owned window
        SetF = GetWindowThreadProcessId(SetF, Pid)
        SetF = AttachThreadInput(SetF, GetCurrentThreadId(), Pid)
    End If
End Function
Private Function GetOwnedWindow(hWnd As Long) As Long
    Dim OwnedHandle As Long
    OwnedHandle = GetDesktopWindow()            'get the desktop handle
    OwnedHandle = GetWindow(OwnedHandle, 5)     'get first top level window (GW_CHILD)
    Do
        If GetParent(OwnedHandle) = hWnd Then
            GetOwnedWindow = OwnedHandle
            Exit Function
        End If
        OwnedHandle = GetWindow(OwnedHandle, 2) 'get next top level window (GW_HWNDNEXT)
    Loop Until OwnedHandle = 0
End Function
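As an aside, a cross-platform way to get the "run once" behavior is an exclusive lock file. This is a different mechanism from the FindWindow-by-caption approach above (it keys on a file path rather than a window title), and the sketch below is illustrative only (the class name and path are made up):

```python
import os

# Sketch of a single-instance guard using an exclusive lock file.
# Not the Win32 technique from the thread; just an alternative idea.
class SingleInstance:
    def __init__(self, lock_path):
        self.lock_path = lock_path
        self._fd = None

    def acquire(self):
        """Return True if we are the only instance, False otherwise."""
        try:
            # O_EXCL makes creation atomic: it fails if the file exists.
            self._fd = os.open(self.lock_path,
                               os.O_CREAT | os.O_EXCL | os.O_WRONLY)
            os.write(self._fd, str(os.getpid()).encode())
            return True
        except FileExistsError:
            return False

    def release(self):
        """Drop the lock so another instance may start."""
        if self._fd is not None:
            os.close(self._fd)
            os.remove(self.lock_path)
            self._fd = None
```

A second `acquire()` on the same path fails until the first holder calls `release()`, which is the "only one Notepad at a time" behavior the question asks about, minus the focus switching.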
October 10th, 2012, 03:32 AM
For a non-internal Windows program
Ok! Thx man, this is great, really. But how would you do this example for a program called trombmobil.exe located at C://Program Files/Tromb 3.1/trombmobil.exe?
Originally Posted by couttsj
This is also a program that I only want to be able to open once. I thought I had this fixed, but there is something I am doing wrong; any help would be appreciated. Thx in advance.
October 10th, 2012, 09:36 AM
The name of the executable doesn't enter into it; it is the name in the title bar or form Caption that is used to determine if a particular window is loaded or not.
Originally Posted by Kevax
October 12th, 2012, 03:28 AM
Ok, now I get it! Thank you so much for your time
Originally Posted by couttsj
feature request: local cache store for self-hosted runners
Because I, as well as many others, am having issues with the caching service, a nice feature for self-hosted runners would be a cache local to the self-hosted runner, perhaps configurable by the runner itself.
currently for me, it's faster to not cache and just run npm ci than to load/save cache on self-hosted runners.
This will be very useful! We don't have the best connection to the backend store, it is usually faster to just download all the things again.
👋 Hey @jonathanong, could you describe your feature request a bit more?
Most tool ecosystems already have a local cache, this action's purpose is to save that local cache in a central location to share between runners. For a self-hosted runner, running workflows will naturally populate those local caches.
I am also interested in this feature and can explain my use case @joshmgross.
I use the cache action not just for node_modules or toolchain installed deps but for expensive compilation tasks internal to our app as well. It'd be nice to not have to build out a bespoke cache system to run on the self-hosted runner and instead leverage the existing caching logic of this action to store it locally instead of having to redownload these large binaries over and over.
example action steps we use
- name: Get Image CLI version
  id: image-cli-version
  run: node ./log-image-cli-version.js
- name: Cache Image CLI
  id: cache-image-cli
  uses: actions/cache@v2   # action ref redacted in the original post; actions/cache assumed
  with:
    path: built-dependencies/image-cli
    key: ${{ runner.os }}-image-cli-${{ steps.image-cli-version.outputs.version }}
- run: bash ./compile-expensive-image-cli.sh
  if: steps.cache-image-cli.outputs.cache-hit != 'true'
It'd be nice to not have to build out a bespoke cache system to run on the self-hosted runner
I think that's the main point. I could set up my own caching system, but then it would be a custom action. Ideally, it's the same actions API, but with a different "backend".
Another feature I'd like is to avoid slow cache uploads/downloads, which are much slower if your self-hosted runner doesn't have great internet speeds, especially upload speeds.
this action's purpose is to save that local cache in a central location to share between runners.
This local cache can be on the local self-hosted runner and could be shared by other self-hosted runners on the same machine.
Thank you all for your feedback. At this time there are no plans to make a local caching service, as that functionality exists in some form with each tool's cache. The logic around matching cache keys and branch scopes is all handled by the internal cache service and would have to be duplicated into the action to support any local cache functionality.
I would love to see this as well, primarily for node_modules via yarn. We're using a farm of self-hosted runners and using actions/cache is the same, if not slower, than just installing all the dependencies again from a blank slate, depending on the utilization of the network, etc.
We have to use our own tasks/steps to essentially do what actions/cache does but with a local directory. Just would be nice to not have to re-invent the wheel here and, instead, specify a cache "location" or "path" to use instead of using the remote cache on GitHub/Azure.
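For illustration, the "bespoke" local cache such custom steps implement can be as small as a keyed tarball store on the runner's own disk. The names, paths, and key scheme below are hypothetical, not part of any official API:

```python
import hashlib
import os
import tarfile

# Minimal local stand-in for actions/cache: cache entries are tarballs
# stored under cache_root, addressed by a hash of the cache key
# (e.g. "linux-image-cli-1.0"). Illustrative sketch only.

def _tarball(cache_root, key):
    name = hashlib.sha256(key.encode()).hexdigest() + ".tar.gz"
    return os.path.join(cache_root, name)

def save(cache_root, key, src_dir):
    """Archive src_dir under the given cache key."""
    os.makedirs(cache_root, exist_ok=True)
    with tarfile.open(_tarball(cache_root, key), "w:gz") as tf:
        tf.add(src_dir, arcname=".")

def restore(cache_root, key, dest_dir):
    """Unpack the cache entry for key into dest_dir; True on a cache hit."""
    path = _tarball(cache_root, key)
    if not os.path.exists(path):
        return False
    os.makedirs(dest_dir, exist_ok=True)
    with tarfile.open(path) as tf:
        tf.extractall(dest_dir)
    return True
```

This skips everything the hosted service adds (branch scoping, restore-keys fallback, eviction), which is exactly the logic the maintainers note would have to be duplicated.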
Here's a quick benchmark to show you the performance hit self-hosted runners have with using the default remote cache:
Remote cache (using the GitHub actions/cache backend), cache hit: 2m48s
And that's WITH a cache hit.
Local cache, cache miss: 1m42s
Local cache, cache hit: ~0m01s
Anyway, just would be nice to not have to do this manually and get all the nice features of actions/cache for self-hosted runners.
Hi,
I just want to highlight here, too, that this would be VERY helpful.
The online cache always needs a strong internet connection. But not only that, it seems that GitHub itself limits the bandwidth tremendously. I usually get >10MB/s download speeds with ease, but downloads from GitHub are significantly slower. What's more, there seems to be some "ramp up" time needed.
All of this would be WAY shorter by just having a local cache.
There are GitHub Actions doing that but I would rather have this done by the "official" cache tool. It would be VERY powerful that way. Please :)
Also chiming in here with agreement that the cache system should be pluggable such that alternative stores can be configured for self-hosted runners, allowing the existing actions to work using different persistence mechanisms.
What does `sudo tail -f /var/log/auth.log ` mean?
I learn the course Full Stack for Frontend Engineers on frontendmaster.
I use a Digital Ocean server. I disabled root access by setting PermitRootLogin no and added my public key to the authorized_key file so I can log in.
then:
sudo tail -f /var/log/auth.log
Oct 7 08:42:17 ubuntu-512mb-sgp1-01-fem-young sshd[16857]: Invalid user user from <IP_ADDRESS>
Oct 7 08:42:17 ubuntu-512mb-sgp1-01-fem-young sshd[16857]: input_userauth_request: invalid user user [preauth]
Oct 7 08:42:17 ubuntu-512mb-sgp1-01-fem-young sshd[16857]: Connection closed by <IP_ADDRESS> port 58905 [preauth]
Oct 7 08:42:23 ubuntu-512mb-sgp1-01-fem-young sshd[16859]: Invalid user ubnt from <IP_ADDRESS>
Oct 7 08:42:23 ubuntu-512mb-sgp1-01-fem-young sshd[16859]: input_userauth_request: invalid user ubnt [preauth]
Oct 7 08:42:23 ubuntu-512mb-sgp1-01-fem-young sshd[16859]: Connection closed by <IP_ADDRESS> port 59157 [preauth]
Oct 7 08:42:26 ubuntu-512mb-sgp1-01-fem-young sshd[16861]: Connection closed by <IP_ADDRESS> port 59446 [preauth]
Oct 7 08:42:31 ubuntu-512mb-sgp1-01-fem-young sshd[16863]: Invalid user admin from <IP_ADDRESS>
Oct 7 08:42:31 ubuntu-512mb-sgp1-01-fem-young sshd[16863]: input_userauth_request: invalid user admin [preauth]
Oct 7 08:42:32 ubuntu-512mb-sgp1-01-fem-young sshd[16863]: Connection closed by <IP_ADDRESS> port 59670 [preauth]
Oct 7 08:42:33 ubuntu-512mb-sgp1-01-fem-young sshd[16865]: Invalid user support from <IP_ADDRESS>
Oct 7 08:42:33 ubuntu-512mb-sgp1-01-fem-young sshd[16865]: input_userauth_request: invalid user support [preauth]
Oct 7 08:42:34 ubuntu-512mb-sgp1-01-fem-young sshd[16865]: Connection closed by <IP_ADDRESS> port 59872 [preauth]
Oct 7 08:42:39 ubuntu-512mb-sgp1-01-fem-young sshd[16867]: Invalid user admin from <IP_ADDRESS>
Oct 7 08:42:39 ubuntu-512mb-sgp1-01-fem-young sshd[16867]: input_userauth_request: invalid user admin [preauth]
Oct 7 08:42:40 ubuntu-512mb-sgp1-01-fem-young sshd[16867]: Connection closed by <IP_ADDRESS> port 59944 [preauth]
Does this mean I was hacked? If so what can I do to protect myself?
Stack Overflow is a site for programming and development questions. This question appears to be off-topic because it is not about programming or development. See What topics can I ask about here in the Help Center. Perhaps Super User or Unix & Linux Stack Exchange would be a better place to ask.
The sudo command lets an end user act as the root user.
To disable sudo access you need to edit your /etc/sudoers file.
Example
The following entry in the sudoers file lets the test user execute any command, as any user, from any terminal.
test ALL=(ALL) ALL
To disable sudo for the test user, comment out the above entry in /etc/sudoers.
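As for the log itself: lines like "Invalid user ... [preauth]" followed by "Connection closed" are failed SSH login probes, which are routine background noise on any internet-facing host rather than evidence of a successful break-in. A quick, illustrative way to triage such lines (the addresses in the post are redacted, so the example uses a documentation IP):

```python
import re
from collections import Counter

# Count "Invalid user" probes per source address in auth.log-style lines.
# Helper name and regex are this sketch's own, not any standard tool.
INVALID_USER = re.compile(r"Invalid user (\S+) from (\S+)")

def count_probes(lines):
    """Return a mapping of source IP -> number of invalid-user attempts."""
    hits = Counter()
    for line in lines:
        m = INVALID_USER.search(line)
        if m:
            _user, ip = m.groups()
            hits[ip] += 1
    return hits
```

Feeding it the lines from `tail /var/log/auth.log` shows how many distinct hosts are probing and how often, which is useful before deciding on countermeasures like fail2ban or key-only authentication.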
Happy Monday, everyone! :D Have some cuteness
In a good mood today, as I had a good day yesterday. Lots of TV, lots of chores, some reading and some hacking. Bot's quote database is now online again. *bounces* I had forgotten to reinstall MySQL when I restarted bot back up.
So, TV. I watched Hitchhiker's Guide to the Galaxy, which means I am walking around humming "so long so long so long so long" over and over. Which is why I am going to go to Walgreens during lunch to get more batteries for my Rio, so I can get the song out of my mind. Good movie, though. Still don't really like Zaphod's characterization in the movie, and Alan Rickman's voice as Marvin tends to distract me, but I like everyone else. LOVE Trillian. She is teh awesome.
Next up: first disc of Invader Zim. It's been in my queue for-freaking-ever, and for some reason I kept bumping it down, and down, and down... Finally decided I might as well rent it. I didn't finish the disc, because it bored me. But it might be one of those things that grows on me and is awesome in retrospect, like Monty Python. I was "meh" about Monty Python when I saw it, but then would crack up laughing anytime anybody quoted it. Since I now hear Zim saying "GIR!" in my mind, maybe it will be the same. I took the rest of the discs off my queue for now, but who knows, maybe I'll add them back.
Finally: Star Trek: The Next Generation, season one disc one. *snerks* The Signal was so right when they called ST:TNG's Enterprise "a flying office building". I couldn't help laughing at how plush everything was, and how everything is controlled with BUTTONS instead of, like, controllers or joysticks. It's rather unbelievable. But ST:TNG was one of the first shows I ever got fanatic about, and it was like seeing an old friend again. Yeah, there's cheesiness and snerkiness, but I still love these characters, and there's a lot of interesting ideas in their plots. So it'll be fun watching it all again. Looking forward to seeing the Borg, and Guinan, and more Q.
Wait, not quite finally. I also rewatched an ep of Battlestar Galactica. Getting itchy for second season. Is it silly of me that I'm kind of :( about having the UK edition of the first season because that means the boxes won't match when I buy the second season? *snerks at self* I have to pay some more bills this month, but once they are paid I am buying the BSG soundtrack. AND the Firefly one, too, which I believe comes out this week! *dances* But I'm really looking forward to Season Two of BSG, which comes out on December 20th--NEPHEW DAY!!!
*bounces about like a spazz* I am SO. FREAKING. EXCITED. about the two upcoming kidlets. I cannot wait to meet them. Not too much longer until my nephew arrives, yay! And while I can't wait to meet my new niece, either, I'm hoping she stays put as long as possible. So, January sometime, then. That's still not that far.
I tried to sign up for the Netflix settlement but it won't work from here. I'll have to try from home, later. I was thinking about bumping my membership level up a notch or two anyway, so hey, this gives me a chance to try it for a month FREE!
Not gonna finish the book by tomorrow. Ah well. Still going to buy the new book tomorrow, though. Which means I will likely see the crush on the bus! :D :D :D
Getting Chipotle for lunch, yay! Mmm... Chipotle.
Hi, after installing the Fedora 37 RC (I had the same problem with 36 as well), on boot I get a TPM error and Fedora doesn't start.
I press e to edit the grub entry and add:
That way I can boot, and then add the following line to /boot/grub2/grub.cfg:
After that I can reboot without any problem, but if grub is updated I'm back at the first point.
Is there any solution to fix this permanently?
At the moment I'm using EndeavourOS as my only system; I would like to switch to Fedora, but not under these conditions.
My notebook is an Asus VD753 (I think), and in the BIOS there is no option to disable TPM; I can't update the BIOS, and Secure Boot is disabled.
I hope there’s a solution.
I don’t know about the tpm and bios. Most UEFI bios I have looked at have the option under the security tab to disable tpm but of course I cannot see yours to know.
I did a quick search on how to prevent kernel modules from loading and found several links.
Blacklisting modules on Linux | Network World.
fedora - Unable to disable kernel module - Unix & Linux Stack Exchange
Based on those and on what I already knew you should be able to do
echo "blacklist tpm" | sudo tee /etc/modprobe.d/blacklist-tpm.conf then reboot. (Note that sudo does not apply to shell redirection, so tee is needed to write to the root-owned file.)
It may require that you also run
sudo dracut --force to recreate the initramfs so the entry is available in the initrd image for the boot before the root file system is mounted.
Thank you for your answer. I tried what you told me, but after reboot I have the same problem.
I don't know how to permanently add the following to /boot/grub2/grub.cfg:
so that the line isn't removed at every grub update.
Do you know how do it?
sudo grubby --args="rd.driver.blacklist=tpm modprobe.blacklist=tpm" --update-kernel=ALL
For your blacklist-tpm.conf did you also add:
"install tpm /bin/false"
I have the same issue. Using 'rmmod tpm' works for me, but it is not permanent; I have to repeat it every time I boot. Is there any way to make it permanent?
I see no other option than editing the file
/boot/grub2/grub.cfg and add the command you would use to disable the tpm module in grub2. Disabling the linux module for tpm would have no effect as the linux kernel is not even loaded when the problem occurs.
Of course, running
grub2-mkconfig would undo your edit, so keep that in mind.
Trusted Platform Module (TPM) is a chip used to store secret keys with default state being unknown, disabled, inactive. If you use a device with OS implemented with TPM, you need to clear it in BIOS or issue ‘tpm_clear -force’ command with information here : INFOSEC.
I added "rmmod tpm" in grub.cfg and it worked for me… thank you!
I’m not sure why blacklisting the Linux kernel’s tpm module has anything to do with grub’s tpm module in reference to Jeff V’s response, but it’s not correct in this particular issue.
The issue is grub not supporting TPM 2.0 very well when it is enabled in the UEFI system settings. The more basic functional fix is actually relatively simple.
In /etc/grub.d/ create 02_tpm with the contents:
echo "rmmod tpm"
and chmod +x it.
grub2-mkconfig -o /etc/grub2.cfg and your problem is mitigated for the time being. This is NOT a fix, just a mitigation for current issues with grub and TPM support specifically.
Thanks a lot! You've made a simple and very reliable temporary solution! I had a similar problem, and this resolved it.
My issue: GRUB outputs a tpm.c module error after the 371 DBX update.
I'm waiting for a bug fix from the Fedora team, because updating the BIOS does not resolve the problem. Resetting the Secure Boot databases to factory state resolves it, but if I update the DBX database, I get this error again. Thanks a lot!
package tree
import (
"fmt"
"github.com/anchore/stereoscope/pkg/tree/node"
)
// Tree represents a simple Tree data structure.
type Tree struct {
nodes map[node.ID]node.Node
children map[node.ID]map[node.ID]node.Node
parent map[node.ID]node.Node
}
// NewTree returns an instance of a Tree.
func NewTree() *Tree {
return &Tree{
nodes: make(map[node.ID]node.Node),
children: make(map[node.ID]map[node.ID]node.Node),
parent: make(map[node.ID]node.Node),
}
}
func (t *Tree) Copy() *Tree {
ct := NewTree()
for k, v := range t.nodes {
if v == nil {
ct.nodes[k] = nil
continue
}
ct.nodes[k] = v.Copy()
}
for k, v := range t.parent {
if v == nil {
ct.parent[k] = nil
continue
}
ct.parent[k] = v.Copy()
}
for from, lookup := range t.children {
if _, exists := ct.children[from]; !exists {
ct.children[from] = make(map[node.ID]node.Node)
}
for to, v := range lookup {
if v == nil {
ct.children[from][to] = nil
continue
}
ct.children[from][to] = v.Copy()
}
}
return ct
}
// Roots returns all of the nodes with no parents.
func (t *Tree) Roots() node.Nodes {
var nodes = make([]node.Node, 0)
for _, n := range t.nodes {
if parent := t.parent[n.ID()]; parent == nil {
nodes = append(nodes, n)
}
}
return nodes
}
// HasNode indicates whether the given node ID exists in the Tree.
func (t *Tree) HasNode(id node.ID) bool {
	_, exists := t.nodes[id]
	return exists
}
// Node returns a node object for the given ID.
func (t *Tree) Node(id node.ID) node.Node {
return t.nodes[id]
}
// Nodes returns all nodes in the Tree.
func (t *Tree) Nodes() node.Nodes {
if len(t.nodes) == 0 {
return nil
}
nodes := make([]node.Node, len(t.nodes))
i := 0
for _, n := range t.nodes {
nodes[i] = n
i++
}
return nodes
}
// addNode adds the node to the Tree; returns an error on node ID collisions.
func (t *Tree) addNode(n node.Node) error {
if _, exists := t.nodes[n.ID()]; exists {
return fmt.Errorf("node ID collision: %+v", n.ID())
}
t.nodes[n.ID()] = n
t.children[n.ID()] = make(map[node.ID]node.Node)
t.parent[n.ID()] = nil
return nil
}
// Replace takes the given old node and replaces it with the given new one.
func (t *Tree) Replace(old node.Node, new node.Node) error {
if !t.HasNode(old.ID()) {
return fmt.Errorf("cannot replace node not in the Tree")
}
if old.ID() == new.ID() {
// the underlying objects may be different, but the ID's match. Simply track the new [already existing] node
// and keep all existing relationships.
t.nodes[new.ID()] = new
return nil
}
// add the new node
err := t.addNode(new)
if err != nil {
return err
}
// set the new node parent to the old node parent
t.parent[new.ID()] = t.parent[old.ID()]
for cid := range t.children[old.ID()] {
// replace the parent entry for each child
t.parent[cid] = new
// add child entries to the new node
t.children[new.ID()][cid] = t.nodes[cid]
}
	// replace the child entry for the old node's parent (skip if the old node is a root,
	// otherwise the nil parent would cause a panic)
	if parent := t.parent[old.ID()]; parent != nil {
		delete(t.children[parent.ID()], old.ID())
		t.children[parent.ID()][new.ID()] = new
	}
	// remove the old node (the IDs are guaranteed to differ by the early return above)
	delete(t.children, old.ID())
	delete(t.nodes, old.ID())
	delete(t.parent, old.ID())
return nil
}
// AddRoot adds a node to the Tree (with no parent).
func (t *Tree) AddRoot(n node.Node) error {
return t.addNode(n)
}
// AddChild adds a node to the Tree under the given parent.
func (t *Tree) AddChild(from, to node.Node) error {
var (
fid = from.ID()
tid = to.ID()
err error
)
if fid == tid {
return fmt.Errorf("should not add self edge")
}
if _, ok := t.nodes[fid]; !ok {
err = t.addNode(from)
if err != nil {
return err
}
} else {
t.nodes[fid] = from
}
if _, ok := t.nodes[tid]; !ok {
err = t.addNode(to)
if err != nil {
return err
}
} else {
t.nodes[tid] = to
}
t.children[fid][tid] = to
t.parent[tid] = from
return nil
}
// RemoveNode deletes the node and all of its descendants from the Tree and returns the removed nodes.
func (t *Tree) RemoveNode(n node.Node) (node.Nodes, error) {
removedNodes := make([]node.Node, 0)
nid := n.ID()
if _, ok := t.nodes[nid]; !ok {
return nil, fmt.Errorf("unable to remove node: %+v", nid)
}
	for _, child := range t.children[nid] {
		subNodes, err := t.RemoveNode(child)
		if err != nil {
			return nil, err
		}
		removedNodes = append(removedNodes, subNodes...)
	}
removedNodes = append(removedNodes, t.nodes[nid])
delete(t.children, nid)
if t.parent[nid] != nil {
delete(t.children[t.parent[nid].ID()], nid)
}
delete(t.parent, nid)
delete(t.nodes, nid)
return removedNodes, nil
}
// Children returns all children of the given node.
func (t *Tree) Children(n node.Node) node.Nodes {
nid := n.ID()
if _, ok := t.children[nid]; !ok {
return nil
}
	if len(t.children[nid]) == 0 {
		return nil
	}
from := make([]node.Node, len(t.children[nid]))
i := 0
for vid := range t.children[nid] {
from[i] = t.nodes[vid]
i++
}
return from
}
// Parent returns the parent of the given node (or nil if it is a root)
func (t *Tree) Parent(n node.Node) node.Node {
if parent, ok := t.parent[n.ID()]; ok {
return parent
}
return nil
}
func (t *Tree) Length() int {
return len(t.nodes)
}
Randomness is a slippery term that conveys different meanings in different disciplines. In mathematics, an individual number is random when there is an equal chance for it to be any number from a set of possible values. In computer science the term becomes more relative and numbers have varying degrees of pseudo-randomness. Information theory equates randomness with unpredictability and, at odds with other definitions, concludes that a higher level of randomness indicates a greater concentration of information; a message's probable denseness of information is highest when the message is partially surprising and partially expected.

There is no fixed definition for what randomness means in art, but analogies can be drawn to how the term is used in other fields. For example, information theory's definition might suggest that artworks have the greatest impact when using a mixture of pattern and unpredictability.
Random is often used colloquially to indicate arbitrariness or things unrelated: random acts of violence, random thoughts, random encounters. A number of fields such as computer science, statistics, and information theory have more rigorous definitions of randomness. But each of these fields uses the term in a way that is slightly at odds with the others.
As a starting point, let's establish what randomness means to a mathematician and, using that, build a working definition for what randomness might mean to an artist. In mathematics, an individual number is random when there is an equal chance for it to be any number from a set of possible values. When describing a sequence of numbers as random, we mean each number is statistically independent of the others; that the numbers in the series have no effect or relation to the others (Haahr, 2008). A random number or sequence is characterized as containing no meaningful information; if a number conveys some data (such as the result of a formula, a person's phone number, or the number of times the letter 'q' appears in this chapter), then it is not random.
This trait of non-significance can be borrowed and used as a key characteristic of randomness in art. If an element in an artwork contains some meaningful information about the world around us, then the element isn’t truly random. Consider this recipe by Tristan Tzara (one of Dada’s founders) for writing poetry:
To Make A Dadaist Poem
Take a newspaper.
Take some scissors.
Choose from this paper an article the length you want to make your poem.
Cut out the article.
Next carefully cut out each of the words that make up this article and put them all in a bag.
Next take out each cutting one after the other.
Copy conscientiously in the order in which they left the bag.
The poem will resemble you.
And there you are--an infinitely original author of charming sensibility, even though unappreciated by the vulgar herd. (Brotchie, 1991, p. 36)
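Tzara's recipe is itself an algorithm, and can be sketched in a few lines of Python (the sample text is a stand-in for the newspaper article):

```python
import random

def dada_poem(article, seed=None):
    """Cut the article into words, draw them from the 'bag' at random,
    and copy them out in the order they leave the bag."""
    rng = random.Random(seed)   # seedable, so a given poem can be reproduced
    words = article.split()     # "cut out each of the words"
    rng.shuffle(words)          # shake the bag
    return " ".join(words)     # "copy conscientiously"

print(dada_poem("the quick brown fox jumps over the lazy dog", seed=1))
```

Note the irony the code makes visible: a computer's "bag" is a pseudorandom shuffle, so with the same seed the "infinitely original" poem comes out identically every time.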
Key Terms in this Chapter
Pseudorandom Number: A number that was generated using an algorithmic process called a pseudorandom number generator (PRNG). Because the numbers are created deterministically they have the appearance of randomness, but are not truly random.
Hardware Random Number Generator: A method for generating random numbers using a physical process, such as the nuclear decay of radioactive material. The generated numbers are often referred to as “true random” numbers in contrast with pseudorandom numbers generated by a pseudorandom number generator.
Chaotic: Behaviors where minor changes in initial conditions can result in widely divergent results. Chaotic systems often appear random even though they are completely deterministic.
Chance: In this chapter “chance” refers to unpredictable, but deterministic, events.
Generative Art: Art that is created according to an algorithm. Generative art is typically intended to give the appearance of machine creativity.
Deterministic: A situation where events are completely predictable based upon cause and effect.
Algorithm: A set of well-defined instructions for completing a task.
Random: Used in this chapter to specifically refer to unpredictable events that are completely self-contained and communicate no information (in contrast to “chance”).
Quantum: Used in this chapter to refer to subatomic processes.
Stochastic: Having unpredictable characteristics. Used in this chapter to refer to both random and chance events.
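The "pseudorandom" entry above can be made concrete with a minimal linear congruential generator. This is only an illustrative sketch (the multiplier and increment are the widely published Numerical Recipes constants, not a generator discussed in the chapter): every value is computed deterministically from the previous one, yet the stream looks random.

```python
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    """Linear congruential generator: fully deterministic,
    yet the output stream appears random."""
    state = seed
    while True:
        state = (a * state + c) % m
        yield state

gen = lcg(seed=7)
print([next(gen) for _ in range(3)])  # same seed, same "random" numbers
```

Re-seeding with the same value reproduces the identical sequence, which is exactly why such numbers are "pseudo" random.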
View Full Version : More MySQL 'ORDER BY' Woes
06-01-2006, 04:25 AM
I read about the difficulties Marjolein had with 'ORDER BY' yesterday, only to run into a similar problem of my own this morning! I just happened to check the small ads page at http://www.drascombe-association.org.uk/smallads.htm and found that it wasn't showing any results from the database. After an hour or so of hair-tearing I discovered that I could get the results to display again by removing the 'ORDER BY' statement from the following query:
$Query = "SELECT *, DATE_FORMAT(adate, '%D %M %Y') AS displaydate
FROM $TableName1 AS t1, $TableName2 AS t2
WHERE t1.boattype = t2.type AND t1.sold='no' AND t1.adate >= '$LastDanDate'
ORDER BY t1.adate DESC";
I'm pleased to have got it working again, but it's less than ideal that the adverts now appear in more or less random order. I've also had to amend most of the admin pages in the same way.
I suspect strongly - and I'm awaiting confirmation - that my hosting provider has done a version upgrade. The old pages still work on my test server, which is running MySQL 3.23.49, while the production server is running MySQL 4.1.18.
It seems to be the fact that I'm querying more than one table which breaks it. Can anyone spot a problem with my query, or suggest a workaround?
06-01-2006, 05:32 AM
This has all been fixed. I got the following reply from my hosting provider:
The /tmp directory became full at 4am due to an end of month run. As php uses the /tmp directory this may have affected some code, as we got an error when running some php code today of no space left on device (device being the /tmp folder). Php has to be able to create files in the /tmp folder to work with certain coding, so this may be the fault. We have now rectified this, so may be your code would work again as it was before. Apologies for any inconvenience.
Sure enough, the old version of my page worked again. I could have done with knowing this before I wasted a couple of hours trying to fix it myself! Anyway, at least it isn't a MySQL version problem, which would have been a bit more of a problem.
06-01-2006, 05:54 AM
This has all been fixed. I got the following reply from my hosting provider:
Sure enough, the old version of my page worked again. I could have done with knowing this before I wasted a couple of hours trying to fix it myself! Anyway, at least it isn't a MySQL version problem, which would have been a bit more of a problem.
Gosh. I am almost speechless.
Glad to hear it's fixed... but if this is a system /tmp folder on a shared server (which it sounds like) this should of course never happen. It's not even something you can workaround with ini_set() (although some ini directives have a default starting with /tmp).
Note that if you are using file-based sessions, this also normally uses /tmp, so if those (for non-persistent sessions) are not being cleaned out frequently enough, the /tmp device can get full; but - still assuming this is shared hosting and not a dedicated server or VPS - that's not solely up to you but to the combined effect of all sites hosted on the same system using the same /tmp device... (this is a well-known disadvantage of using file-based sessions on a shared system).
BTW, looking at your description again, and the fact you could make it work again (crippled) by removing the ORDER BY clause, suggests it's MySQL running out of (temp?) memory rather than PHP: both queries will just deliver a resource identifier to PHP which you must then process to pull the information out of it - and the amount of that information is not different when it's ordered or when it's not!
06-01-2006, 06:29 AM
Gosh. I am almost speechless.
Glad to hear it's fixed... but if this is a system /tmp folder on a shared server (which it sounds like) this should of course never happen. ...
As it happens, I've been less than happy about this particular host for a while now. I've moved all the paying clients to another server, and I'm intending to move this one (a freebie) when the current period they've paid for runs out.
I don't know enough about server configuration to run one myself (apart from the out-of-the-box My JSAS server which I do my development on), nor do I wish to learn!
the /tmp directory became full
For a way to define your own session directory see this (http://www.php.net/manual/en/ref.session.php#54881) php.net article
06-01-2006, 09:38 AM
For a way to define your own session directory see this (http://www.php.net/manual/en/ref.session.php#54881) php.net article
Yes, that is indeed possible: It will help to avoid the problems with sessions as a result of a full /tmp (or even a /tmp directory with very many files in them). If you are using file-based sessions it is indeed a good idea to move them to a directory under your own control. (Or not to use files at all, but instead use a database.)
But on a shared system whether the /tmp directory becomes full may still be due to other clients' websites on the same box - so while moving your session files may avoid problems with those sessions you may still encounter problems with PHP due to a full /tmp directory. And not all php.ini settings with /tmp in their (default) path can be changed by PHP at runtime.
The fact remains that on a shared hosted system it's the responsibility of the host to ensure that the /tmp directory does not become full: the admins of the hosted sites have no access to it or control over it. If /tmp does become full it's a symptom of bad system management or an overloaded box.
But on a shared system whether the /tmp directory becomes full may still be due to other clients' websites on the same box - so while moving your session files may avoid problems with those sessions you may still encounter problems with PHP due to a full /tmp directory.
DUH!
I can't believe I overlooked that point -- the odds are very high that the space one uses may well be on the same partition as /tmp.
PostgreSQL Session Save Handler (http://www.php.net/manual/en/ref.session-pgsql.php) documentation.
06-02-2006, 10:59 AM
I can't believe I overlooked that point -- the odds are very high that the space one uses may well be on the same partition as /tmp.
Actually from what I know about hosting, the odds are very high that /tmp is a global system directory on a separate partition; and likely with everyone's hosted system on a separate partition, too, and a size limit on everything. The /tmp would be shared by everyone's systems though so if PHP makes use of that at all (even if not for storing session files) it's going to be limited by what the other systems are doing. Even on a VPS, there is likely to be a shared root-level /tmp directory (or device) that is being used by your system.
I'm using a nice class to store my sessions in a MySQL table, BTW.
Actually from what I know about hosting, the odds are very high that /tmp is a global system directory on a separate partition
VPS can be done either way - chroot (virtual root) or discrete partition space.
Single-domain web-hosting/mail/SQL is more likely done with shared partition space to improve memory utilization.
I'm using a nice class to store my sessions in a MySQL table, BTW.
Would expect nothing less from you.
06-02-2006, 10:10 PM
VPS can be done either way - chroot (virtual root) or discrete partition space.
Indeed - I only said "likely" for VPS. My hosting deal is VPS with a virtual root; that setup is still common, though newer virtualization software (various flavors) that does away with it and gives you discrete space with real (rather than virtual) root access is becoming more widespread.
For selecting a new host I'll be looking for the latter, having experienced the disadvantages of virtual root access: big difficulties installing software that expects real root access, for instance - the whole "vinstall" deal is meant to work around that, the host having done all the adaptations for you. It often means big changes in make files and such and recompiling an app to do that - I once struggled through that (not having much experience with make) and am not looking forward to ever having to do that again. But then what is offered with "vinstall" is normally a limited choice of (commonly-used) packages. If you need something else you're out of luck and will have to do it yourself... and without support.
This is getting rather off-topic but I'm mentioning it anyway for those lurkers who at some time might be looking for a VPS-type hosting solution: carefully consider what kind of root access is provided and whether you might ever need to install software not in the standard set supplied.
vBulletin® v3.8.7, Copyright ©2000-2013, vBulletin Solutions, Inc.
Update: Only 4 days left to crowdfund Marty! They are over 70% towards reaching their goal!
Marty is a WiFi enabled, programmable walking robot that can be customised with 3D printed parts. Designed to be easy to use for beginners, Marty can nonetheless be used for some pretty advanced stuff.
You can program Marty in various languages from widely used graphical language Scratch to Python and C++, and he has lots of ports for expansion. Marty can even be upgraded with an optional onboard Raspberry Pi. With one of those, you could do vision processing on board or even run ROS – the Robot Operating System.
Marty is on Indiegogo now priced at £95 ($125) for a full kit, with options to buy just the electronics (and 3D print your own plastic parts) for less.
One of the key technological innovations in Marty is the unique leg mechanism, which uses fewer motors than a traditional bipedal robot, while still retaining the ability to perform interesting movements like walking, turning (on the spot or while walking), dancing, and kicking a ball. Each leg has three motors and a pair of four bar linkages, as well as a spring to help carry some of the weight. The design reduces cost and weight, while making Marty easier to program and his battery last longer.
Programming happens over WiFi – you write code in Scratch, Python, or another language on your computer, and your code controls Marty in real time. If you want to go fully autonomous you can write code to run on Marty’s control board, or you can add a secondary computer onboard – like a Raspberry Pi or Arduino.
The electronics themselves are designed to be an ideal controller for a robot. Servo ports with electrical current sensing give some feedback about how the robot is getting on – protecting the motors from damage and giving information on interactions. WiFi for communications, accelerometer for tilt data, and lots of spare ports for different types of sensors and outputs. Onboard power regulation means you can plug in a battery directly, and they’ll even output a 5V supply to power an onboard Pi.
Alexander, designer of Marty, sees Marty as an engaging robot for kids, makers and educators.
“Marty started out as a side project during my PhD. I wanted to make something with a lot of the features of an expensive research robot, but at a consumer price point. In the lab, we did a lot of demos to visitors and walking robots always grabbed attention way more than wheeled bots. They’re more engaging and people empathise with them more, but they also allow the exploration of some more advanced topics of robotics. So I set out to make a walking robot about the price of a smart toy, one that provided some real open-ended opportunities for learning”
Marty also embraces 3D printing, “We’ll be making the designs available so that anybody with a printer can make their own Marty, and we’ll encourage people to modify the designs and share what they make. We envisage a kind of ‘app store’ for both hardware and software, where 3D printable parts sit alongside the code to make them work.”
What would you do with a Marty?
This is how you setup plausibility checks in questionnaires
How do I setup plausibility checks in questionnaires?
Internet surveys permit quality checks and data checks as early as during entry. Minor errors and missing information can thus be noted immediately during entry. The respondent will receive a short message indicating that they may have overlooked something. You will not have to ask follow-up questions afterwards but can correct the incorrect information immediately during entry instead. Checks for the correctness of answers are commonly referred to as “plausibility checks”.
Why plausibility checks?
Plausibility checks are used to ensure a certain level of data quality. Sometimes, respondents will overlook a question or make minor errors when selecting an answer. Sometimes, questions are also answered incompletely, for example if the respondent wants to get an overview of the questionnaire first or completes the questionnaire without serious interest. The information given should be checked especially if branchings at later stages in the questionnaire are based on the answers to preceding questions. Plausibility checks are advisable in the following cases:
- The answer to the question is taken up in later questions, either in a filter condition or in a display of dynamic questions.
- Enforcing fill-in instructions, such as “Select the three most important properties of products xy”.
- Answers to a specific question are of particular interest for the evaluation.
- The internal consistency of the data is of particular importance for the evaluation.
- The data being queried already exist in participant administration.
Planning the Use of Plausibility Checks
Be aware of which answers are important for the routing of the survey and which data are central to the evaluation purpose. However, use plausibility checks with care. For the respondent, plausibility checks are, first and foremost, annoying and tiresome because their behavior is questioned and their progress through the survey is hindered. Too many plausibility checks will destroy the relationship of trust between you and the respondent. Also, take into consideration that respondents may not yet have formed an opinion on some questions. In such cases, you will often achieve better data quality by allowing incomplete information than by forcing the respondent to give an answer which might only be valid for the moment and not actually meant by the respondent.
Think about aspects such as the following:
- Which answers are indispensable for the routing of the survey?
- To which questions do you want to obtain answers from as many respondents as possible?
- How would you react if your answer to a question were rejected as incorrect by a survey system?
In addition to the psychological effects on the respondents, there are technical aspects as well:
- On each questionnaire page, you can create as many plausibility checks as required. But plausibility checks require a lot of server resources, i.e. they affect the performance of the questionnaire, similar to other checks and dynamic features. Therefore, it is recommended to use not more than 50 plausibility checks per page.
- For plausibility checks, no dedicated sort order is implemented. I.e. while you may arrange the checks on a specific page in a specific sequence, this order may get lost especially if copying or importing the project.
- The conditions of plausibility checks should refer only to variables which have a defined value. If condition variables are not filled at all or contain missings (e.g. because a question or answer option has been hidden or if a respondent hasn’t filled an entry field) the checks can have unexpected or even wrong results.
- Try to avoid complex plausibility check conditions or to substitute them by multiple simple plausibility checks. This way it is easier to get an overview of the variables used in the plausibility check and to avoid unexpected results.
- If necessary, the pro editor allows you to define complex conditions.
- In general, it is recommended to abstain from applying both hiding conditions and plausibility checks to the same variables.
- When dealing with question types whose variables have characteristics (e. g. single response list), please note: As soon as a single answer option of the question, i.e. one characteristics of the variable, is hidden, the variable will be defined as “missing” when used in a plausibility check. Therefore, in this case, the actual value of the variable cannot be determined by means of a plausibilty check. If you cannot or do not want to work without the plausibility check, you might want to consider using a list instead of hiding conditions.
- If a plausibility check checks several variables, and only one or a few of these are affected by hiding conditions, it may make sense to deploy the option “Execute check if one or more items are blinded out?”. If this option is activated, the variables affected by the hiding condition will simply be skipped when executing the check. Mind that if all variables of the check are affected, it will not be executed, to prevent that participants get stuck on the page.
Example: Plausibility Check
Let’s assume that you ask the participants of your survey to enter their year of birth into an open entry field. You want to ensure that they enter only reasonable values: The values should be four-digit numbers inside a reasonable range, e.g. between 1900 and 2000.
To realize this with a plausibility check, please proceed as follows:
- In the questionnaire editor, choose the page on which you want to perform the plausibility check. Click on the title of that page. The page view will open.
- Click on the Plausibility checks menu.
- The overview of plausibility checks is opened. No checks have been defined yet.
- Click on the + Plausibility check button.
- Enter the title.
- Select the “Range check” check type. For detailed explanations on the check types, please see this post.
- Confirm with Proceed.
- The entry dialog is opened.
- In the “User may ignore this check” field, the “No” option should be activated. This means that the person completing the questionnaire must correct any incorrect entry. Otherwise, the next survey page will not be displayed.
- Do not change the setting of the field "Execute check if one or more items are hidden?". In the current example situation, it does not matter as there are no hidden items.
- In the following field, you can edit the message which is displayed if the check condition applies. Please replace the default message “An error occurred!” by “Please enter your year of birth as a four-digit number!”.
- Next, define the check condition itself: First, select the variable which you want to check.
- Then, enter the range within which the values are to be valid. In the example shown, “1900” is the minimum, “2000” the maximum.
- Click on the Save button.
- To see the pop-up which will be displayed to the respondents, open the Page preview tab and enter an erroneous value to trigger the check.
- The new check is listed in the overview.
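The year-of-birth range check configured above boils down to very little logic. Here is a hedged sketch in Python (the function name, return convention, and message handling are ours for illustration, not the survey tool's actual API):

```python
ERROR_MSG = "Please enter your year of birth as a four-digit number!"

def check_year_of_birth(value, minimum=1900, maximum=2000):
    """Range check: return None when the entry is plausible,
    otherwise the message shown to the respondent (since the
    check may not be ignored, the page is not advanced)."""
    try:
        year = int(value)
    except (TypeError, ValueError):
        return ERROR_MSG          # non-numeric entry
    if not minimum <= year <= maximum:
        return ERROR_MSG          # outside the configured range
    return None
```

Note that the non-numeric case is handled separately from the out-of-range case: a respondent typing "85" or "nineteen-eighty" should trigger the same message as one typing "1850".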
Elections pose a challenge for all newsrooms: how to innovate in the coverage of an event that happens every two years? At LA NACION we are convinced that through data and interactive design we can achieve quality and differential productions.
The Argentine electoral system establishes two mandatory election instances. In the primary elections, the different political parties may present several lists. That is to say, carry an open internal party election. The second instance is the general election. It is defined at that moment who enters the Congress. This year, both on September 12 and November 14, Argentina elected the new representatives who entered the Congress on December 10.
LA NACION’s system processed, published and made available the minute-by-minute updated data from the National Electoral Board, which were asked by the main newsrooms of the world through lanacion.com.ar.
Conceptually, the information was structured as part of the political reading made by our politics professionals, ideologically grouping the political parties and taking into account the particularities of the alliances according to the territory.
Visualizations were developed as an interactive map to follow the detailed progress of the election throughout the country, a hemicycle that graphed the total number of seats obtained by each political party in Congress, comparisons between previous elections and annexes designed with a clear and representative interface to provide our audience with a better user experience.
The biggest challenge was to meet the goal of being the first mass media to publish election data with integrity and accuracy. In a newsroom, every second is intensely lived, and the project had to be adjusted to that pulse. When the Ministry of the Domestic Affairs announced and enabled the availability of the information, the system was executed, and the work of the whole team was put to the test.
REAL TIME RESULTS AND COMPARISONS
After the National Electoral Court publishes the first counting data, the biggest question is to receive the large volume of information, correct possible failures in the scripts and display them on the home page of our newspaper before competitors.
The electoral maps are historically the most viewed news on LA NACION’s website, they were reconsidered to offer more services to the user. Software was added to compare changes in the performance of the two main political forces of the country as regards the last primary elections. It was necessary to re-categorize the political parties that changed their orientation over the years in order to make accurate and representative calculations.
The main piece consisted of a map that may be visualized in two ways, depending on the level of detail desired by the user: a more general map organized by provinces/states and a more detailed map by each of the 531 municipalities that make up the country.
At the same time, data from both primary elections and previous elections (year 2019) were reused to show comparisons of performance by province and political party regarding those previous elections and thus provide another informative focus to the audience.
During these elections, the renewal of half of the House of Deputies (127 seats) and one third of the Senate (24 seats) was voted. This phenomenon is important for political analysis as it shapes the relationship of powers within the Congress. In Argentina, a complex system called the “D'Hondt method” is used to calculate and translate all the votes cast for each party into seats. The logic of this calculation was considered in our development. This meant that it was not necessary to wait for official information received from the government agencies on who entered and who did not.
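The seat calculation described above can be sketched compactly; this is a minimal illustration of the method (vote totals and party names are made up, not real results):

```python
def dhondt(votes, seats):
    """D'Hondt allocation: repeatedly award the next seat to the
    party with the highest quotient votes / (seats_won + 1)."""
    won = {party: 0 for party in votes}
    for _ in range(seats):
        leader = max(votes, key=lambda p: votes[p] / (won[p] + 1))
        won[leader] += 1
    return won

print(dhondt({"A": 340000, "B": 280000, "C": 160000, "D": 60000}, 7))
```

Because the allocation is a pure function of the vote totals, a newsroom can recompute the projected composition of Congress on every data refresh instead of waiting for the official seat assignment.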
LA NACION assigned each of the parties a color to identify them and show the new composition of the Congress in real time. In turn, our development allowed us to survey province by province the names in each list that would enter Congress as of December 10: those who entered were assigned a party color over their first and last name and those who did not were marked in gray.
CLOSING THE ELECTORAL REGISTER
The National Electoral Court sets a deadline for political parties to submit their complete list of candidates. Since the institution takes then weeks to publish all the complete and disaggregated information, at LA NACION we worked to distinguish ourselves for our audience and obtain this information before anyone else. After visiting the web page of the National Electoral Court, making telephone calls to the representatives of the political parties and exhaustive monitoring of social media, we could to reconstruct the 383 lists (comprising 837 candidates) that were presented in the 23 national jurisdictions.
Thus, once the deadline for political parties to submit their lists was over, LA NACION was the only national medium to publish each one of the names that would compete in the elections to enter Congress.
MAP OF SCHOOLS
Another distinguishing piece of the coverage of LA NACION was a map where more than 16,889 voting centers were geolocated and the results. In each one of the schools you may see the vote for each party and the comparison with the previous election. It is also possible to see in a pdf file the minutes of the vote count of each of the voting tables of the schools to compare the results. It should be clarified that in Argentina voting is still done with paper ballots and that the counting of votes is done manually.
This task required an intense work of structuring, managing and data cleaning. There was not a list of schools to match them by ID. Therefore, they were georeferenced using different techniques and processed with different programming languages, since they were not originally published with latitude and longitude data either. About 3,000 were manually geolocated by the members of the team.
Within the map, schools are colored yellow, light blue, red, purple or gray according to the political party that won in that specific school. By clicking on the map or searching for the selected school in the search engine, you can see how the voting went in that school and the comparison to the previous election. By clicking on the “view telegrams” button, you can access the pdf file that shows the “analog” result ‒in Argentina a voting record is issued for each voting station‒ with the votes.
In addition to the classic or expected developments for the election, at LA NACION we like to innovate by identifying the current issues of each election and translating them into attractive news articles. After the primary elections, where the ruling party suffered a defeat, the question that arose for the general elections was whether the government could recover the votes and win the election in the Province of Buenos Aires.
With this idea in mind, a comparator was developed where, by manipulating different variables such as voter turnout, blank voters or the capitalization of the vote of other political forces, the audience could pose different electoral scenarios.
AUTOMATICALLY CREATED NEWS ARTICLES
In order to achieve the most massive, federal and personalized coverage possible, on election night we published automatically created news articles containing the results of the election in the province, along with graphics and images. These news articles were refreshed with new information every 30 minutes, as the counting of the votes progressed.
Amazon Web Services (AWS) were used: S3, Lambda, CloudWatch/EventBridge and RDS, along with Python 3.6 and PostgreSQL.
This allowed each reader to have access to accurate data and to know the result of the voting in the place where he/she lives.
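A template-based automated article of the kind described above can be sketched in a few lines; this is only an illustration (the headline wording, field names, and percentages are assumptions, not LA NACION's actual schema or output):

```python
def render_article(province, results):
    """Fill a fixed template with the latest percentages per party;
    the real pipeline re-ran a step like this every 30 minutes
    as the count progressed."""
    ranked = sorted(results.items(), key=lambda kv: kv[1], reverse=True)
    leader, share = ranked[0]
    lines = [
        f"Legislative election: results in {province}",
        f"With the votes counted so far, {leader} leads with {share:.1f}% of the vote.",
    ]
    lines += [f"- {party}: {pct:.1f}%" for party, pct in ranked]
    return "\n".join(lines)

print(render_article("Testland", {"Party A": 42.5, "Party B": 38.1}))
```

The key design point is separating the template from the data: one function, re-run per province on each refresh, yields a personalized article for every jurisdiction.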
What tools, techniques, technologies did you use, and how did you use them?
More than seven months in advance, the step-by-step development process was carefully planned in order to give a quick and effective response to the audience, but also to all the sections of the newspaper that consume our content for their stories on the election day. An interdisciplinary work team was organized with programmers, designers, data journalists and political journalists to take into account all aspects of the coverage.
To accomplish this purpose, a workflow was devised to connect the different parts of the team. The most suitable technologies were researched and used, and software development practises were applied to fit the needs of the project.
For the backend development and infrastructure, it was used: AWS Route53, AWS Elastic Load Balancing, AWS Backup, AWS CloudFront, AWS EC2, AWS S3, AWS Lambda, AWS Cloud Watch, AWS RDS (PostgreSQL), AWS SNS, Fastly, Akamai, Jenkins, API Integrations, Docker, Python 3x, Django, SQL.
At the same time, a contingency plan was designed for any eventuality with the use of Linux Virtual Hosts, database servers and the system installed in the offices of LA NACION (on-premise).
What was the hardest part of this project? What should the jury know to better understand what you did and why it should be selected?
Electoral coverage poses several challenges. One of them is technical: between elections, the government does not usually maintain the same data structure and often modifies or even deletes some key fields such as IDs. This presents a challenge when it comes to match data and put together historical comparisons between elections.
At the same time, many times the disaggregation of the information is insufficient. However, at LA NACION we always seek to go one step further. For that reason, for example, we decided to map with several programming techniques and also manually more than 16,000 voting stations across the country to know the result school by school and voting table by table. We even added the original pdf file of the vote count so that the reader may compare the “digital” and the “analog” result.
Another challenge, as regards project management, was to coordinate planning and electoral work, which involves many areas of the newspaper ‒including programmers, designers, data journalists and journalists specialized in politics‒, 100% remotely.
What can other journalists learn from this project?
The greatest lesson from this project is the use of technology to innovate, to stand out from the competition in journalistic productions, and to try to improve ourselves year after year.
Also, the importance of interdisciplinary work between programmers, designers and data and political journalists to have all perspectives of the coverage and develop innovative ideas.
Furthermore, we believe that in projects of this size it is fundamental to carefully make a plan in stages, to research and use the most convenient technologies, and to implement software development practices that fit the project needs. At the same time, it is important to evaluate alternatives for possible contingencies that may arise on election day itself when receiving the data.
Finally, we believe that it is important to have a journalistic ambition and instinct to identify current trends that may become news articles and also not to be satisfied with the existing data and always try to find new alternatives to offer a different product to our audience. | OPCFW_CODE |
/* eslint-disable no-console */
import { IAuthenticatedUser } from './IAuthentication';
import { IAuthenticationProvider } from './IAuthenticationProvider';
import { Request } from 'express';
// Note: express http converts all headers
// to lower case.
export const AUTH_HEADER = 'authorization';
const BEARER_AUTH_SCHEME = 'bearer';
const AUTH_SCHEME_REGEX = /(?<scheme>\S+) +(?<value>\S+)/u;
export class BearerAuthenticationProvider implements IAuthenticationProvider {
  public readonly securityScheme = 'bearerAuth';

  public async getAuthenticatedUser(request: Request): Promise<IAuthenticatedUser | null> {
    const token = this.getBearerTokenFromRequest(request);
    if (!token) {
      return null;
    }
    return {
      token,
    };
  }

  private getBearerTokenFromRequest(request: Request): string | null {
    const authHeader = request.headers[AUTH_HEADER];
    if (!authHeader) {
      console.debug(`Missing ${AUTH_HEADER} header in request`);
      return null;
    }
    const regexMatch = AUTH_SCHEME_REGEX.exec(authHeader);
    if (!regexMatch) {
      console.debug(`Header ${AUTH_HEADER} is not in a valid format [scheme value]`);
      return null;
    }
    const [, authScheme, rawToken] = regexMatch;
    if (authScheme.toLowerCase() !== BEARER_AUTH_SCHEME.toLowerCase()) {
      console.debug(
        `Header ${AUTH_HEADER} doesn't use ${BEARER_AUTH_SCHEME} scheme. Found: "${authScheme}"`
      );
      return null;
    }
    return rawToken;
  }
}
| STACK_EDU |
Upgraded to X6 about 10 days ago and, at least initially, I was saving in both X5 and X6 formats because I've learned that you can't always trust a new version of Draw. Love the application but, as all long-time users know, new releases have a history of instability.
After a few days of fairly heavy use, it seemed like I was not going to have the usual problems and I stopped saving my work in X5 format. Probably a mistake because two days ago, it started crashing regularly on just about any document I've created with X6 and the crashes have become more frequent yesterday and today.
For the record, Win7 Pro 64-bit, 8gb RAM, Core 2, dual nVidia 8600GT cards with current WHQL driver, and a 250gb SSD boot drive. Data drive is a 750gb spindle with 450gb free.
When it crashes, the problem details are as listed below. I've tried resetting the workspace <F8> and, have done a repair from the installation routine, no avail. All my current documents are mostly curves, text, some Lens and some Extrudes. If anyone can shed some light, I'd appreciate it but I think for the time being, I'm going to save all my X6 files back to X5 and stick with the previous version.
Problem signature:
Problem Event Name: APPCRASH
Application Name: CorelDRW.exe
Application Version: 220.127.116.117
Application Timestamp: 4f4c60a4
Fault Module Name: CRLCLR.dll
Fault Module Version: 18.104.22.1687
Fault Module Timestamp: 4f4c60eb
Exception Code: c000041d
Exception Offset: 00000000000350aa
OS Version: 6.1.7601.2.1.0.256.48
Locale ID: 1033
Additional Information 1: bbf4
Additional Information 2: bbf4366adc015c7ac77df54f7afd5eca
Additional Information 3: 3b04
Additional Information 4: 3b045daaeb4518b10111589a140a3e97
Just a thought... .NET just had patch updates from MS in the past couple of days. Could it be related (since you say it just started suddenly, and recently)?
I haven't had to use Draw since, but I've heard more than one time of .Net updates causing folks troubles.
In reply to Andrew:
Andrew: Thanks for the thought but I didn't do the 13 file Windows update with the dot NET files in it until this morning..long after I'd been having crashes with X6.
Hello kilogbravo; Just a thought, but you may want to check and see if there is an update for the BIOS. And a question: are you running an out-of-the-box Windows or an off-the-shelf computer company version?
In reply to TheSign Guy:
TSG: I always build my own desktop machines and this is an OEM box copy of W7Pro 64. As for the mobo BIOS, it's been in the machine with X5 running fine for as long as X5 has been out. Today though, after saving all my current X6 files in X5 format, when I started opening and editing them, X5 STARTED crashing, too so obviously, I'm not a happy camper. The current plan is to uninstall both then only re-install X5 along with the SP's. Why Corel can't provide a stable application like Photoshop or Illustrator (which never crash on my system,) is beyond me and honestly, if Illustrator was even close to as good as X5, I'd say adios to Corel for good. Unfortunately, Illustrator still lags far behind.
In reply to kilobravo:
TSG: FYI, the AMI BIOS update did not solve the problem.
Kilobravo; I have had one crash with X6 64-bit, trying to import a large XLS file, and I use it 6 days a week. I also have a 32-bit X6 on a Vista OS and it has never crashed. I get a lot of different types of files to open. People have even sent me files wanting a sign made like something done in Windows Paint or whatever it's called. X6 is the best out-of-the-box version I've had in a LONG LONG time, and it does need some work, but WOW, if it was any better out of the box it wouldn't be a Corel product. Yeah, I think a build is the best way to go for a graphics computer, not that an off-the-shelf one won't work. I just thought, if it wasn't a clean Windows, it may be something they stuck in that was causing the problem!
George: Appreciate the feedback and I certainly wish I could say I've only had one X6 crash. Worse, now X5 is crashing regularly even after uninstalling both and re-installing only X5, SP3, and HF4. Running an SFC on Windoze now but doubt it'll find anything. After that, a full malware scan.
However, it just occurred to me that all the X5 files I've been opening and editing since the re-install are all former X6 files saved in X5 format. So, I'll go back to some known X5 files now and see if I can get it to crash.
Thanks again and if I find out anything, I'll pass it along here.
No help, just commiseration. Your posts have me quaking in my boots. Headed off to back up my x5 backups separately so they're not overwritten by x6.
In reply to harryLondon:
Harry: No page numbering, plus SFC found nothing wrong and no malware after a thorough scan however..
- I trimmed my installed fonts down from about 400 to 300 which substantially improved the sluggishness of X5 and..
- I've been working on a new file created from an X5 template now for over an hour and NO CRASHES..yet. <fingers crossed>
So, it MAY have been too many fonts, can't say for sure. I'll report back after I test it some more and if I still don't see any crashes, I'll install X6 and have another look.
Thanks for the suggestion though..
Update: Unfortunately, not only have I not found the problem, but the situation is worse in that both X5 and X6 are crashing regularly.
I uninstalled both X5 and X6, then re-installed only X5. The first thing I noticed was that SPLWOW64.EXE (the print spooler aux file for 32-bit apps running on a 64-bit OS) was apparently hanging up, as I was finding it in the "wait chain" of the Windoze Resource Monitor process watcher. Googled that error until I was blue in the face and couldn't find any specifically relevant problems with Corel apps.
Then I thought it might be the printer drivers so I removed them completely via printmanagement.msc but no help, X5 kept crashing and here are multiple "problem signatures" from those crashes:
Problem signature:
Problem Event Name: APPCRASH
Application Name: CorelDRW.exe
Application Version: 22.214.171.1245
Application Timestamp: 4e52a655
Fault Module Name: CRLCLR.dll
Fault Module Version: 126.96.36.1995
Fault Module Timestamp: 4e52aa45
Exception Code: c0000005
Exception Offset: 0002b0a6
OS Version: 6.1.7601.2.1.0.256.48
Locale ID: 1033
Additional Information 1: 0a9e
Additional Information 2: 0a9e372d3b4ad19135b953a78882e789
Additional Information 3: 0a9e
Additional Information 4: 0a9e372d3b4ad19135b953a78882e789

Problem signature:
Problem Event Name: APPCRASH
Application Name: CorelDRW.exe
Application Version: 188.8.131.525
Application Timestamp: 4e52a655
Fault Module Name: unknown
Fault Module Version: 0.0.0.0
Fault Module Timestamp: 00000000
Exception Code: c0000005
Exception Offset: ed0de023
OS Version: 6.1.7601.2.1.0.256.48
Locale ID: 1033
There were multiples of each of these two, by the way.
I then ran three more SFC /scannow's and Windoze said all was well every time.
I then decided to do a minimal re-install of X6, no other apps and unfortunately, X6 started crashing almost immediately and during various operations, i.e., saving a file, clicking on another page, copying to the clipboard, etc.
So, I am out of answers and it sure would be nice if a tech from Corel would read this and offer some advice (no offense to those non-Corel folks who have kindly offered suggestions so far). Needless to say, I'm very frustrated and upset that I'm not getting some much-needed work done.
One rather important point I forgot to mention is, I get the exact same crashes on my 6-month-old Dell Latitude i7 notebook running X6 on the same files. So, I'd say it's unlikely that it's a machine-specific problem. Both are running Win7 64 Pro with 8 gigs of memory and both have SSD boot drives.
© Corel Corporation. The content herein is in the form of a personal web log ("Blog") or forum posting. As such, the views expressed in this site are those of the participants and do not necessarily reflect the views of Corel Corporation, or its affiliates and their respective officers, directors, employees and agents. Terms and Conditions / User Guidelines. | OPCFW_CODE |
7.2. Regression Analysis
- Linear Regression
- Polynomial Regression
- Stepwise Regression
- Nonlinear Regression
- Logit / Probit / Gompit
- Logistic Regression
- Multinomial Regression
- Poisson Regression
- Box-Cox Regression
The Regression Analysis is used to estimate the coefficients B0, …, Bm of the equation:
Y = B0 + B1X1 +…+ BmXm
given n observations on m independent variables X1, …, Xm and a dependent variable Y. The Stepwise Regression procedure also determines a subset of the selected variables which contribute significantly to the explanation of variation in the dependent variable.
It is possible to select any numeric column of data as the dependent variable and to select the columns to be included in the analysis as independent variables. A Regression Analysis can be performed by selecting one column as the dependent variable and at least one column as an independent variable. The program will not proceed unless this requirement is met. Regressions can also be run on a sub-set of cases as determined by a combination of factor columns. The Polynomial Regression procedure allows the choice of one independent variable, but will also require the degree of the polynomial to be entered.
The Variable Selection Dialogue contains a check box to include the constant term (or the intercept) in the analysis. The default is regression with constant as in the above equation. If this box is unchecked then the following equation without a constant term will be estimated:
Y = B1X1 + … + BmXm.
An important feature of regression models without a constant term is that the method they employ for calculation of R-squared and adjusted R-squared values is fundamentally different from that of regression with a constant term. Therefore, R-squared values calculated for regressions with and without a constant term are not comparable.
The standard method of calculating the R-squared value for regressions including a constant term can be expressed as:
R-squared = 1 – Var(Residuals) / Var(Dependent)
where Var() stands for variance. However, this definition fails completely when the constant term is omitted from the model. A better definition, which applies to both types of regression, can be made by reference to the ANOVA of Regression table, where Ssq() stands for sum of squares:
R-squared = Ssq(Regression) / Ssq(Total)
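As a sanity check, the two definitions can be compared numerically. The following is a minimal pure-Python sketch (the data are synthetic and purely illustrative) for a one-variable regression including a constant term, where the two definitions coincide because Ssq(Total) = Ssq(Regression) + Ssq(Residuals) holds exactly for least squares with an intercept:

```python
def fit_with_constant(x, y):
    """Ordinary least squares for Y = B0 + B1*X (closed-form solution)."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    b1 = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
          / sum((xi - mx) ** 2 for xi in x))
    b0 = my - b1 * mx
    return b0, b1

# Synthetic, illustrative data
x = [1, 2, 3, 4, 5]
y = [2.1, 3.9, 6.2, 8.0, 9.9]

b0, b1 = fit_with_constant(x, y)
fitted = [b0 + b1 * xi for xi in x]
my = sum(y) / len(y)

ss_total = sum((yi - my) ** 2 for yi in y)
ss_resid = sum((yi - fi) ** 2 for yi, fi in zip(y, fitted))
ss_reg = sum((fi - my) ** 2 for fi in fitted)

# With a constant term, the two definitions agree:
r2_var = 1 - ss_resid / ss_total   # 1 - Var(Residuals) / Var(Dependent)
r2_ssq = ss_reg / ss_total         # Ssq(Regression) / Ssq(Total)
print(r2_var, r2_ssq)
```

When the constant term is omitted, the residuals no longer sum to zero, the decomposition above breaks down, and only the Ssq(Regression) / Ssq(Total) definition remains meaningful, which is why the two kinds of R-squared values are not comparable.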
There is also a slight difference between Linear Regression and Polynomial Regression on one hand and Stepwise Regression, Analysis of Variance and General Linear Model procedures on the other, in the way they handle the degrees of freedom in regressions without a constant. In line with the most common approach in the literature, we here also calculate the degrees of freedom as (n – m, m) in Stepwise Regression, Analysis of Variance and General Linear Model procedures and (n – m, m – 1) in the Linear Regression and Polynomial Regression procedures.
Also, although both groups of procedures operate in double precision, there may be a slight difference between their estimates on the same set of data. The reason for this is that two completely different algorithms are used in each case: the Linear Regression and Polynomial Regression procedures are based on the square root free version of the Cholesky decomposition originally suggested by Gentleman (1974, Applied Statistics, 23, pp. 448-454), whereas the Stepwise Regression, Analysis of Variance and General Linear Model procedures are based on the SWEEP algorithm by Jennrich (in Statistical Methods for Digital Computers, ed. Enslein, Ralston, Wilf, 1977, Wiley, pp. 58-75). The first algorithm is more accurate but the second is more suitable for Stepwise Regression and Analysis of Variance. | OPCFW_CODE |
The documents distributed by this server have been provided by the author as a means to ensure timely dissemination of scholarly and technical work on a noncommercial basis. Copyright and all rights therein are maintained by the author or by other copyright holders, notwithstanding that they have offered their works here electronically. It is understood that all persons copying this information will adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder. Other restrictions to copying individual documents may apply.
Refereed International Journal Papers (with ISI impact factor):
- P. Bubenik, P. Dłotko, A persistence landscapes toolbox for topological statistics, accepted to Journal of Symbolic Computation.
- P. Dłotko, B. Kapidani, R. Specogna, Fast computation of cuts with reduced support by solving maximum circulation problems, IEEE Transactions on Magnetics, 10.1109/TMAG.2014.2359976.
- P. Dłotko, R. Specogna, Topology preserving thinning of cell complexes, IEEE Transactions on Image Processing, DOI:10.1109/TIP.2014.2348799.
- P. Brendel, P. Dłotko, G. Ellis, M. Juda, M. Mrozek, Computing fundamental groups of point clouds, Applicable Algebra in Engineering, Communication and Computing, accepted.
- H. Wagner, P. Dłotko, Towards topological analysis of high-dimensional feature spaces, Computer Vision and Image Understanding, Volume 121, April 2014, Pages 21-26.
- P. Dłotko, H. Wagner, Simplification of complexes for persistent homology computations, Homology, Homotopy and Applications, vol. 16(1), 2014, pp. 49-63.
- P. Dłotko, R. Specogna, Lazy cohomology generators: a breakthrough in (co)homology computations for CEM, IEEE Transactions on Magnetics, vol. 50, issue 2, pp. 577-580.
- G. S. Cochran, Th. Wanner, P. Dłotko, A randomized subdivision algorithm for determining the topology of nodal sets, SIAM J. Sci. Comput., 35(5), B1034-B1054.
- P. Dłotko, R. Specogna, Physics inspired algorithms for (co)homology computations of three-dimensional combinatorial manifolds with boundary, Computer Physics Communications, Volume 184, Issue 10, October 2013, Pages 2257 - 2266, DOI: 10.1016/j.cpc.2013.05.006.
- P. Dłotko, R. Specogna, A novel technique for cohomology computations in engineering practice, Computer Methods in Applied Mechanics and Engineering (2013), pp. 530-542 DOI information: 10.1016/j.cma.2012.08.009.
- P. Dłotko, R. Specogna, Cohomology in electromagnetic modeling, Communications in Computational Physics (CiCP), Vol. 14, No. 1, 2013, pp. 48-76.
- P. Brendel, P. Dłotko, M. Mrozek, N. Zelazna, Homology Computations via Acyclic Subspace, Computational Topology in Image Context, LNCS 7309, pp. 117-127.
- H. Wagner, P. Dłotko, and M. Mrozek. Computational Topology in Text Mining, Computational Topology in Image Context, LNCS 7309, pp. 68 - 79.
- P. Dłotko, A fast algorithm to compute cohomology group generators of orientable 2-manifolds, Pattern Recognition Letters 33 (2012), pp. 1468-1476, DOI: 10.1016/j.patrec.2011.10.005.
- P. Dłotko, M. Juda, M. Mrozek, and R. Ghrist, Distributed computation of coverage in sensor networks by homological methods, Applicable Algebra and Engineering, Communication and Computing, Volume 23, Issue 1 (2012), Page 29-58.
- P. Dłotko, W. G. Kropatsch, H. Wagner: Characterizing Obstacle-Avoiding Paths Using Cohomology Theory. CAIP (1) (LNCS) 2011: 310-317.
- P. Dłotko, R.Specogna, ''Efficient generalized source field computation for h-oriented magnetostatic formulations'',Eur. Phys. J.-Appl. Phys. (EPJ-AP), Vol. 53, 2011, 20801.
- P. Dłotko, T. Kaczynski, M. Mrozek, Th. Wanner, ''Coreduction Homology Algorithm for Regular CW-Complexes'', Discrete & Computational Geometry, 46(2), pp. 361-388, 2011.
- P. Dłotko, R. Specogna, "Critical analysis of spanning tree techniques'', SIAM J. Numer. Anal. Volume 48, Issue 4, pp. 1601-1624 (2010).
- P. Dłotko, R. Specogna "Efficient cohomology computation for electromagnetic modeling'', CMES: Computer Modeling in Engineering & Sciences, Vol. 60, No. 3, 2010, pp. 247-278.
- P. Dłotko, R. Specogna, F. Trevisan, "Voltage and current sources for massive conductors suitable with the $A-\chi$ Geometric Formulation'', IEEE Transactions on Magnetics, vol. 46, no. 8, 2010, pp. 3069-3072.
- P. Dłotko, R. Specogna, F. Trevisan, "Automatic generation of cuts on large-sized meshes for $T-\Omega$ geometric eddy-current formulation'', Computer Methods in Applied Mechanics and Engineering (CMAME), Vol. 198, 2009, pp. 3765-3781. | OPCFW_CODE |
The next thrilling installment ...
I mentioned in an earlier Reply that I am also concerned to ensure that I know exactly which version of the IDE and what libraries were used to generate the program - and, at some future date, to be able to use the exact same libraries to recreate the program.
As well as libraries, my wireless application involves separate Master and Slave .ino files which use a shared .h file with some common data, such as the wireless channel and data rate.
My idea for keeping track of the libraries that are used to generate code, and the project specific .h files requires three simple elements.
First, there are the libraries that are included with the IDE when you download it. If I save a copy of the download file that contains the IDE I will always be able to reproduce that version of the IDE and all its libraries. It is a simple matter manually to put a copy of that file in the Archive Directory.
Second, there are the extra libraries (such as TMRh20's RF24 library) that I download separately. For those libraries all I need to do is save a copy of the libraries alongside the code. That is the reason for saving the archive copy of the .ino file in a directory. The Python program makes a note of any of those extra libraries that are referred to in the .ino file.
Third, with the regular Arduino IDE the only way I have found for referring to a shared .h file that is not in the same directory as the .ino file is to use the full path name, and that system works with my Python script.
This Python program also has the (for me) very positive side effect that it is no longer necessary to create my file Master.ino in a directory called Master. The Python program will copy the program so that the Arduino requirements are complied with. It also works if Master.ino is inside a directory called Master.
When the upload is successful, Python will copy the .ino file, all the library files and any .h files into the Archive Directory called Master-YYYYMMDD-HHMMSS. That way, if I ever need to recreate the Arduino program I can easily find the exact libraries and .ino file that were used to create the program.
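The archiving step described above can be sketched as follows. This is only an illustration of the copy-and-timestamp idea, not the actual script: the function name, arguments, and file names are all hypothetical, and the real program also drives the compile/upload before archiving.

```python
import shutil
from datetime import datetime
from pathlib import Path

def archive_build(sketch, extra_files, archive_root):
    """Copy the .ino file plus its supporting library/.h files into a
    directory named <SketchName>-YYYYMMDD-HHMMSS under archive_root."""
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    dest = Path(archive_root) / f"{Path(sketch).stem}-{stamp}"
    dest.mkdir(parents=True)          # create the timestamped archive dir
    for f in [sketch, *extra_files]:
        shutil.copy2(f, dest)         # copy2 preserves file timestamps
    return dest
```

Called as, say, `archive_build("Master.ino", ["common.h"], "Archive")`, this produces the `Master-YYYYMMDD-HHMMSS` directory layout described above.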
It may help if I illustrate the directory structures that I have in mind.
For writing code ...
For compiling Master.ino
ArduinoTemp.ino (which is a copy of Master.ino)
For archiving Master.ino
arduino-1.5.6-r2-linux32.tgz (the file containing the Arduino IDE with its libraries)
Someone may say that this involves a lot of duplication of files on my hard disk, and that is quite true. But hard disk space is too cheap to be bothered by that.
All comments on either or both Replies will be very welcome. | OPCFW_CODE |
Generally, there is no difference between calling a static method and an instance method in the same class, with the exception that a static method cannot access instance methods or variables. Make sure the class name, access modifier, and argument/parameter types are correct in the called static method.
Qualifying a static call: from outside the defining class, an instance method is called by prefixing it with an object, which is then passed as an implicit parameter to the instance method, e.g. inputTF.setText(""); a static method is called by prefixing it with a class name, e.g. Math.
Additionally, how do you call a method in another class without static? To call a static method we write the class name followed by the name of the method. In a non-static method, the memory of the method is not fixed independently of an instance, so we need a class object to call a non-static method: we write the object name followed by the name of the method.
If you know Java a little bit you know the answer: no, it can not. A static method belongs to the class and not the instance. It can even be executed using the name of the class directly without any instance of the class.
In Java, a static method is a method that belongs to a class rather than an instance of a class. The method is accessible to every instance of a class, but methods defined in an instance are only able to be accessed by that member of a class.
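To make the distinction concrete, here is a small self-contained sketch; the class and method names are my own illustration, not taken from any of the answers above:

```java
// Counter: one static method (no object needed) and two instance
// methods (require an object, since they touch per-object state).
public class Counter {
    private int count = 0;           // instance state

    public void increment() {        // instance method: needs an object
        count++;
    }

    public int getCount() {          // instance method
        return count;
    }

    public static int add(int a, int b) {  // static method: belongs to the class
        return a + b;
    }

    public static void main(String[] args) {
        // Static call: qualified by the class name, no instance required.
        int sum = Counter.add(2, 3);

        // Instance call: requires an object of the class.
        Counter c = new Counter();
        c.increment();

        System.out.println(sum + " " + c.getCount()); // prints "5 1"
    }
}
```

Note that `add` cannot read `count` directly: there is no implicit `this` inside a static method.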
Java program's main method has to be declared static because keyword static allows main to be called without creating an object of the class in which the main method is defined. In this case, main must be declared as public, since it must be called by code outside of its class when the program is started.
Of course, they can, but the opposite is not true, i.e. you cannot obtain a non-static member from a static context, i.e. a static method. The only way to access a non-static variable from a static method is by creating an object of the class the variable belongs to.
The answer is no, you cannot override a static method in Java, though you can declare a method with the same signature in a subclass. It won't be overridden in the exact sense; instead, that is called method hiding. As per Java coding convention, static methods should be accessed by class name rather than through an object.
Static references to non-static variables are not allowed. To access instance variables it is a must to create an object; these are not available in memory before instantiation. Therefore, you cannot make a static reference to non-static fields (variables) in Java.
Difference between static and non-static nested classes in Java: 1) a nested static class doesn't need a reference to the outer class, but a non-static nested class, or inner class, requires an outer class reference. You cannot create an instance of an inner class without creating an instance of the outer class.
Static members are not instance members; they are shared by the class, so basically any instance method can access these static members. Yes, a static method can access a non-static variable. This is done by creating an object of the class and accessing the variable through the object.
You cannot refer to non-static members from a static method. Non-static members (like your fxn(int y)) can be called only from an instance of your class, or you can declare your method as static. A static method can NOT access a non-static method or variable.
An instance, in object-oriented programming (OOP), is a specific realization of any object. An object may be varied in a number of ways. Each realized variation of that object is an instance. The creation of a realized instance is called instantiation. Each time a program runs, it is an instance of that program.
Static methods cannot be overridden because method overriding only occurs in the context of dynamic (i.e. runtime) lookup of methods. Static methods (by their name) are looked up statically (i.e. at compile time). Method overriding happens in the kind of subtype polymorphism that exists in languages like Java and C++.
A reference cannot be made from a static to a non-static method. To make it clear, go through the differences below. Static variables are class variables which belong to the class, with only one instance created initially. Non-static variables, on the other hand, are initialized every time you create an object of the class.
A non-static method does not have the keyword static before the name of the method. A non-static method belongs to an object of the class and you have to create an instance of the class to access it. Non-static methods can access any static method and any static variable without creating an instance of the class.
Normally, no, as that violates the definition of a static method: static methods cannot depend on an instance (as instances have fields that make them stateful). However, if you pass an instance of an object into a static method, then the static method can use that instance to call an instance method.
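That last point can be sketched as follows (the class and method names here are hypothetical, chosen only to illustrate passing an instance into a static method):

```java
// A static method has no implicit 'this', but it can freely call
// instance methods on any object handed to it explicitly.
public class Greeter {
    private final String name;

    public Greeter(String name) {
        this.name = name;
    }

    public String greet() {              // instance method
        return "Hello, " + name;
    }

    // Static method: receives the instance as an explicit parameter.
    public static String greetTwice(Greeter g) {
        return g.greet() + " " + g.greet();
    }

    public static void main(String[] args) {
        Greeter g = new Greeter("Ada");
        System.out.println(Greeter.greetTwice(g)); // prints "Hello, Ada Hello, Ada"
    }
}
```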
Local variables in static methods are just local variables in a static method. They're not static, and they're not special in any way. Static variables are held in memory attached to the corresponding Class objects; any objects referenced by static reference variables just live in the regular heap.
A class method is a method which is bound to the class and not to an object of the class. It has access to the state of the class, as it takes a class parameter that points to the class and not to an object instance. For example, it can modify a class variable that will be applicable to all the instances.
A static method can be called directly from the class, without having to create an instance of the class. A static method can only access static variables; it cannot access instance variables. Since the static method refers to the class, the syntax to call or refer to a static method is: class name.
The main method does not have access to non-static members either, except by creating an object of that class. The main() method cannot have access to non-static variables and methods; you will get "non-static method cannot be referenced from a static context" when you try to do so.
Free PDF Ebook
200 Hardest Brain Teasers Mind-Boggling Puzzles, Problems, and Curious Questions to Sharpen Your Brain
Disclaimer for Accuracy of Information: "This website assumes no responsibility or liability for any errors or omissions in the content of this site.
The information contained in this site is provided by our members and on an "as is" basis with no guarantees of completeness, accuracy, usefulness or timeliness."
|QnA by Community - Overall Statistic 2021|
|Number of Topics||750+| | OPCFW_CODE |
M: Judy arrays are patented - CesareBorgia
http://en.wikipedia.org/wiki/Judy_array#Drawbacks
R: gojomo
Unless you are a patent lawyer/expert specifically in someone's employ,
pointing out that in your layman's opinion that "something appears to be
patented" isn't doing anyone any favors, and may in fact be harming them.
Patents use their own specific, strange language. The claims, as modified by
other precedents, only apply in certain specific situations which may be
different than what a casual reading would imply. And, for any of dozens of
reasons the holder may not be interested in ever trying to enforce the patent.
So simply by raising the possibility, causing attention to be drawn, and
uninformed discussions to be spawned, people's time is being wasted. If they
become uneasy, or start spending engineering effort to 'work around' something
that they hardly understand and that may never be enforced, more time is
wasted.
And by getting more eyes on the fuzzy patent, you may have put more
people/projects at risk of treble damages for 'willful infringement', in the
rare case where the patent is actually enforced later, or undermined their
ability to make a case for obviousness (because many teams came up with the
same approach without seeing the patent).
The better policy is to ignore such "appears to be patented" reports, unless
and until there's a credible threat from the holder(s) to enforce in specific
ways, as checked by experts. Let these patents (and panicked overbroad
interpretations) wither away in unenforced obscurity.
R: nitrogen
The fact that it takes a lawyer to even guess whether a patent applies, and
that the typical strategy is to keep a low profile and hope nobody notices, is
itself a serious flaw in the patent system. Any system in which it's
impossible to predict ahead of time what is safe or legal is broken.
R: jlgreco
Not to mention the fact that engineers are routinely advised to never read
patents, since doing so would increase liability when they inevitably
independently reinvent the same obvious idea.
R: waps
Humans are a learning algorithm in a body. Very, very few learning algorithms
incorporate original thought, because it's computationally expensive.
Extremely, extremely expensive. Genetic algorithms are close to the only ones
and the best reason to use genetic algorithms is when you have no example data
whatsoever, and even with datacenters full of machines, a lot of patience is
required. Otherwise, genetic algorithms are going to get clobbered in
performance by almost every other algorithm.
Which brings me to my point : imho the chances that humans are capable of real
original thought is nil. Don't get me wrong, humans are very capable of
creatively combining ideas from very different disciplines and non-human
sources to arrive at surprising insights and works. But I'm pretty sure humans
are not in fact capable of creating something out of nothing, even when it
comes to intellectual works.
R: kleiba
At the same time, Doug Baskins, the author of Judy, open sourced an
implementation under the LGPL [1]. This license insists "that any patent
license obtained for a version of the library must be consistent with the full
freedom of use specified in this license." [2]
[1] <http://judy.sourceforge.net/downloads/10minutes.htm>
[2] <http://www.gnu.org/licenses/lgpl-2.1.html>
R: wheaties
You can patent a data structure!? Seriously? This is straight up an abstract
idea. I think non-abstract patents are beneficial and help society. This is
just plain idiocy.
R: hcarvalhoalves
Regarding patents, people always complain a particular patent couldn't have
been granted because it's not novel enough, anyone could have thought it, or
that it's math.
Patent law is a black-and-white subject: either you support it or not. It's
impossible to grant _some_ patents under subjective premises and be fair at
the same time.
R: wheaties
As others have said, it is not black and white. You can support patents on
"things" and not patents on "ideas." There's a big difference.
I can tell you mathematically how a windmill grinds corn but that does not
make my description into a windmill. On the other hand, laying down the
mathematical description of a Judy Array IS the array itself!
R: hcarvalhoalves
You're saying the description of how a windmill works is not a description of
_a_ windmill, while the description of how the array works is _an_ array.
How is that true? Isn't a description a model [1], never the thing itself?
[1] <http://en.wikipedia.org/wiki/Conceptual_model>
R: aspensmonster
Since the article has been updated and the "Drawbacks" section removed, here's
the diff showing the original contents of the "Drawbacks" section that the
story linked to, along with the deletion by user Fintler:
[http://en.wikipedia.org/w/index.php?title=Judy_array&dif...](http://en.wikipedia.org/w/index.php?title=Judy_array&diff=532589496&oldid=531806885)
"Removed speculation that this subject is related to the referenced patent.
Wikipedia is not a crystal ball or a place to discuss how the law MAY be
applied."
R: rossjudson
It's released under the LGPL, according to its COPYING file, by HP. I am
pretty sure that means the patent doesn't matter; HP is granting you a license
to use it.
R: mpyne
Strictly speaking the LGPL is a license that relates to copyright, not
patenting. A separate patent license would also be required, unless you use a
copyright license that also includes patent terms (I haven't reviewed LGPL in
awhile to confirm, and obviously you need to specify about what exact version
of LGPL you're referring to anyways).
R: 0x09
Any GPL (any version) explicitly forbids distributing the software with
external conditions restricting its ability to be redistributed. That puts it
at odds with almost any conceivable patent license.
R: mpyne
I think you're underestimating the creativity of patent lawyers a bit.
For instance, just off the top of my head, consider this: "OK, you can
distribute the source all you want, but as soon as you compile it, you've
created a patented product, the binary of which you _can't_ distribute."
After all, the patent itself is supposed to give all the description one needs
of how to implement the patented invention, it's when you fix the patent into
something real that you violate the patent.
Edit: What drove me to mentioning the separateness of patents and copyright
was some of the discussion around GPLv3, e.g.
[http://fsfe.org/campaigns/gplv3/patents-and-
gplv3.en.html#Ex...](http://fsfe.org/campaigns/gplv3/patents-and-
gplv3.en.html#Explicit-patent-grant)
R: batgaijin
As well as the doubly-linked list
[http://www.google.com/patents?id=Szh4AAAAEBAJ&printsec=a...](http://www.google.com/patents?id=Szh4AAAAEBAJ&printsec=abstract#v=onepage&q&f=false)
R: dakimov
This is insane. The US patent system is beyond retarded.
R: millrawr
I actually asked about this on the mailing list some time ago:
<http://comments.gmane.org/gmane.comp.lib.judy.devel/244>
tl;dr: patent was done for defensive reasons.
R: zrail
This has been true for quite a long time, and IIRC is why they're not more
widely used. That and they're pretty complicated to implement properly.
R: beagle3
They are only better for some workloads and not others. e.g., they are
excellent for accessing data in-order, but are worse than a very simple hash
table for random access:
[http://preshing.com/20130107/this-hash-table-is-faster-
than-...](http://preshing.com/20130107/this-hash-table-is-faster-than-a-judy-
array)
(yes, this hash table is vulnerable to timing attacks; point is, for many
workloads Judy brings in considerable complexity but is actually inferior to
other solutions).
R: snogglethorpe
Just an aside, but man, that's a refreshingly readable and pleasant article...
R: ww520
Are we talking about the implementation is patented? Or the algorithm itself
is patented?
You can't patent an algorithm, at least not in the U.S. The expression of an
algorithm can be patented. Patent lawyers often tell people to replace an
algorithm with a system, which is an expression of the algorithm.
R: wmf
Patents tend to be written like " _any_ system that implements algorithm X"
and " _any_ machine-readable medium containing software that implements
algorithm X", which is equivalent to a patent on the algorithm itself.
R: cbsmith
In case of patent lawyers, break glass and extract HAT-trie or crit-bit tree.
R: codeulike
using System.Collections.Patented;
R: dakimov
That's no problem, actually, even if it is patented, because a Judy array is
not a single algorithm or data structure, but instead a compilation of a
number of well-known unpatented data structures and algorithms, so you can
basically swap out a couple of the algorithms used there and get out of the
patent.
If you know a little bit about physics, you know that the speed of light is around 300 million meters per second. If you know a bit more, you know that the exact figure is 299,792,458 meters per second. If you know just a bit more, you know that neither of those are necessarily true.
Here's the problem: "The Speed of Light" is a bit of a misnomer, which is probably one of the reasons scientists tend to just call it c. A more accurate definition of c would be "The Speed Limit of the Universe," because 299,792,458 meters per second is the fastest that any energy, matter, or information can possibly travel. It so happens that light is the only thing we know of that can reach that speed.
|There are contenders, but we haven't quite gotten there yet.|
What that doesn't mean, however, is that light always travels at c. In fact, light only travels at "light speed" in a vacuum. You'll note that the entirety of Earth is not, to our great benefit and relief, a vacuum. We have a whole atmosphere that lets us breathe and stuff.
|That's not to say we don't have some perfectly nice vacuums on Earth|
The effect of the atmosphere on light is relatively small. It shaves off about 90,000 meters per second from light speed, which is a drop in the bucket. "So what's the big deal," you might say, "that's more or less the same. What's the difference?" To which I'd respond, "Are you inside?"
Because if you are inside, the light you're seeing is traveling significantly slower. Even if it's natural light coming through a window. Glass alone will slow down light by almost a third. These are just natural processes that slow light down. If you put some effort into it, you can make light practically crawl. Physicists at Harvard University, led by Lene Hau, used a bizarre state of matter with densely packed, super-cold atoms to slow light to 17 meters per second. That's 38 miles per hour. That's like your morning commute, if you don't take the highway. You could beat light to work, depending on the traffic.
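All of these slowdowns follow from one relationship: light in a medium travels at c divided by the medium's refractive index n. A quick sketch (the indices below are approximate, for illustration only):

```python
# Speed of light in a medium is c / n, where n is the refractive index.
C_VACUUM = 299_792_458  # m/s, exact by definition of the metre

media = {
    "vacuum": 1.0,
    "air": 1.000293,         # approximate, at sea level
    "glass (typical)": 1.5,  # approximate; varies by glass type
}

for name, n in media.items():
    v = C_VACUUM / n
    print(f"{name}: {v:,.0f} m/s ({C_VACUUM - v:,.0f} m/s slower than c)")
```

The roughly 90,000 m/s lost to the atmosphere and the roughly one-third reduction in glass both drop out of the same formula.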
A few years later, those same physicists succeeded in turning light into matter and making it just stop. They then revived it and started it moving again a short distance away. So, congratulations. Any time you move, you are travelling faster than light...given the right conditions.
"Bolt200" by Jmex - Own work. Licensed under CC BY-SA 3.0 via Commons
"Робот пылесос Roomba 780" by Nohau - Own work. Licensed under CC BY-SA 3.0 via Commons
"Sacrumi". Licensed under CC BY-SA 3.0 via Wikipedia - No offense intended :-) | OPCFW_CODE |
I’m not what you’d call an expert developer. Hell, I’m hardly a developer as is. I only know basic coding for games, and I generally work with premade engines like RPG Maker and such so I only have to worry about the game itself. Even so, I find that developing a game, especially as a solo developer, has been a whole chore in itself.
For well over a year, I’ve been using RPG Maker MV to create a story-driven fantasy RPG called The Crystal’s Tale. This game is inspired by the plot of the first novel I have ever written and maintains the original concept while taking the many things I’ve learned over the past 17 years as an author into consideration. And in that past year, I have completed the Prologue chapter, and as of this article being written, I am still not done with Chapter One.
It’s not like what I’ve created is short, either. For only being the prologue and first chapter, the game has quite a bit of substance to it, lasting almost four hours counting the duration of the optional dungeon. (Even then, the optional dungeon takes up about an hour or less depending on when you choose to go in.) But still, for how long I’ve been working on it, you would expect that I would be a little further along in the process. That’s what I assumed, at least. That’s a long time to be working on a game.
But of course, there are more factors in this process than have been accounted for. Life events and changes, work, other creative endeavors (I’m an author first and foremost, so the game comes secondary to my writing work), the list goes on. However, even if you take those out of the way, I still probably wouldn’t have been finished with the first chapter yet. Why is that? Because of the work that goes into making a game by yourself.
When you’re a solo developer, you are the whole dev team. You’re the writer, the programmer, the artist, the music composer, the director, the producer, and so much more, even when using an engine as simple and easy to use as RPG Maker. RPG Maker has some amazing artwork, music, and sound effects built into the engine, which work great as placeholders or if you just want to make a game with the default assets. I’m using the art in the engine for now, since doing all the art myself for a game that will potentially last 30-80 hours would take much longer, and I want the base game finished before I do all of that.
However, there is something I am doing that impedes my progress, and that is composing every single track in the game.
I have a background in music. Nothing extensive; I took four months of music theory, eight years of choir, a couple years of musical theatre, and I’ve been experimenting in music composition since I was 14 years old. And as a fan of video game music, developing my own game and putting my own music in it sounds ideal. But with this, the problem lies with the fact that I want the soundtrack to have a unique song for almost every situation. I even want the main battle theme to change every time you start a new chapter. Doing this, though, results in me stopping the progress of my game for weeks, sometimes months, until I get the music I want written. That’s just how my work flow has been, since I’m not always in the mood to work on my music.
With that being said, it will likely take plenty of time for me to get this game finished, especially as a solo developer who is way too determined to make sure the soundtrack is as good as can be. However, I am excited to share it with you guys. I plan on releasing it completely for free to the public once it is finished, and I will provide updates here! If you are familiar with RPG Maker and have any suggestions or tips or anything, feel free to let me know!
Here are some samples of the game’s soundtrack so far! | OPCFW_CODE |
M: Ask HN: Would completely disabling SSH be a good idea to make a server secure? - andrewstuart
As a backstop, in case hackers find some vulnerability via some other software on the system.<p>Maybe if there was no ssh running then they still couldn't access the machine?
R: bifrost
You could, although honestly you'd have to do it inside of a non-user-
modifiable environment. If you were using FreeBSD, you'd set up an SSLVPN to
the parent host IP and have SSH enabled over that. Then you'd put in a
firewall rule on the parent host that denied ssh out from the jail. Then you'd
run your app/etc under a jail. You could do some of this with securelevels
too, but if they ever popped the kernel you've got a bit more "gameover".
R: smacktoward
If they can get root access (which they'd need to modify SSH) via a hole in
some other package, it's game over whether you have SSH turned on or not; they
could turn it on themselves, or install it if it isn't installed, or install a
backdoor running over some other protocol instead. | HACKER_NEWS |
Norm Life Cycle Model
Social norms are the social glue that makes coordination and cooperation amongst individuals work. Rules have no effect without the social norms that shape and accommodate them. An example is corruption: while most countries have rules against corruption, the prevalent norms often conflict with and override the legal perspective. And while rules may be adapted infrequently, norms are in constant flux based on their interpretation and changing environmental circumstances.
To better understand, operationalise, and foremost analyse normative dynamics, a coherent description of norm dynamics is necessary. Such descriptions have been reflected in various norm life cycle conceptions (e.g., Finnemore and Sikkink (1998), Savarimuthu and Cranefield (2009; 2011), Hollander and Wu (2011), Mahmoud et al. (2014)), as visualised schematically in the following and discussed in Frantz and Pigozzi (2018).
General Norm Life Cycle
As part of a comprehensive review of existing norm life cycles in the multi-agent systems literature, we have developed a synthesised general norm life cycle that integrates shared patterns of the existing life cycle models, resolves ambiguities and fills conceptual gaps. We provide a condensed description of the model below. (For details, see p. 522 onwards in C. K. Frantz and G. Pigozzi (2018): Modelling Norm Dynamics in Multi-Agent Systems, Journal of Applied Logics – IfCoLoG Journal of Logics and their Applications, vol. 5, no. 2, pp. 491-564 [PDF]).
It fundamentally identifies six essential processes in the life cycle of a norm (as shown in the figure below), namely Creation, Transmission, Identification, Internalisation, Enforcement, and Forgetting, which we discuss in the following. Additionally, the model characterises the macro-level phenomena of Norm Emergence and Norm Evolution.
The inception of a norm can either occur based on Creation, i.e., explicit specification of a formal rule (e.g., prohibition of smoking in public) that requires the inception of a new social norm that accommodates this rule. Alternatively, norms may well exist but their creation may be unknown to the observer. In this case norms are only observed once socially entrenched, in which case the individual only becomes aware of a norm by identifying it (e.g., travelling to a new country, observing the “local ways” and identifying behavioural patterns).
Whether created intentionally or identified by observation, norms need to be transmitted in order to be adopted. This can occur in a passive way (e.g., by following norms in the public space, such as not littering or waiting at red lights) that makes such norms observable and accessible to social learning. Alternatively, norms can be transmitted in an active way, such as communication (e.g., telling someone else about tipping behaviour in a given country) or enforcement/punishment (e.g., being scolded for not tipping when expected).
Independent of the observation and identification of a norm, the process of internalisation involves the integration of the new normative content with the individual’s existing normative understanding. This may involve the adjustment of the norm (e.g., by subjective interpretation), of the normative understanding (e.g., existing internalised norms), or rejection of the norm (the fact that a norm can be identified does not necessarily mean that an individual accepts and follows it). However, individuals may even internalise norms they do not follow (i.e., having an internalised understanding of the functioning of a norm). The existence of sociopaths is an illustrative example in this context.
To facilitate its emergence, the internalised norm is then enforced in some way, which may include self-directed enforcement that addresses oneself (e.g., adherence to etiquette) and leads to personal reinforcement, or socialised external enforcement that is targeted at other individuals (e.g., bystanders’ scolding of jaywalkers). External enforcement implicitly feeds back into the transmission process, leading to further spread of the norm.
However, norms not only emerge and get reinforced (whether internally or externally); they can also decay and be substituted by norms that are better geared to addressing the situational context. This process can be characterised as forgetting.
In addition to these fundamental processes there exist meta processes that capture the dynamics of the operation of the fundamental processes and can be understood as macro-level phenomena arising from them. These include norm emergence, in which the recurrent identification, internalisation and socialisation (transmission/enforcement) of a norm leads to its penetration of the social environment (e.g., a group, or society at large). Taking the long-term perspective, the salience of norms can change based on their relevance, which is expressed as variation in norm reinforcement (as captured in the emergence meta process). This process of norm evolution can involve the change and adaptation of norms over time: they may evolve (e.g., come to be used in different contexts detached from the original meaning – think of the English idiom “cutting to the chase”), or simply be forgotten (example: a norm of unlocked house doors is abandoned in the light of increasing crime).
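As a rough illustration of how such processes can be operationalised — every parameter and modelling choice below is hypothetical, not drawn from the cited life cycle models — the interplay of creation, transmission, internalisation, and forgetting can be sketched as a minimal agent-based loop:

```python
import random

random.seed(42)  # fixed seed for a reproducible run

N_AGENTS = 100
ADOPT_P = 0.6    # chance of internalising an observed norm (hypothetical)
FORGET_P = 0.05  # chance of forgetting without reinforcement (hypothetical)

# True = the agent has internalised the norm
agents = [False] * N_AGENTS
agents[0] = True  # Creation: a single "norm entrepreneur"

for step in range(200):
    for i in range(N_AGENTS):
        j = random.randrange(N_AGENTS)  # random pairwise encounter
        if agents[j] and not agents[i] and random.random() < ADOPT_P:
            agents[i] = True   # transmission + internalisation
        elif agents[i] and not agents[j] and random.random() < FORGET_P:
            agents[i] = False  # forgetting in the absence of reinforcement

adoption = sum(agents) / N_AGENTS
print(f"share of adopters after 200 steps: {adoption:.2f}")
```

The macro-level emergence (or extinction) of the norm is not coded anywhere explicitly; it arises from the balance between the micro-level adoption and forgetting rates, which is exactly the micro/macro distinction the life cycle model draws.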
One of the core aspects of delineating a general norm life cycle is to reflect the potential dynamics that norms undergo. This can relate to the salience of a norm, its spread, but also change in its normative content. With respect to the latter aspect specifically, the dynamics of norms can be explicit, based on intentional modification of an internalised norm before socialising (i.e., enforcing) it. Independent of this, norms will be modified unintentionally during different norm life cycle processes. For example, during norm transmission information can be lost or erroneous information introduced. During identification, sensory biases or constraints of the individual may affect the internal representation of the norm. When (and if) internalising the norm, cognitive biases and intentional modification (i.e., reinterpretation) may further lead to modification. Finally, in the context of enforcement, the choice of enforcement, the characteristics of the enforcer(s), and the relationship between enforcer and enforcement target can affect the normative content. The table below summarises these key sources of modification in the life cycle process.
Life cycle processes and associated research fields
Observing the specified processes, one finds that some of them operate on the individual level and are concerned with norm representation on the micro level, whereas others concentrate on the spread and diffusion of norms in the social environment (meso/macro level). The associated processes can thus be characterised as related either to Identification (Identification, Internalisation, Forgetting) or to Emergence (Enforcement and Transmission), as shown in the figure below. This is in alignment with the differing objectives, which involve the representation of norms, the processing of conflicting norms, and their prioritisation in the context of Identification, referred to as Norm Synthesis. In the context of Norm Emergence, the prevalent themes are the rate of diffusion, emerging network structures, and norm convergence. For a comprehensive overview of contributions to both fields, please have a look at pages 516 and 546 of Frantz and Pigozzi (2018).
C. K. Frantz and G. Pigozzi (2018): Modelling Norm Dynamics in Multi-Agent Systems, Journal of Applied Logics – IfCoLoG Journal of Logics and their Applications, vol. 5, no. 2, pp. 491-564 [PDF]
M. Finnemore and K. Sikkink (1998). International norm dynamics and political change. International Organization, 52(4):887–917, 1998.
C. D. Hollander and A. S. Wu (2011). The current state of normative agent-based systems. Journal of Artificial Societies and Social Simulation, 14(2):6, 2011.
M. A. Mahmoud, M. S. Ahmad, M. Z. M. Yusoff, and A. Mustapha (2014). A review of norms and normative multiagent systems. The Scientific World Journal, 2014, Article ID 684587.
B. T. R. Savarimuthu and S. Cranefield (2009). A categorization of simulation works on norms. In G. Boella, G. Pigozzi, and L. van der Torre, editors, Normative Multi-agent Systems, Dagstuhl Seminar Proceedings 09121, pages 39–58, Internationales Begegnungs- und Forschungszentrum für Informatik (IBFI), Schloss Dagstuhl, Germany, 2009.
B. T. R. Savarimuthu and S. Cranefield (2011). Norm creation, spreading and emergence: A survey of simulation models of norms in multi-agent systems. Multiagent and Grid Systems, 7(1):21–54, 2011. | OPCFW_CODE |
Typically, players like to stand when their hand totals ten or more. These pages offer a variety of blackjack games to play online, including classic versions and novel variations to suit every player's taste. In addition, we offer a list of recommended blackjack casinos for players seeking the best online blackjack experience. Blackjack is a popular casino game enjoyed by players worldwide for its fun gameplay and potential for big wins. Blackjack is a casino card game variant of the banking game Twenty-One.
- American players love online blackjack, and who can blame them?
- American Blackjack from Pragmatic Play is a very popular option both for those new to blackjack and those who have been playing it for years.
- Some sites feature both free blackjack and blackjack games for money.
- The object of the game of blackjack is simply to score more points than the dealer without going over 21.
Once everything is in order, the casino will send you an email stating that your account has been created. The platform ensures that the rules and gameplay mechanics are simple and user-friendly, making it easy for both beginners and experienced players to engage with the game. The excitement of blackjack lies in making strategic decisions, watching the dealer's cards, and reaping cash rewards if your tactics pay off. Your goal is to have the total value of your cards come as close to 21 as possible without going over, while also having a higher total than the dealer. Split – if you get two cards of the same value, you can split them into two separate hands.
How Do I Start Playing Vegas Strip Blackjack?
Next you will need to register and confirm your payment method. Once you have created an account at your chosen online casino, deposit with your preferred payment https://vogueplay.com/tz/heart-bingo-casino-review/ method and provide the required verification documents to ensure security. Getting the basics right helps you resist the house edge and win. This how-to-play-blackjack guide is a great place to start if you are beginning from zero — unless you want to go the bonus route and make finding the best blackjack bonuses the first step of your gambling experience.
The Best Real Money Online Blackjack Bonuses
This promotion gives you free credit to play real money blackjack games. Depending on the bonus's terms, players can withdraw any money they win. Many blackjack players like to practice with free versions of the game. This free blackjack practice allows them to try out various strategies and really learn their odds before starting real money blackjack games. Game developers have released a variety of apps in which players can enjoy social games of blackjack.
Basic Online Blackjack Strategy
You want a point total that is greater than the dealer's but that is twenty-one or less. The dealer must hit until they have a point total of at least seventeen. If the blackjack dealer shows a four, five or six, don't take any chances! Double your hand in these situations if you definitely won't bust. The game will deal cards according to their order in the hash from step 3.
When aces are present in a hand, the total shown represents the highest score, not more than 21, that can be made from those cards. Face cards count as ten points; aces may be counted as either one or eleven. All other cards are counted according to their numeric value.
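The scoring rules above are mechanical enough to sketch in code. This is an illustrative helper, not taken from any particular casino platform:

```python
def hand_value(cards):
    """Best blackjack value of a hand: face cards count 10, aces 1 or 11."""
    total = 0
    aces = 0
    for card in cards:
        if card in ("J", "Q", "K"):
            total += 10
        elif card == "A":
            aces += 1
            total += 11       # count high first...
        else:
            total += int(card)
    while total > 21 and aces:
        total -= 10           # ...then demote aces to 1 while busting
        aces -= 1
    return total

print(hand_value(["A", "K"]))       # 21 (a natural blackjack)
print(hand_value(["A", "9", "5"]))  # 15 (the ace drops to 1)
```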
Free Online Blackjack Games to Play for Fun
Some videos are designed purely for entertainment, and while they showcase the game, they offer little to improve your chances of winning. If you stop to think about the best land-based casinos found worldwide, Vegas no doubt springs to mind. Sin City is jam-packed with prestigious casinos where you can play the game of 21 in the most incredible of surroundings. Yet apart from Vegas, there are top casinos featuring blackjack found all around the globe.
This platform may list some operators offering free online blackjack on the page. However, the above reviews cover most of the very best free online blackjack providers. Playing blackjack online for free is excellent practice for when players plan to play for real money. For the most part, the payment methods listed above are eligible for withdrawals.
After installing JetPack 3.1, I ran the camera_recording sample from tegra_multimedia_api/samples/10_camera_recording.
Only the onboard camera is available on my Jetson TX2.
The log is as follows:
Set governor to performance before enabling profiler
LoadOverridesFile: looking for override file [/Calib/camera_override.isp] 1/16
LoadOverridesFile: looking for override file [/data/nvcam/settings/camera_overrides.isp] 2/16
LoadOverridesFile: looking for override file [/opt/nvidia/nvcam/settings/camera_overrides.isp] 3/16
LoadOverridesFile: looking for override file [/var/nvidia/nvcam/settings/camera_overrides.isp] 4/16
LoadOverridesFile: looking for override file [/data/nvcam/camera_overrides.isp] 5/16
LoadOverridesFile: looking for override file [/data/nvcam/settings/e3326_front_P5V27C.isp] 6/16
LoadOverridesFile: looking for override file [/opt/nvidia/nvcam/settings/e3326_front_P5V27C.isp] 7/16
LoadOverridesFile: looking for override file [/var/nvidia/nvcam/settings/e3326_front_P5V27C.isp] 8/16
---- imager: No override file found. ----
PRODUCER: Creating output stream
(Argus) Error NotSupported: Failed to initialize EGLDisplay (in src/eglutils/EGLUtils.cpp, function getDefaultDisplay(), line 75)
(Argus) Error NotSupported: Failed to get default display (in src/api/OutputStreamImpl.cpp, function initialize(), line 80)
(Argus) Error NotSupported: (propagating from src/api/CaptureSessionImpl.cpp, function createOutputStreamInternal(), line 565)
PRODUCER: Launching consumer thread
(Argus) Error BadParameter: NULL output stream (in src/eglstream/FrameConsumerImpl.cpp, function create(), line 25)
Error generated. main.cpp, threadInitialize:142 Failed to create FrameConsumer
Error generated. /home/nvidia/tegra_multimedia_api/argus/samples/utils/Thread.cpp, threadFunction:126 (propagating)
Error generated. main.cpp, execute:440 (propagating)
How can I fix this issue?
By the way, when I was installing JetPack 3.1, some lib files failed to install, such as libopencv4tegra, libopencv4tegra-dev, and cuda-toolkit-8-0. Could this be the reason camera_recording failed?
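One common cause of a `Failed to initialize EGLDisplay` error in the Argus samples is launching them without access to a display (for example, over an SSH session). As a first check — a guess based on the error message, not a confirmed fix for this setup — verify that an X display is available before running the sample:

```shell
# The Argus samples create an EGL output stream, which needs a display.
# If DISPLAY is unset (typical over SSH), point it at the local X server.
if [ -z "$DISPLAY" ]; then
    export DISPLAY=:0   # assumes the default local display; adjust if different
fi
echo "DISPLAY=$DISPLAY"
```

If the sample works when run directly on the board's desktop but not over SSH, a missing display is almost certainly the cause; the failed library installs are a separate problem.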
Repositive is now indexing a wide variety of data from different population studies. These include: sources of data dedicated to the sequencing of a certain population (e.g. the Kadoorie Biobank and GoNL) or of many diverse populations (e.g. SGDP), and studies that contain a large cohort of individuals from many different populations (e.g. the Estonian Biocentre Human Genome Diversity Panel). For more details about the population data we are indexing on Repositive, see the end of this blog post.
Why is population data a valuable resource for the community?
In theory, whole-genome sequencing allows for the complete characterisation of genetic variation in humans. However, this is not possible without studying many individuals from a wide array of populations. This is because different ethnic groups in different geographic locations have different frequencies of genetic variations. Therefore, to link genetic variations to the environment or to certain diseases, one must perform large studies on diverse populations.
You need to study many diverse populations if you want to:
- Cluster rare alleles by geography - 'geographic clustering'.
- Investigate the risk factors of the common chronic diseases in the population.
- Optimise the design of large-scale genetic association studies.
- Study gene-environment interactions.
- Gain insights into the dispersal of modern humans across the globe through history.
- Gain a greater understanding of evolutionary genetics.
"Investigating the medical and evolutionary impact of structural variation requires that we understand the distribution of such variation within a species and the factors influencing that variation: in other words, the population genetics of structural variation."
Donald F Conrad & Matthew E Hurles et al. ^1
Finally, by researching the genetic commonality across ethnic groups researchers hope to also provide a preliminary indication of whether genes involved in drug and enzyme metabolism are common or different across the ethnic groups. ^2
Pikachu variants - all one species but different genetic variants!
You would have to sequence all these Pikachu individuals and many more to know what is a common or rare variation, and which variations are associated with disease.
Sarah, Postdoctoral researcher in Sheffield:
"I'm trying to find causal variants associated with rare neurological disorders. I have a large dataset from patients and healthy controls, and by analysing this I have come up with a list of potential variants. I then compare this list to datasets, like the 1000 genomes, to see if these variants are common or rare. If they're rare then they are more likely to cause these rare disorders."
Adam, Principle Investigator in New Zealand:
"I was searching for genome wide association studies (which require large cohorts) or data banks where genotypic and phenotypic data might be available. I am using the genotypic and phenotypic data from the Kadoorie data bank for construction of a polygenic risk score for cardiovascular and respiratory illnesses."
A bit more detail about the sources of population data on Repositive
The following sources on Repositive are dedicated to, or contain datasets dedicated to, the sequencing of one specific or many diverse human populations:
- GenomeAsia (Browse on Repositive)
- Kadoorie Biobank (Browse on Repositive)
- Singapore Genome Variation Project (Browse on Repositive)
- GigaDB population studies (Browse on Repositive)
All of which I discuss in more detail in my Having trouble finding Chinese genomic data? blog post.
I talk about the value and importance of this dataset in my Simons Genome Diversity Project - Now Featured on Repositive blog post.
I explain how the Estonian Biocentre Human Genome Diversity Panel dataset, from the Estonian Biocentre, brings us One more step towards reducing the ‘European Bias’ in another blog post.
- The 1000 Genomes Project (Browse on Repositive)
- Genome of the Netherlands (Browse on Repositive)
- The THL Biobank (Browse on Repositive)
I haven't talked about these data sources before so I will go into a bit more detail below :-)
The goal of the 1000 Genomes Project, which ironically consists of data for 2,504 individuals from 26 populations, was to find most genetic variants with frequencies of at least 1% in the populations studied. The 1000 Genomes samples have proved a popular resource for molecular phenotyping experiments and investigating the associations between genetic variation and expression or measurements of epigenetic state.
GoNL will also serve as a reference panel for imputation in the available genome-wide association studies in Dutch and other cohorts to refine association signals and uncover population-specific variants. You must apply for access. The resource will be made available to the research and medical community to guide the interpretation of sequencing projects.
The THL Biobank
The Finland National Institute for Health and Welfare's (THL) Biobank contains a unique resource of high-quality longitudinal samples from the Finnish population. It stores collections of human biological samples and information associated with the samples that have been collected for research. The purpose of the THL Biobank is to maintain population-based data for use in future research.
Cover Image credit: Biomedical Genomics & Evolution Lab, Genomics in Health and Disease | OPCFW_CODE |
Event Entry Readme file for Event Entry Class
This is a class that can be used in a content Management system to generate publication dates and publication issue information.
The first version of the class was oriented to a weekly publication that covered events occurring in a particular week.
This version adds functions for monthly and quarterly publications. Semi-weekly is in the five-year plan.
Here is an example of a page that had its dates generated by this <a href="http://www.peggyjostudio.net/Events for week of 08-26-2013.htm" target="_blank">class</a>.
Notice the volume and issue located underneath the blue box at the top. The volume and issue were generated because the first issue of the newsletter was produced on May 31, 2003.
Over in the left column are links to the days in the week's events. The same dates were used to extract the events to include in the newsletter.
The script that creates the newsletter automatically rolls over on Tuesday to the next week so that work on other parts of the newsletter can begin.
We will be using the term next in this readme file. By next we mean that at the current time we are working on a presentation for the next week, month or quarter. Once that presentation is published, we will be working on the next presentation. Therefore, in referring to the previous edition, or edition = -1, we are referring to something that has already been published. Edition = +1 refers to something that will be published after the next publication - a preview.
Calling the script
Use something like the following in your script:
1. Be sure to include the script file.
2. $ee = new eventEntry;
3. $ee->set_begin_publication_date($date_begin); ($date_begin in the form mm/dd/yy -
   the date that the publication first started publishing.)
4. $date_array = $ee->getNextWeekDay("Mon");
   or whichever day of the week your publication will be published,
   e.g. "Tue", "Wed", "Thu", "Fri", "Sat", "Sun".
   You can also add a second parameter after a comma indicating which
   week you are going to publish: -1 indicates last week, +1 is this week
   (which is the default), and +2 gets the week after.
5. The $date_array will contain the following dates:
   * date_begin in the form yyyy-mm-dd
   * date_end in the form yyyy-mm-dd
   * pubdate in the form November 26, 2012
   * pubdate1 in the form Mon. Nov. 26 Events
   * pubdate2 in the form Tue. Nov. 27 Events
   * pubdate3 in the form Wed. Nov. 28 Events
   * pubdate4 in the form Thu. Nov. 29 Events
   * pubdate5 in the form Fri. Nov. 30 Events
   * pubdate6 in the form Sat. Dec. 1 Events
   * pubdate7 in the form Sun. Dec. 2 Events
   * selectdate1 in the form 2012-11-26
   * selectdate2 in the form 2012-11-27
   * selectdate3 in the form 2012-11-28
   * selectdate4 in the form 2012-11-29
   * selectdate5 in the form 2012-11-30
   * selectdate6 in the form 2012-12-01
   * selectdate7 in the form 2012-12-02
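The weekly date logic described above — find the next occurrence of the publication weekday, then derive the week's date range — can be sketched as follows. This is an illustrative re-implementation in Python, not the class itself; the function name and the returned keys mirror the readme but are assumptions:

```python
from datetime import date, timedelta

WEEKDAYS = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]

def next_week_day(pub_day, today, edition=1):
    """Date range for the issue published on the next pub_day.

    edition follows the readme's convention: +1 is the upcoming issue
    (the default), -1 the previous one, +2 the issue after next.
    """
    target = WEEKDAYS.index(pub_day)
    days_ahead = (target - today.weekday()) % 7 or 7  # always strictly ahead
    weeks_offset = edition - 1 if edition > 0 else edition
    begin = today + timedelta(days=days_ahead + 7 * weeks_offset)
    end = begin + timedelta(days=6)
    return {
        "date_begin": begin.isoformat(),         # e.g. 2012-12-31
        "date_end": end.isoformat(),             # e.g. 2013-01-06
        "pubdate": begin.strftime("%B %d, %Y"),  # e.g. December 31, 2012
    }

# With the test date fixed to December 29, 2012 (a Saturday), the next
# Monday issue runs 2012-12-31 through 2013-01-06.
print(next_week_day("Mon", date(2012, 12, 29)))
```

The full class also produces the per-day pubdate/selectdate fields and the volume/issue numbers; those follow the same pattern from the begin date.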
A SimpleTest script is included in this version.
In order to run these scripts, you will need to download the required
files from simpletest.org. The following is a description of the tests.
In all of the tests except the last one we set the test date to December 29, 2012. This overrides the system date, which is used by the class to calculate dates, so that the test assertion values do not have to be changed each time the tests are run. We are testing the logic of the class: when changes to the class are made, we can run the tests to make sure that everything still works. Note that the intervening dates for a period are only generated for weeks. All the tests verify that dates returned from the class are still valid.
1. Tests weekly dates for what was the upcoming week as of December 29, 2012.
2. Tests weekly dates for the week preceding December 29, 2012.
3. Tests weekly dates for the week after the upcoming week.
4. Tests monthly dates for the upcoming month as of December 29, 2012.
5. Tests quarterly dates for the upcoming quarter.
6. Tests quarterly dates for the third quarter of 2012 by setting the test date back to June 29, 2012. | OPCFW_CODE |
There are some masterfully fancy ways to do endings for your mission, but I'm not savvy enough to cover those just yet. Instead, let me show you some simple ways to end your missions and make it so the enemy can beat the players. Simple ways that only take a few moments to create. Triggers are where you go to create your ending. It is hard to generalize creating an ending, but basically with a trigger you can set up an ending with a fairly large number of options. Let's just do an example.
I have a convoy mission where you need to destroy an entire enemy convoy. To make this work as a win condition I placed a trigger with an Axis A of 2500 and an Axis B of 2500, which makes the trigger cover the entire island. I then set the trigger to fire End #1 if no OPFOR are present. Take a look at this screenshot to see what I mean.
So, with that I basically have it so that once every single unit in that convoy is destroyed (abandoned vehicles not counting) then the game will end. You can do some fancy things with the ending, but to make it simple I just added an effect that made a text pop up that said "good job" or some meaningless crap like that, while I also had it play a music track.
You can access all of that in that trigger. If you look at the above screenshot closely, you might notice the Effects box sitting next to the OK box. That is where you can easily have it display text, play music, play sounds, or do numerous other things (all of which are pretty simple to do).
Let's create a simple list of how to do a Not present type of ending.
1. Find the trigger button and place a trigger on the map.
2. Under the Type drop-down box, on the right, select End #1.
3. Under activation select OPFOR or whoever you are fighting against.
4. In the box below that, select Once, so it is a one-time event.
5. The box below that contains Not Present, Present, and Detected by BLUFOR/OPFOR/Independent/Civilians. Select Not Present.
6. On the top left you can modify the shape of this trigger's radius, but most importantly mess with Axis A and Axis B to make sure the enemies are inside that area.
7. To add text pop up, music or sounds once this ending triggers, go to Effects on the bottom right next to OK.
8. The game can now end successfully.
There are obviously a lot of variables that can be changed with this trigger and there are a great many ways you can end it. You can add an ending movie, an ending side mission and so much more, but for now let's just keep it simple.
How to make it so you, or the players, can lose
Once again, we turn to the wonderful triggers tool. You know how with the victory condition we made it so that if enemies are no longer present in the trigger's radius then the players win? Well, we do something very similar to make the players lose in this case. Back to my example of a convoy mission.
So, if you want the players to lose if vehicles get past them and reach an ending destination point, then there are a few things you must do.
1. First off, make sure that you set up the enemy convoy with waypoints that will eventually lead them to a specific area that you want to be the end point.
2. On that end point throw down a small-radius trigger, probably about 200 for Axis A and B.
3. In this trigger, go to the Type drop-down box and select Lose.
4. Under activation put OPFOR, or whichever team is driving the convoy against the players.
5. Make sure to check once below that, not repeatedly.
6. Check the present box.
7. Add effects (next to the OK box) if you want, like text that says "you guys suck" or "you lose" or even fancy failure music or sounds.
This will make it so that if the enemy convoy gets past the players and reaches that point then the players will lose, but you also have that other marker setup to allow for the players to win if they destroy the entirety of the convoy.
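To recap, the two triggers from this example differ in only a few fields (the names follow the editor's labels; the sizes are just the example values used above):

```
Win trigger:  Axis A/B = 2500   Activation = OPFOR   Once   Not Present   Type = End #1
Lose trigger: Axis A/B = 200    Activation = OPFOR   Once   Present       Type = Lose
```

Everything else (effects text, music, and so on) is optional dressing set in the Effects box.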
Personally I like to set up the map for the players so that they know where the convoy starts and where the convoy needs to go. You can do this quite easily with simple markers. Example of one of my maps:
If it isn't obvious, there are a great many ways in which you can have your mission end in victory or failure. You can mess with the trigger button immensely to create many different types, but there are also advanced options that I will likely try to cover at some point. Any questions? Let me know. | OPCFW_CODE |
Does calling advapi32.dll's ProcessIdleTasks really clear memory? The ability to manually clear memory caches and buffers is critical when switching from one major, memory-intensive workload to another; otherwise you'd have to depend on Windows doing it for you.
Most advapi32 errors are caused by a missing or corrupt advapi32.dll. Its functions are divided into several groups; the certificate storage functions, for example, manage the use and retrieval of certificates, certificate revocation lists (CRLs), and certificate trust lists (CTLs). A per-DLL listing (e.g. advapi32.dll: AbortSystemShutdownA) documents which APIs are supported in full by Server Core.
In computer security, _NSAKEY was a variable name discovered in Windows NT 4 Service Pack 5 (which had been released unstripped of its symbolic debugging data) in August 1999 by Andrew D. That variable contained a 1024-bit public key.
windows.h is a Windows-specific header file for the C and C++ programming languages which contains declarations for all of the functions in the Windows API, all the data types used by the various functions, and all the common macros used by Windows programmers. It defines a very large number of Windows-specific types.
An elevation-of-privilege vulnerability exists in the Microsoft Server Message Block (SMB) server when an attacker who has valid credentials attempts to open a specially crafted file over the SMB protocol on the same machine.
To save a credential into the Windows key ring, CredWriteW can be imported from advapi32.dll. C# signature:
[DllImport("Advapi32.dll", SetLastError = true, EntryPoint = "CredWriteW", CharSet = CharSet.Unicode)]
My VB6 program was running on 32-bit; now I have to move it to 64-bit, and the system can't seem to find the lib that I declare in my code.
I have a COM Class in Vb.Net the code is like this
<ComClass(Class1.ClassId, Class1.InterfaceId, Class1.EventsId)> _
Public Class Class1
#Region "COM GUIDs"
' These GUIDs provide the COM identity for this class
' and its COM interfaces. If you change them, existing
' clients will no longer be able to access the class.
Public Const ClassId As String = "74eb4206-6063-4ea1-b499-bfdd9f49f4bf"
Public Const InterfaceId As String = "ff2cdb8a-e1f9-464b-bb6f-ead5812d3630"
Public Const EventsId As String = "a8880b84-9a33-41f2-90f6-c6141835ee61"
' A creatable COM class must have a Public Sub New()
' with no parameters, otherwise, the class will not be
' registered in the COM registry and cannot be created
' via CreateObject.
#End Region
Public Sub New()
End Sub
Public Function MergeFiles(ByVal arrFiles() As String, ByVal strOutputFile As String) As String
    ' ...
End Function
End Class
When imported in VC++ (6.0) we are not able to get the function out. Any suggestions?
Hmmm - you're using VB.NET but still using VC++ 6.0? That makes no sense whatsoever... there's so much good support for importing COM objects easily in later versions of VC++... look up #import.
Open the type library for VB.NET DLL in the OLE/COM Object Viewer (open the Type Libraries branch of the tree view and select your project's type library - it'll be named after your project) to see what interfaces and classes the DLL exports. This'll show the DLL is properly registered and also show you what object you have access to.
Java, Basic, who cares - it's all a bunch of tree-hugging hippy cr*p
I'm using VC6 and the MapWindowGIS control, MapWindowGIS.OCX. It has many interfaces but few examples on how to use it using VC6. I'm kind of new to using COM so my terminology is probably incorrect to some degree.
The problem is I can get to one of the interfaces, IShapeFile, using their example using:
IShapefilePtr* pShapeFile = new IShapefilePtr;
HRESULT hr = pShapeFile->CreateInstance(__uuidof(Shapefile));
CreateInstance(), but when I try this on other interfaces such as IGrid and IGridHeader, CreateInstance() fails with the "Class Not Registered" error (0x80040154).
Since, the control, MapWindowGIS.OCX is registered, and I was able to get the IShapeFile interface, I'm assuming that I need to call QueryInterface() but the IGrid and IGridHeader interfaces are not part of the IShapeFile interface. So, I'm assuming then I need to get a pointer to the MapWindowGIS control but I'm not instantiating it directly since it is #import(ed).
VC6 created the .tli and tlh files and a CWnd derived class from the OCX called, CMap1. Is there something I can call in CMap1 that will return the main object's interface pointer so I can then call QueryInterface()?
I was able to get the IShapeFile interface, I'm assuming that I need to call QueryInterface() but the IGrid and IGridHeader interfaces are not part of the IShapeFile interface
No, but you would normally get, say, an IGridHeader interface by using the IShapeFile interface's QueryInterface method, asking it for an IGridHeader interface. All interfaces inherit from the base interface IUnknown, which has three methods, QueryInterface being one of them. Having got one interface you can then use its QueryInterface method to get another interface implemented by the object.
Well, I'm kind of at a loss because I tried that with the IShapeFile interface (meaning I called QueryInterface for the IGrid interface on the IShapeFile interface) and I'm getting the E_NOINTERFACE error. That is why I was asking about how I get a interface pointer from the OCX control itself.
All of the examples are for VS 2008 in C# and VB.net. The only example they provide for VC6 is for creating a map using a shapefile hence, the IShapeFile interface.
I have the CLSID for the OCX is there some way I can get its main interface or IUnknown interface from the OCX using this CLSID ??
A search took me to the Mapwindow Wiki [^] where it says "An ESRI grid manager object provides functions which facilitate using ESRI grids with MapWinGIS". So it sounds to me as if you should be looking to work with the GridManager before trying to get an IGrid or IGridHeader.
This one can be a good alternative.
But its rather an out of the way we need to go. I think there would be some other straight froward methods might be available to have this information. like some flag or bit that will tell us about whether its COM PE or simple Win32 DLL /EXE
I am working on an OPC HDA Client where I am creating a connection with an OSI PI HDA Server via the code below. But at the time of the Advise call to the server I am getting an error with error code = 0x80040202. The OSI HDA server is located on a remote machine which has full DCOM configuration. I searched and found that this happens due to CONNECT_E_ADVISELIMIT or CONNECT_E_CANNOTCONNECT. Below is my code for Connect(). Can anybody tell me what else I have to do to fix the problem?
#pragma region Code Added For AsyncOperations
ATL::CComPtr<::IConnectionPointContainer> subscripCpcObj = GetAtlInterface<::IConnectionPointContainer>((this->RawObject));
HRESULT hr = subscripCpcObj->FindConnectionPoint(__uuidof(IOPCHDA_DataCallback), &eventObjPtr);
// Get the server's side of the connection.
IntPtr cbPtr = Marshal::GetComInterfaceForObject(this, OpcHdaEventSink::typeid);
callbackPtr = reinterpret_cast<IOPCEventSink*>(cbPtr.ToPointer());
// Advise for the events.
hr = eventObjPtr->Advise(callbackPtr, &cookie);
//Hold a reference to the connection point for shutdown.
this->callbackCookie = cookie;
this->callbackConnectionPoint = GetDotNetInterface<::IConnectionPoint>(eventObjPtr);
if(this->callbackConnectionPoint == nullptr)
OutputDebugString(L"HdaServer:->callbackConnectionPoint is NULL!");
I have created the COM component in VC6.0 using ATL 3.0
I am using it in VB6.0
I have added the BSTR Event.
I want to pass the string with NULL character in between the string from VC to VB
e.g. I want to fire a event with string "ABC\0XYZ"
So I can get the complete string in VB as ABCXYZ
(The box glyph indicates an unprintable Unicode character.)
But I am getting only "ABC" string.
Please tell me the way to pass the string with NULL character from vc to vb.
Note: Without seeing any of your code, I'm just guessing...
It's probably because when you create the BSTR, you let it count characters to determine its length, rather than telling it what the length actually is. Any string length function in C/C++ will see a NULL as the end of string.
So, create your BSTR like this, telling it what its length is:
BSTR b = SysAllocStringLen(L"ABC\0XYZ", 7);
Java, Basic, who cares - it's all a bunch of tree-hugging hippy cr*p
As I recall you won't be able to do as you want; either the ATL Event Wizard generated code sees the internal NULL and loses the rest (even with the BSTR length set) or VB will see a NULL-terminated string. I'm pretty sure you'll have to do more one way or the other. Maybe pass an array of characters?
The NULL character is your problem. Try replacing it with some other unique Unicode character which both parts of your application can recognise. Alternatively put the box character in your source string in the first place.
I am using the MSXML namespace in my application to traverse and read an XML file, but I have some confusion about MSXML: is it related to the C++ DOM APIs? What is its relation to COM? Do COM and DOM provide different APIs? And then what is DCOM?
Please suggest something; I am too confused.
Thanks & Regards
|Catznip Release : R12.2|
These pages are intended for people wanting to assist with Catznip Development, Alpha and Beta testing, and compiling the viewer for Second Life themselves from source. Are you really sure you want to be here?
Viewer features may not be discoverable, user interface may be incomplete, non functional or entirely missing. There will be many bugs and crashes and this may incur asset loss, fires and lost productivity. You might not even be able to log in, or end up wishing you hadn't.
Alpha means broken. There is no support. INVITE ONLY.
We are always looking to expand our beta test team! This is NOT the place to come looking for the latest feature complete version of the viewer (See our Latest Release).
We really can't stress enough, as far as Catznip is concerned, BETA DOES NOT MEAN BETTER. There will be known issues, half finished features, incomplete UI and lots of weird behaviour. There is no support.
- You MUST use Catznip as your primary viewer.
- You will be a bit OCD with an urge to press every button you find, or repeat the same actions over and over just to find a repro.
- You must have an account on the Catznip JIRA and be able to submit reports.
- There is no guarantee that you will be able to use the latest beta viewer.
- Updates are forced and might not work for you at all.
- Expect personal settings to be wiped without warning.
- Crash reports may submit significantly more data; see Catznip:Privacy_policy.
To restate. If you just want to see the new Catznip, then our beta team is NOT for you. If you're still interested and feeling brave, head over to our Beta Testers page and dive in!
Compiling / Building the viewer
We do not support self builds of Catznip. We do not support self building at all. We will not help you debug your build environment and we don't promise any instructions provided here will be accurate, complete or even helpful. In short, you're on your own. May the force be with you.
This category has the following 16 subcategories, out of 16 total.
Pages in category "Development"
The following 23 pages are in this category, out of 23 total.
- Catznip 2.6 Release Notes
- Catznip 2.8 Release Notes
- Catznip 2.8.0 f RC 5 Release Notes
- Catznip 3.2.0 Beta Release Notes
- Catznip 3.2.0 R3 & R4 Release Notes
- Catznip R10 Release Notes
- Catznip R12 1 Release Notes
- Catznip R12 2 Release Notes
- Catznip R5 Release Notes
- Catznip R6 Release Notes
- Catznip R7 Release Notes
- Catznip R8 Release Notes
- Catznip R8.1 Release Notes
- Catznip R9 Release Notes
- Catznip Team
- Crash Debugger | OPCFW_CODE |
"""
Test module for UnivariateInput instances with a Beta distribution.
"""
import pytest
import numpy as np
from uqtestfuns.core.prob_input.univariate_distribution import UnivDist
from uqtestfuns.global_settings import ARRAY_FLOAT
from conftest import create_random_alphanumeric
def _mean(parameters: ARRAY_FLOAT) -> float:
    """Compute the analytical mean of a four-parameter Beta distribution.

    With shapes (alpha, beta) = parameters[0:2] and bounds
    (lb, ub) = parameters[2:4]: mean = lb + (ub - lb) * alpha / (alpha + beta).
    """
mean = float(
parameters[2]
+ (parameters[3] - parameters[2])
* (parameters[0] / (parameters[0] + parameters[1]))
)
return mean
def _std(parameters: ARRAY_FLOAT) -> float:
    """Compute the analytical standard deviation of a Beta distribution.

    With shapes (alpha, beta) = parameters[0:2] and bounds
    (lb, ub) = parameters[2:4]:
    std = (ub - lb) / (alpha + beta) * sqrt(alpha * beta / (alpha + beta + 1)).
    """
std = float(
(parameters[3] - parameters[2])
/ (parameters[0] + parameters[1])
* np.sqrt(
(parameters[0] * parameters[1])
/ (parameters[0] + parameters[1] + 1)
)
)
return std
def test_wrong_number_of_parameters() -> None:
"""Test the failure when specifying invalid number of parameters."""
name = create_random_alphanumeric(5)
distribution = "beta"
# Beta distribution expects 4 parameters not 6!
parameters = np.sort(np.random.rand(6))
with pytest.raises(ValueError):
UnivDist(name=name, distribution=distribution, parameters=parameters)
def test_failed_parameter_verification() -> None:
"""Test the failure when specifying the wrong parameter values"""
name = create_random_alphanumeric(10)
distribution = "beta"
# The 1st parameter of the Beta distribution must be strictly positive!
parameters = [-7.71, 10, 1, 2]
with pytest.raises(ValueError):
UnivDist(name=name, distribution=distribution, parameters=parameters)
# The 2nd parameter of the Beta distribution must be strictly positive!
parameters = [7.71, -10, 1, 2]
with pytest.raises(ValueError):
UnivDist(name=name, distribution=distribution, parameters=parameters)
# The lower bound must be smaller than upper bound!
parameters = [1, 2, 4, 3]
with pytest.raises(ValueError):
UnivDist(name=name, distribution=distribution, parameters=parameters)
def test_estimate_mean() -> None:
"""Test the mean estimation of a Beta distribution."""
# Create an instance of a Beta UnivariateInput
name = create_random_alphanumeric(10)
distribution = "beta"
parameters = np.sort(2 * np.random.rand(4))
my_univariate_input = UnivDist(
name=name, distribution=distribution, parameters=parameters
)
sample_size = 100000
xx = my_univariate_input.get_sample(sample_size)
# Estimated result
mean = np.mean(xx)
# Analytical result
mean_ref = _mean(parameters)
# Assertion
assert np.isclose(mean, mean_ref, rtol=5e-02, atol=5e-03)
def test_estimate_std() -> None:
"""Test the standard deviation estimation of a Beta distribution."""
# Create an instance of a Beta UnivariateInput
name = create_random_alphanumeric(10)
distribution = "beta"
parameters = np.sort(2 * np.random.rand(4))
my_univariate_input = UnivDist(
name=name, distribution=distribution, parameters=parameters
)
sample_size = 100000
xx = my_univariate_input.get_sample(sample_size)
# Estimated result
std = np.std(xx)
# Analytical result
std_ref = _std(parameters)
# Assertion
assert np.allclose(std, std_ref, rtol=5e-02, atol=5e-03)
| STACK_EDU |
Practice questions for the Oracle 1Z0-868 exam (Java Enterprise Edition 5 Enterprise Architect Certified Master Upgrade Exam).
2021 Sep 1Z0-868 test questions
Q11. The current architecture of a fashion web site consists of one web server, three application servers, and a database. You, as the lead architect, recommend adding more web servers. What are two valid justifications for the new architecture? (Choose two.)
A. New web servers will decrease latency for I/O-bound requests.
B. Adding multiple web servers will have a positive impact on scalability.
C. Adding new web servers will increase the overall availability of the web site.
D. New web servers will increase the number of user accounts that can be supported.
Q12. What directly addresses a non-repudiation requirement?
A. input validation
B. identification and authentication
C. certificate authority (CA) certificates
D. encryption of a hash using a private key
Q13. A brokerage firm hired you to evaluate a re-architected CPU-bound application they use in-house to do market forecasting. This application is currently architected using a single business tier, where complex algorithms are used to process large amounts of data to forecast market trends. The current machine cannot be scaled vertically any further. A prototype was built, in which the business tier was split into two tiers, where one depends on services provided by the other. They were then deployed to two servers for testing. The prototype exhibited better scalability. Which statement would explain this result?
A. The applications deployed were simpler.
B. There were more resources available to process any one request.
C. There was additional network traffic between the two business tiers of the application.
D. The business model was simplified by partitioning it into tiers with well-defined limited interactions.
Q14. A teenage fashion website has a multi-tier web application with 103 web servers, 12 middle-tier servers, and a large RDBMS server with more than enough capacity to support peak loads. You are the architect of the system, and you are concerned about the reliability of the web application. Which change could you make to improve reliability?
A. add additional web servers
B. add additional database servers
C. add additional middle-tier servers
D. reduce the number of web servers
E. reduce the number of middle-tier servers
Q15. A bank designed its first-generation web-based banking system around a Java technology rich client application that interacts with server-side service objects implemented as stateful session beans in a portable Java EE application. For their second-generation system, the company wants to open the architecture to other types of clients. The company is considering exposing its existing stateful session bean service as a web service. Which statement is true?
A. Session beans cannot be exposed as web services.
B. Stateful session beans cannot be exposed as web services.
C. Stateful session beans are automatically exposed as web services.
D. Stateful session beans annotated with @WebService are exposed as web services.
More 1Z0-868 exam questions:
Q16. What are three benefits of using the Data Access Object pattern? (Choose three.)
A. enables transparency
B. encapsulates access
C. enables easier database migration
D. simplifies the interface to business objects
Q17. An online sporting goods store's web application uses HTTPSession to store shopping carts. When the application is initially deployed, the business plan predicts only a few customers will access the site. Over time, the store projects a steady increase in volume. The deployment plan calls for a single web container in the initial deployment. As demand increases, the plan calls for multiple web containers on separate hardware with clustered HTTPSession objects. Which two principles will help the application meet the requirements and optimize performance? (Choose two.)
A. The application should store as much as possible in HTTPSession objects.
B. The application should NOT make frequent updates to HTTPSession objects.
C. The application should make coarse-grained updates to HTTPSession objects.
D. The application should create new HTTPSession objects instead of updating existing objects.
Q18. In order to handle your n-tier application's persistence requirements directly from web-tier components, which three statements about your application should be true? (Choose three.)
A. Your application will NOT need to use DAOs.
B. Your application has no need for an LDAP server.
C. Your application is such that scalability is NOT a concern.
D. Your application has no need for concurrency management.
E. Your application has no need for container managed transactions.
Q19. A company manufactures widgets for sale to distributors. Distributors call this company when they want to order more widgets. The company wants the distributors to send orders using XML documents over the Internet to reduce the number of data entry personnel needed. It has no control over the distributor's technologies. The company does not want the orders to impact the performance of the other users. You have been assigned the task of designing the new API. Which approach do you take?
A. design the API as a JMS queue
B. design the API as an RMI interface
C. design the API as a synchronous web service
D. design the API as an asynchronous web service
Q20. What are two capabilities of the Decorator pattern? (Choose two.)
A. provides a unified interface to a subsystem
B. converts the interface of a class into another interface
C. is used when the base class is unavailable for subclassing
D. promotes loose coupling by keeping objects from referring to each other
E. modifies responsibilities to individual objects dynamically and transparently | OPCFW_CODE |
If $\lim_{N\to \infty}\left(a\sqrt{2N^{2}+N+1}-bN\right)=2$, then what is $a^{2}+b^{2}$?
I have tried two approaches. First, I differentiate the expression with respect to $N$ (assuming it converges), and I end up with $a\sqrt{2}-b=0$.
My second approach is to neglect the lower-degree terms in $N$, which means $$\lim_{N\to \infty}a\sqrt{2N^{2}+N+1}-bN=\lim_{N\to \infty}a\sqrt{2N^{2}}-bN=2$$
$$\lim_{N\to \infty}a\sqrt{2}N-bN=2$$
which, after dividing both sides by $N$, gives
$$a\sqrt{2}-b=\lim_{N\to \infty}\frac{2}{N}$$
$$a\sqrt{2}-b=0$$
Both approaches reach the same point. What should I do next?
I know for sure that $a^{2}+b^{2}=96$, since $a=4\sqrt{2}$ and $b=8$.
How could we prove it?
$$\lim_{N\to \infty}a\sqrt{2N^{2}+N+1}-bN=\lim_{N\to \infty}a\sqrt{2N^{2}}-bN=2$$
This is incorrect: you cannot simply discard the lower-order terms inside the square root, because the two large terms cancel and the discarded terms still contribute to the limit.
$$\begin{align}
L&=\lim_{N\to \infty}a\sqrt{2N^{2}+N+1}-bN\\
\\
&=\lim_{N\to \infty} \frac{a\sqrt{2N^{2}+N+1}-bN}{a\sqrt{2N^{2}+N+1}+bN}\cdot (a\sqrt{2N^{2}+N+1}+bN)\\
\\
&=\lim_{N\to \infty} \frac{a^2(2N^{2}+N+1)-b^2N^2}{a\sqrt{2N^{2}+N+1}+bN}~~~~~\Rightarrow 2a^2=b^2\tag{1}\\
\\
&=\lim_{N\to \infty} \frac{a^2(N+1)}{a\sqrt{2N^{2}+N+1}+bN}\\
\\
&=\frac{a^2}{a\sqrt2 +b}=2\tag{2}
\end{align}$$
For the limit to be finite, the $N^{2}$ terms in the numerator must cancel, which forces condition (1), $2a^{2}=b^{2}$; equation (2) then gives $\frac{a^{2}}{a\sqrt{2}+b}=2$. Solve (1) and (2) and you get $a=4\sqrt{2}$ and $b=8$.
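The claimed values can also be sanity-checked numerically; this quick Python sketch (not part of the original answer) evaluates the expression at growing $N$ with $a=4\sqrt{2}$ and $b=8$:

```python
import math

# Claimed solution: a = 4*sqrt(2), b = 8, so that
#   lim_{N -> oo} a*sqrt(2N^2 + N + 1) - b*N = 2
a = 4 * math.sqrt(2)
b = 8.0

def expr(N):
    """Evaluate a*sqrt(2N^2 + N + 1) - b*N at a finite N."""
    return a * math.sqrt(2 * N**2 + N + 1) - b * N

for N in (10**2, 10**4, 10**6):
    print(N, expr(N))  # values approach 2 as N grows
```

The convergence is slow (the error decays like $1/N$), which is why neglecting the lower-order terms too early loses the value of the limit.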
| STACK_EXCHANGE |
The Redmond, Wash.-based company is attempting to make Office System more attractive to larger businesses as demand for desktop application software begins to slow.
In March, Microsoft unveiled the Office System brand, which includes Office 2003, Project and other software. Directions on Microsoft analyst Paul DeGroot described the branding as an attempt to make Office a "platform" that developers and businesses can use as a base for custom applications and business processes.
"As Microsoft moves to brand Office more as an enterprise business tool and less as personal productivity tool, we will see moves like this occur more often," said Jupiter Research analyst Michael Gartenberg.
Under new branding announced on Monday, Microsoft's business portal software will be called Office SharePoint Portal Server 2003. SharePoint, which offers collaboration tools intended to help companies enhance employee productivity, is primarily used to create private, companywide information Websites for employees or business partners.
But in order to use that software, businesses will have to first move to Windows Server 2003, a new server operating system that launches April 24.
"Yes, you do need Windows Server 2003 (to run SharePoint)," said Erik Ryan, Microsoft's product manager for SharePoint Portal Server. He said the added collaboration technology "makes the upgrade absolutely worth it."
The portal software also relies on features found in many other Office System products, such as Excel, Outlook and Word, so users will have to buy them to get the full benefit from SharePoint Portal Server.
SharePoint Portal Server is in beta testing. Microsoft plans to release the final version this summer, Ryan said. The company has not released pricing information.
Adding a server product to the Office System family could increase the legal scrutiny of Microsoft's business practices, analysts warned. The European Union continues an antitrust investigation into charges that Microsoft used its dominance in desktop software to gain a foothold in the server market.
"This (move) could be of concern for those who care about Microsoft's past antitrust (issues)," said Rich Gray a Menlo Park, Calif.-based antitrust attorney that closely follows the European Union investigation.
Gray said antitrust law enforcers would follow the move closely if "Microsoft's strategy is to use its full range of product offerings as part of a plan that reinforces barriers to entry around its monopoly product while extending the power of its monopoly into other product areas."
Upgrades on Microsoft's mind
The rebranding of SharePoint with Office highlights the longstanding problem Microsoft has with how to enhance the value of the productivity suite. Bulking up the features of this release is important because of the large number of businesses using older versions of the software.
A March survey conducted by Yankee Group and integrator Sunbelt Software found that 32 percent of existing Office users plan to upgrade in the first 12 months after version 2003 ships. About 5 percent of early adopters run Office 95 and 44 percent use version 97, according to the study.
Because the market for Office is saturated, adding new features to the desktop software may not be enough to drive upgrades. The company is banking that adding more features from the server software to Office will make that suite more appealing to businesses. At the same time, Microsoft would increase sales of other products. For example, to harness the full potential of SharePoint Portal Server, a company would also need to purchase Office, Windows Server and BizTalk Server.
"It is kind of unusual for a server product to leave the server fold," DeGroot said. But given that Office System is now part of Microsoft's Information Worker division, looked another way, the change makes sense.
Certainly, the portals created by SharePoint are about sharing information. SharePoint empowers the creation of portals, which would reside on a company's Intranet or extranet, for individuals, divisions or the enterprise. A business might make one portal for getting out sales or human resource information, while smaller groups might create portals for collaborating on projects. Individual employees might create personal Web sites viewable by anyone or private ones with information on salary and vacation time.
"SharePoint has its greatest utility for people that need to access corporate information from their desktops," DeGroot said.
The underlying collaboration technology used in the portal software also is used in Office XP and 2003. In March, Microsoft changed the name of its collaboration technology to SharePoint Services from SharePoint Team Services. But the two SharePoint names--portal and services--have caused some confusion.
"SharePoint Portal is a product. SharePoint Services is a technology," Ryan said.
SharePoint Services is heavily integrated into Office 2003. People working on a project might use the technology to create a shared document work space for collaboration purposes. All members of the shared work space would be able to annotate or change the documents, with the full history of changes available to all participants. SharePoint Portal Server uses the technology to create similar collaboration work spaces on personal or division Web pages.
DeGroot speculated that because of SharePoint Services and the number of client features, many tied directly to Office, "Microsoft said why not let the Office client team handle the product."
Blurring the lines
But the change raises questions about other Microsoft server products where there is significant overlap between client and server software.
"What about Exchange?" DeGroot asked. "What do people use it for other than Outlook?"
In fact, Microsoft is expected to release Exchange Server 2003 concurrently with Office 2003, because of the timing of Outlook's availability. Like SharePoint Portal Server, a testing version of Exchange Server 2003 ships in the Office 2003 Beta 2 kit Microsoft is distributing to about a half-million businesses and individuals.
The blurring between Microsoft's client and server products may only increase as the company pushes further into the enterprise, say analysts. Microsoft's drive to deliver support for Extensible Markup Language (XML) in Office System could contribute to this blurring. Many companies, including Microsoft, are using XML to deliver Web services.
"While there are nice features that are part of the traditional Office productivity tools, the real story behind Office 11 is XML integration and the ability to share and collaborate beyond mainstream office communication," Jupiter's Gartenberg said. Office 11 is the codename of Office 2003.
The Yankee-Sunbelt survey of technology managers found that 10 percent of early adopters of Office System were making the switch "as a forerunner to Web Services," said Yankee analyst Laura DiDio. | OPCFW_CODE |
Don't Be Afraid of Bigger Snakes! You may be scared when you see bigger snakes near you, but you have a real advantage as long as you keep your distance from them. Because you are smaller, you can move quickly and escape their attacks. Just make sure you don't get caged in.
Collect the Remains of Eliminated Snakes! Many snakes are eliminated every minute in Slither.io, and they leave great treasure behind them. Make sure you keep chasing it. If you want a nickname with emojis and the other neat symbols you see on other snakes, all you need to do is read the rest of our post, which we have prepared for you. As you probably already know, slither.io is one of the most downloaded and played games in both the App Store and Google Play. If you are ready to play slither.io with your friends, you can recognize them immediately by their nicknames, or be recognized yourself by using custom letters and special symbols in your own.
Getting a unique nickname is very easy. All you have to do is copy one or more of the following symbols and paste them into the nickname field on Slither.io in your browser. You can also let us know which nickname with special designs you would like by leaving us a comment, so that we can prepare it for you. This simple yet challenging game takes the basic snake game you probably remember best from 90s Nokia phones and turns it into a free-roaming online multiplayer experience that you just have to try for yourself.
Slither.io is a free download in both the App Store and Google Play Store. Get the basics down first: Slither.io takes some getting used to, in terms of both controls and strategy. Here are some quick tips if you're new to the game. There are two ways to control your snake's direction: with one finger or two. To control your snake with one finger, just tap and hold anywhere near your snake and it will head in that direction. Drag your finger along the edge of the screen to smoothly steer your snake. Alternatively, tap anywhere on the screen and your snake will turn and head in that direction. This works equally well – if not better – with a stylus (paging Samsung Galaxy Note users).
To control your snake with two fingers, hold the phone in both hands and tap back and forth with your thumbs to steer your snake in an appropriately snake-like fashion. Tap above your snake to go up and tap below your snake to go down. The advantage of two-finger control is the ability to make quick turns to attack or defend against other snakes. It also helps you collect more orbs when you're smaller and traveling through an orb-rich area.
Either way you play, you use boost the same way: double-tap and hold in the direction you want to boost. Just keep in mind that boosting spends orb energy and therefore makes your snake shorter, so use it sparingly. When you boost, you lay down a trail of orbs behind you. Following orb trails is a good idea for several reasons. For starters, it's a fairly quick way to find a steady stream of orbs when you're just starting out, and it can help you find other snakes. Following the tail end of a giant snake? It's only a matter of time before it starts spewing out orbs, plus you're in a prime position if it goes down.
Whether you've taken down a giant snake yourself or just stumbled across a goldmine of orbs, you shouldn't glide straight through it all, because that leaves you vulnerable to being cut off by someone coming the other way. A better strategy is to use your snake's body to form a barrier around the orbs as quickly as you can, then loop back and collect them. While you won't grab as many orbs as you would by boosting straight through, this is a safer approach that may also lure foolhardy boosters right into you, which means more orbs for you. As you become familiar with Slither.io, you'll notice that some of the glowing orbs move. They'll do their best to run away from you, so you'll want to use boost to catch them. The payoff is that they're worth much more than a regular orb, so if you're just starting out you'll see visible progress when you snag one. For the same reason, though, if you see one coming right at you, chances are there's a snake chasing it your way. You could be in a prime position to ambush an unsuspecting player.
Be opportunistic around big snakes. When you're a small snake (say, below 1,000) and you stumble across a huge snake passing by, you may want to stick close to it for a while – for several reasons, actually.
/*!
This module contains data structures and methods for interacting with selectable
encryption algorithms.
*/
// In this case, this lint results in harder to read code for security critical portions
#![allow(clippy::match_same_arms)]
// We are going to be allowing unused imports and unused variables a lot in this module, to make the
// code a bit cleaner. We write the code assuming that the user will compile with at least one
// encryption method (this is an encrypting archiver after all)
mod aes_shim;
#[cfg(feature = "chacha20")]
use chacha20::ChaCha20;
use rand::prelude::*;
#[allow(unused_imports)]
use serde::{Deserialize, Serialize};
#[allow(unused_imports)]
use std::cmp;
#[allow(unused_imports)]
use stream_cipher::generic_array::GenericArray;
#[allow(unused_imports)]
use stream_cipher::{NewStreamCipher, SyncStreamCipher};
use thiserror::Error;
#[allow(unused_imports)]
use zeroize::Zeroize;
#[cfg(feature = "aes-family")]
use crate::repository::Key;
/// Error describing things that can go wrong with encryption/decryption
#[derive(Error, Debug)]
#[allow(clippy::empty_enum)]
pub enum EncryptionError {}
type Result<T> = std::result::Result<T, EncryptionError>;
/// Tag for the encryption algorithm and IV used by a particular chunk
#[derive(Copy, Clone, Serialize, Deserialize, Debug, PartialEq, Eq, Hash)]
pub enum Encryption {
NoEncryption,
AES256CTR { iv: [u8; 16] },
ChaCha20 { iv: [u8; 12] },
}
impl Encryption {
/// Creates a new `AES256CTR` with a random securely generated IV
pub fn new_aes256ctr() -> Encryption {
let mut iv: [u8; 16] = [0; 16];
thread_rng().fill_bytes(&mut iv);
Encryption::AES256CTR { iv }
}
/// Creates a new `ChaCha20` with a random securely generated IV
pub fn new_chacha20() -> Encryption {
let mut iv: [u8; 12] = [0; 12];
thread_rng().fill_bytes(&mut iv);
Encryption::ChaCha20 { iv }
}
/// Returns the key length of this encryption method in bytes
///
/// `NoEncryption` has a key length of 16 bytes, as some things rely on a non-zero key
/// length.
pub fn key_length(&self) -> usize {
match self {
Encryption::NoEncryption => 16,
Encryption::AES256CTR { .. } => 32,
Encryption::ChaCha20 { .. } => 32,
}
}
/// Encrypts a bytestring using the algorithm specified in the tag, and the
/// given key.
///
/// Still requires a key in the event of no encryption, but it does not read this
/// key, so any value can be used. Will pad key with zeros if it is too short
///
/// # Panics
///
/// Will panic if the user selects an encryption algorithm for which support has not
/// been compiled in, or if encryption otherwise fails.
pub fn encrypt(&mut self, data: &[u8], key: &Key) -> Vec<u8> {
self.encrypt_bytes(data, key.key())
}
/// Internal method that does the actual encryption, please use the encrypt method
/// to avoid key confusion
///
/// # Panics:
///
/// Panics if the user selects an encryption algorithm that support was not compiled
/// in for.
#[allow(unused_variables)]
pub fn encrypt_bytes(&mut self, data: &[u8], key: &[u8]) -> Vec<u8> {
*self = self.new_iv();
match self {
Encryption::NoEncryption => data.to_vec(),
Encryption::AES256CTR { iv } => {
cfg_if::cfg_if! {
if #[cfg(feature = "aes-family")] {
aes_shim::aes_256_ctr(data, key, &iv[..])
} else {
unimplemented!("Asuran has not been compiled with AES-CTR Support")
}
}
}
Encryption::ChaCha20 { iv } => {
cfg_if::cfg_if! {
if #[cfg(feature = "chacha20")] {
let mut proper_key: [u8; 32] = [0; 32];
proper_key[..cmp::min(key.len(), 32)]
.clone_from_slice(&key[..cmp::min(key.len(), 32)]);
// Use the zero-padded key so that short keys cannot panic `from_slice`
let key = GenericArray::from_slice(&proper_key[..]);
let iv = GenericArray::from_slice(&iv[..]);
let mut encryptor = ChaCha20::new(&key, &iv);
let mut final_result = data.to_vec();
encryptor.apply_keystream(&mut final_result);
proper_key.zeroize();
final_result
} else {
unimplemented!("Asuran has not been compiled with ChaCha20 support")
}
}
}
}
}
/// Decrypts a bytestring with the given key
///
/// Still requires a key in the event of no encryption, but it does not read this
/// key, so any value can be used. Will pad key with zeros if it is too short.
///
/// # Errors
///
/// Will return `Err` if decryption fails
///
/// # Panics
///
/// Panics if the user selects an encryption method for which support has not been
/// compiled in.
pub fn decrypt(&self, data: &[u8], key: &Key) -> Result<Vec<u8>> {
self.decrypt_bytes(data, key.key())
}
#[allow(unused_variables)]
pub fn decrypt_bytes(&self, data: &[u8], key: &[u8]) -> Result<Vec<u8>> {
match self {
Encryption::NoEncryption => Ok(data.to_vec()),
Encryption::AES256CTR { iv } => {
cfg_if::cfg_if! {
if #[cfg(feature = "aes-family")] {
Ok(aes_shim::aes_256_ctr(data, key, &iv[..]))
} else {
unimplemented!("Asuran has not been compiled with AES support")
}
}
}
Encryption::ChaCha20 { iv } => {
cfg_if::cfg_if! {
if #[cfg(feature = "chacha20")] {
let mut proper_key: [u8; 32] = [0; 32];
proper_key[..cmp::min(key.len(), 32)]
.clone_from_slice(&key[..cmp::min(key.len(), 32)]);
// Use the zero-padded key so that short keys cannot panic `from_slice`
let key = GenericArray::from_slice(&proper_key[..]);
let iv = GenericArray::from_slice(&iv[..]);
let mut decryptor = ChaCha20::new(&key, &iv);
let mut final_result = data.to_vec();
decryptor.apply_keystream(&mut final_result);
proper_key.zeroize();
Ok(final_result)
} else {
unimplemented!("Asuran has not been compiled with ChaCha20 support")
}
}
}
}
}
/// Convenience function to get a new tag from an old one, specifying the
/// same algorithm, but with a new, securely generated IV
pub fn new_iv(self) -> Encryption {
match self {
Encryption::NoEncryption => Encryption::NoEncryption,
Encryption::AES256CTR { .. } => Encryption::new_aes256ctr(),
Encryption::ChaCha20 { .. } => Encryption::new_chacha20(),
}
}
}
#[cfg(test)]
mod tests {
use super::*;
use std::str;
fn test_encryption(mut enc: Encryption) {
let mut key: [u8; 32] = [0; 32];
thread_rng().fill_bytes(&mut key);
let data_string =
"The quick brown fox jumps over the lazy dog. Jackdaws love my big sphinx of quartz.";
let encrypted_string = enc.encrypt_bytes(data_string.as_bytes(), &key);
let decrypted_bytes = enc.decrypt_bytes(&encrypted_string, &key).unwrap();
let decrypted_string = str::from_utf8(&decrypted_bytes).unwrap();
println!("Input string: {}", data_string);
println!("Input bytes: \n{:X?}", data_string.as_bytes());
println!("Encrypted bytes: \n{:X?}", encrypted_string);
println!("Decrypted bytes: \n{:X?}", decrypted_bytes);
println!("Decrypted string: {}", decrypted_string);
assert_eq!(data_string, decrypted_string);
}
#[test]
fn test_chacha20() {
let enc = Encryption::new_chacha20();
test_encryption(enc);
}
#[test]
fn test_aes256ctr() {
let enc = Encryption::new_aes256ctr();
test_encryption(enc);
}
}
| STACK_EDU |
How to access the Wine directory from Dash
Before Unity I could go to the main menu and navigate to the "C:" folder where Wine keeps all the programs. Using Unity and the Dash, how can I get to the Wine directory quickly? Right now I open Nautilus, press CTRL+L and add .wine to the location bar so it looks something like /home/cyrex/.wine, and then click on drive_c there to enter the real directory I want.
Is there a way to do it?
Well, it might look like I was not even paying attention to my own question, but it just came to me to type "drive" in the Dash, and what do you know, look at this:
The Browse C: Drive option appears, and you can even drag it to the launcher. So it not only works with the Dash, but you can also add it to the launcher to speed things up when I want to go inside the C: drive. Love Unity more and more.
Typing in simply "c" points you first to Browse C: Drive.
@Satchit - Can't get any more smaller than that.
Edit /usr/share/applications/nautilus-home.desktop
For syntax see - http://www.omgubuntu.co.uk/2011/04/how-to-add-folder-quicklists-to-the-home-launcher-in-ubuntu-unity/
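For reference, the Unity quicklist syntax that article describes looked roughly like this in the 11.04/11.10 era (a sketch; the action name and Wine path are illustrative, not taken from the linked article):

```
# Inside the existing [Desktop Entry] section of nautilus-home.desktop, add:
X-Ayatana-Desktop-Shortcuts=WineC

# Then append a new group at the end of the file:
[WineC Shortcut Group]
Name=Browse Wine C: Drive
Exec=nautilus /home/cyrex/.wine/drive_c
TargetEnvironment=Unity
```

After saving, the new entry should show up when you right-click the Home folder icon in the launcher.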
I do not mean the launcher, I mean the DASH search tool. So when I look for the wine directory it shows me some kind of access for it. For example if I type wine it shows winecfg, winetricks and wine uninstall. But how to add a way to show "Drive C" or something to it that points to the wine directory directly.
There is no really good way yet. (Bugs have been filed about the lack of a Wine-specific choice in Dash > Files and Folders.)
If you open Dash > click on Files & Folders icon you should be able to search "drive" & have drive_c show up in the results.
To do so you need to do 2 things. First - no folder will show unless you've opened a file directly in that folder.
By default drive_c has no files in it, so go to drive_c, create an empty file & open it in a text editor.
2nd - when searching in 'Find files' (Files & Folders), you need to "Filter results" > Type > Folders
If you do both of the above, then drive_c should show after typing "dr" in the search box.
Not really much of a solution...
An alternate -
Go to drive_c, create a very uniquely named text file & open it in a text editor.
Then you could simply open Dash > find files, search it & D&D the file from the dash on to the home folder icon in unity launcher. A nautilus window will open @ drive_c
To enable this to work correctly in 11.10 you need the Exec= line in nautilus-home.desktop to end in %U as in
Exec=nautilus %U
| STACK_EXCHANGE |
Histogram plot in R
I am looking for some guiding regarding histogram plot.
Lets assume I have this vecotr (called CF)
[,1]
[1,] 2275.351
[2,] 2269.562
[3,] 1925.700
[4,] 1904.195
[5,] 1974.039
I use the following command to plot this vector as a histogram:
hist(CF)
Let us now assume I have 10 000 simulated value estimates for a property. I want to plot those in a histogram (or similar plots) where the x-axis returns the probabilities.
Such a plot would give me the opportunity to state something like: "With 55% probability, the value of the property exceeds $15 million."
Suggestions?
What you probably want is the cumulative distribution function (CDF). It has probability on the y-axis (not x, as you asked), but since this is the standard way to represent the information that you want, it is best to use this curve.
As an example, I produced 10'000 values with a standard normal distribution and then constructed the CDF:
CF <- rnorm(10000)
breaks <- seq(-4,4,0.5)
CDF <- sapply(breaks,function(b) sum(CF<=b)/length(CF))
plot(breaks,CDF,type="l")
From the plot, you can for instance read off that with probability of 50%, a value below zero has been drawn.
If you prefer a bar plot, you can plot with
barplot(CDF,names.arg=breaks)
I don't know your data in detail, so I can not give you more precise code. But basically, you will have to pick a reasonable set of breaks, and then apply the code above.
Thanks.
Do you have a code I can use to get the exact probabilities?
Ex: Probability that CF lies below 1.2?
You can't get the exact probabilities in this way because you are working with empirical data. But you can estimate that probability from your data. This is what is done in the calculation of CDF (which should be ECDF...). So you estimate the probability that CF lies below b by sum(CF<=b)/length(CF).
I agree with @Stibu that you want the CDF. When you are talking about a set of realized data, we refer to this as the empirical cumulative distribution function (ECDF). In R, the basic function call for this is ?ecdf:
CF <- read.table(text="[1,] 2275.351
[2,] 2269.562
[3,] 1925.700
[4,] 1904.195
[5,] 1974.039", header=F)
CF <- as.vector(CF[,-1])
CF # [1] 2275.351 2269.562 1925.700 1904.195 1974.039
windows()
plot(ecdf(CF))
If you are willing to download the fitdistrplus package, there are a lot of fancy versions you can play with:
library(fitdistrplus)
windows()
plotdist(CF)
fdn <- fitdist(CF, "norm")
fdw <- fitdist(CF, "weibull")
summary(fdw)
# Fitting of the distribution ' weibull ' by maximum likelihood
# Parameters :
# estimate Std. Error
# shape 13.59732 4.833605
# scale 2149.24253 74.958140
# Loglikelihood: -32.89089 AIC: 69.78178 BIC: 69.00065
# Correlation matrix:
# shape scale
# shape 1.0000000 0.3328979
# scale 0.3328979 1.0000000
windows()
plot(fdn)
windows()
cdfcomp(list(fdn,fdw), legendtext=c("Normal","Weibull"), lwd=2)
| STACK_EXCHANGE |
package com.esotericsoftware.kryo;
import java.io.FileNotFoundException;
import com.esotericsoftware.kryo.serializers.CompatibleFieldSerializer;
/** @author Nathan Sweet <misc@n4te.com> */
public class CompatibleFieldSerializerTest extends KryoTestCase {
{
supportsCopy = true;
}
public void testCompatibleFieldSerializer () throws FileNotFoundException {
TestClass object1 = new TestClass();
object1.child = new TestClass();
object1.other = new AnotherClass();
object1.other.value = "meow";
kryo.setDefaultSerializer(CompatibleFieldSerializer.class);
kryo.register(TestClass.class);
kryo.register(AnotherClass.class);
roundTrip(100, 100, object1);
}
public void testAddedField () throws FileNotFoundException {
TestClass object1 = new TestClass();
object1.child = new TestClass();
object1.other = new AnotherClass();
object1.other.value = "meow";
CompatibleFieldSerializer serializer = new CompatibleFieldSerializer(kryo, TestClass.class);
serializer.removeField("text");
kryo.register(TestClass.class, serializer);
kryo.register(AnotherClass.class, new CompatibleFieldSerializer(kryo, AnotherClass.class));
roundTrip(74, 74, object1);
kryo.register(TestClass.class, new CompatibleFieldSerializer(kryo, TestClass.class));
Object object2 = kryo.readClassAndObject(input);
assertEquals(object1, object2);
}
public void testRemovedField () throws FileNotFoundException {
TestClass object1 = new TestClass();
object1.child = new TestClass();
kryo.register(TestClass.class, new CompatibleFieldSerializer(kryo, TestClass.class));
roundTrip(88, 88, object1);
CompatibleFieldSerializer serializer = new CompatibleFieldSerializer(kryo, TestClass.class);
serializer.removeField("text");
kryo.register(TestClass.class, serializer);
Object object2 = kryo.readClassAndObject(input);
assertEquals(object1, object2);
}
static public class TestClass {
public String text = "something";
public int moo = 120;
public long moo2 = 1234120;
public TestClass child;
public int zzz = 123;
public AnotherClass other;
public boolean equals (Object obj) {
if (this == obj) return true;
if (obj == null) return false;
if (getClass() != obj.getClass()) return false;
TestClass other = (TestClass)obj;
if (child == null) {
if (other.child != null) return false;
} else if (!child.equals(other.child)) return false;
if (moo != other.moo) return false;
if (moo2 != other.moo2) return false;
if (text == null) {
if (other.text != null) return false;
} else if (!text.equals(other.text)) return false;
if (zzz != other.zzz) return false;
return true;
}
}
static public class AnotherClass {
String value;
}
}
| STACK_EDU |
GDB lets you run and debug multiple programs in a single session. In addition, GDB on some systems may let you run several programs simultaneously (otherwise you have to exit from one before starting another). On some systems GDB may even let you debug several programs simultaneously on different remote systems. In the most general case, you can have multiple threads of execution in each of multiple processes, launched from multiple executables, running on different machines.
GDB represents the state of each program execution with an object called an inferior. An inferior typically corresponds to a process, but is more general and applies also to targets that do not have processes. Inferiors may be created before a process runs, and may be retained after a process exits. Inferiors have unique identifiers that are different from process ids. Usually each inferior will also have its own distinct address space, although some embedded targets may have several inferiors running in different parts of a single address space. Each inferior may in turn have multiple threads running in it.
info inferiors and info connections, which will be introduced below, accept a space-separated ID list as their argument specifying one or more elements on which to operate. A list element can be either a single non-negative number, like ‘5’, or an ascending range of such numbers, like ‘5-7’. A list can consist of any combination of such elements; even duplicates or overlapping ranges are valid, e.g. ‘1 4-6 5 4-4’ or ‘1 2 4-7’.
To find out what inferiors exist at any moment, use ‘info inferiors’:

info inferiors [ id… ]
Print a list of all inferiors currently being managed by GDB. By default all inferiors are printed, but the ID list id… can be used to limit the display to just the requested inferiors.
GDB displays for each inferior (in this order): the inferior number assigned by GDB, the description of the inferior (such as its process id), the connection the inferior is bound to, and the name of the executable the inferior is running.

An asterisk ‘*’ preceding the GDB inferior number indicates the current inferior.

For example,

(gdb) info inferiors
  Num  Description   Connection                       Executable
* 1    process 3401  1 (native)                       goodbye
  2    process 2307  2 (extended-remote host:10000)   hello
To get information about the current inferior, use the ‘inferior’ command with no argument:
Shows information about the current inferior.
(gdb) inferior
[Current inferior is 1 [process 3401] (helloworld)]
To find out what open target connections exist at any moment, use ‘info connections’:

info connections [ id… ]
Print a list of all open target connections currently being managed by GDB. By default all connections are printed, but the ID list id… can be used to limit the display to just the requested connections.
GDB displays for each connection (in this order): the connection number assigned by GDB, the destination and protocol of the connection, and a description of it.

An asterisk ‘*’ preceding the connection number indicates the connection of the current inferior.

For example,

(gdb) info connections
  Num  What                         Description
* 1    extended-remote host:10000   Extended remote serial target in gdb-specific protocol
  2    native                       Native process
  3    core                         Local core dump file
To switch focus between inferiors, use the inferior command:
Make inferior number infno the current inferior. The argument infno is the inferior number assigned by GDB, as shown in the first field of the ‘info inferiors’ display.
The debugger convenience variable ‘$_inferior’ contains the number of the current inferior. You may find this useful in writing breakpoint conditional expressions, command scripts, and so forth. See Convenience Variables, for general information on convenience variables.
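For example, $_inferior can be used in a breakpoint condition so that the breakpoint only triggers in one particular inferior (the function name and inferior number here are illustrative):

```
(gdb) break main if $_inferior == 2
```

With this condition, hitting main in any other inferior simply resumes execution.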
You can get multiple executables into a debugging session via the add-inferior and clone-inferior commands. On some systems GDB can add inferiors to the debug session automatically by following calls to fork and exec. To remove inferiors from the debugging session, use the remove-inferiors command.
add-inferior [ -copies n ] [ -exec executable ] [ -no-connection ]
Adds n inferiors to be run using executable as the
executable; n defaults to 1. If no executable is specified,
the inferiors begin empty, with no program. You can still assign or
change the program assigned to the inferior at any time by using the
file command with the executable name as its argument.
By default, the new inferior begins connected to the same target connection as the current inferior. For example, if the current inferior was connected to gdbserver, then the new inferior will be connected to the same gdbserver instance. The ‘-no-connection’ option starts the new inferior with no connection yet. You can then, for example, use the target remote command to connect to some other gdbserver instance, use run to spawn a local program, etc.
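As an illustrative session (the executable name hello is hypothetical, and the exact messages vary between GDB versions):

```
(gdb) add-inferior -exec hello
Added inferior 2
(gdb) inferior 2
(gdb) run
```

The new inferior starts on the same connection as the current one unless ‘-no-connection’ was given.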
clone-inferior [ -copies n ] [ infno ]
Adds n inferiors ready to execute the same program as inferior
infno; n defaults to 1, and infno defaults to the
number of the current inferior. This command copies the values of the
args, inferior-tty and cwd properties from the
current inferior to the new one. It also propagates changes the user
made to environment variables using the
set environment and
unset environment commands. This is a convenient command
when you want to run another instance of the inferior you are debugging.
(gdb) info inferiors
  Num  Description    Connection  Executable
* 1    process 29964  1 (native)  helloworld
(gdb) clone-inferior
Added inferior 2.
1 inferiors added.
(gdb) info inferiors
  Num  Description    Connection  Executable
* 1    process 29964  1 (native)  helloworld
  2    <null>         1 (native)  helloworld
You can now simply switch focus to inferior 2 and run it.
remove-inferiors infno…
Removes the inferior or inferiors infno…. It is not
possible to remove an inferior that is running with this command. For
those, use the
detach command first.
To quit debugging one of the running inferiors that is not the current
inferior, you can either detach from it by using the
detach inferior command (allowing it to run independently), or kill it using the kill inferiors command:
detach inferior infno…
Detach from the inferior or inferiors identified by GDB
inferior number(s) infno…. Note that the inferior’s entry
still stays on the list of inferiors shown by info inferiors, but its Description will show ‘<null>’.
kill inferiors infno…
Kill the inferior or inferiors identified by GDB inferior
number(s) infno…. Note that the inferior’s entry still
stays on the list of inferiors shown by
info inferiors, but its
Description will show ‘<null>’.
After the successful completion of a command such as detach inferiors or kill inferiors, or after
a normal process exit, the inferior is still valid and listed with
info inferiors, ready to be restarted.
To be notified when inferiors are started or exit under GDB's control, use set print inferior-events:
set print inferior-events
set print inferior-events on
set print inferior-events off
The set print inferior-events command allows you to enable or
disable printing of messages when GDB notices that new
inferiors have started or that inferiors have exited or have been
detached. By default, these messages will be printed.
show print inferior-events
Show whether messages will be printed when GDB detects that inferiors have started, exited or have been detached.
Many commands will work the same with multiple programs as with a
single program: e.g.,
print myglobal will simply display the value of
myglobal in the current inferior.
Occasionally, when debugging GDB itself, it may be useful to
get more info about the relationship of inferiors, programs, and address
spaces in a debug session. You can do that with the
maint info program-spaces command.
maint info program-spaces
Print a list of all program spaces currently being managed by GDB.
GDB displays for each program space (in this order):
An asterisk ‘*’ preceding the GDB program space number indicates the current program space.
In addition, below each program space line, GDB prints extra information that isn’t suitable to display in tabular form. For example, the list of inferiors bound to the program space.
(gdb) maint info program-spaces
  Id   Executable  Core File
* 1    hello
  2    goodbye
        Bound inferiors: ID 1 (process 21561)
Here we can see that no inferior is running the program hello, while process 21561 is running the program goodbye. On some targets, it is possible that multiple inferiors are bound to the
same program space. The most common example is that of debugging both
the parent and child processes of a
vfork call. For example,
(gdb) maint info program-spaces
  Id   Executable  Core File
* 1    vfork-test
        Bound inferiors: ID 2 (process 18050), ID 1 (process 18045)
Here, both inferior 2 and inferior 1 are running in the same program
space as a result of inferior 1 having executed a vfork call.
How did Jost Bürgi's logarithms work?
According to what I have gathered from the internet, Jost Bürgi came up with the idea of logarithms (which he called Progress Tabulen) after noticing the correspondence between arithmetic and geometric sequences. We know this as the product rule for exponents: if $n \longleftrightarrow a^n$ and $m \longleftrightarrow a^m,$ then $$n+m \longleftrightarrow a^n\times a^m.$$
From what I understood, he first computed $r^n$ for $r=1.0001.$ The first 15 rows of the table look as below, but the full table should contain $\left\lceil{\log_{1.0001}(10)}\right\rceil = 23028$ rows in total.
\begin{array}{|c|c|}
\hline
n & 1.0001^n \\
\hline
0 & 1.0000000000 \\
1 & 1.0001000000 \\
2 & 1.0002000100 \\
3 & 1.0003000300 \\
4 & 1.0004000600 \\
5 & 1.0005001000 \\
6 & 1.0006001500 \\
7 & 1.0007002100 \\
8 & 1.0008002801 \\
9 & 1.0009003601 \\
10 & 1.0010004501 \\
11 & 1.0011005502 \\
12 & 1.0012006602 \\
13 & 1.0013007803 \\
14 & 1.0014009104 \\
15 & 1.0015010505 \\
\hline
\end{array}
But there should be a few more steps to complete his construction, because this table (with 23028 entries) only allows us to multiply numbers as long as their product does not exceed 10, i.e. as long as the sum of the corresponding logarithms does not exceed 23028. Can somebody who has studied this summarize for me how he overcame this challenge?
D. Roegel, "Bürgi's Progress Tabulen (1620): logarithmic tables without logarithms," Research Report Inria-00543936, 2010.
J. Waldvogel, "Jost Bürgi and the discovery of the logarithms," Elem. Math. 69 (2014), 89-117.
K. M. Clark & C. Montelle, "Priority, Parallel Discovery, and Pre-eminence: Napier, Bürgi and the Early History of the Logarithm Relation," Revue d'histoire des mathématiques 18 (2012), no. 2, 223-270.
Products larger than 10 can be handled by reducing the exponent of 1.0001 by multiples of $23027$, which removes factors of $10$ because $1.0001^{23027}=9.99999780$ is extremely close to 10.
For example:
$3 \times6=1.0001^{10987} \times 1.0001^{17918}=1.0001^{28905}=1.0001^{23027} \times 1.0001^{5878}$.
$3 \times 6=1.0001^{23027} \times 1.0001^{5878}=10.00 \times 1.800=18$.
If you look at the original tables which are quite hard to read or the reproduction of the tables by Denis Roegel, you'll see that Bürgi's tables have the "red numbers" (the logarithms) running from 0 to 230270 in steps of 10 and the "black numbers" running from 100000000 (100 million) to 999999780 (approximately one billion.) The red numbers are scaled by a factor of 10 while the black numbers are scaled by a factor of 100 million.
Repeating our calculation of $3 \times 6$ using Bürgi's table, we see that the red number corresponding to 300 million is 109870 and the red number corresponding to 600 million is 179180. Adding these two red numbers gives a sum of 289050. Reducing this sum by 230270 leaves 58780. This red number of 58780 corresponds to the black number of 179997110, which is $1.8 \times 10^{8}$.
This is essentially the same thing that we do with logarithms to the base 10, except that in that case, we reduce the exponent of $10$ by integers. e.g. $3 \times 6=10^{0.47712} \times 10^{0.77815}=10^{1.2553}=10^{1} \times 10^{0.2553}=10 \times 1.8=18.$
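The reduction trick above can be checked numerically. Here is a small Python sketch; the exponents are recomputed and match the red numbers 10987 and 17918 from the worked example:

```python
import math

r = 1.0001
TEN = 23027  # subtracting 23027 from an exponent removes (almost exactly) a factor of 10

print(r ** TEN)  # roughly 9.9999978, extremely close to 10

# Worked example from the answer: 3 x 6 via "red numbers" (exponents of r)
n3 = round(math.log(3) / math.log(r))  # 10987, since r**10987 is close to 3
n6 = round(math.log(6) / math.log(r))  # 17918, since r**17918 is close to 6
total = n3 + n6        # 28905
reduced = total - TEN  # 5878
product = 10 * r ** reduced
print(product)         # close to 18
```

The small error (about 18.000 vs. 17.9997) comes from $1.0001^{23027}$ not being exactly 10.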
According to my calculations, it seems like $1.0001^{23027}$ is much closer to 10 compared to $1.0001^{23028}$. Do you know the actual number that Bürgi used?
I've updated the answer to correct the 23027 vs. 23028 error and to explain how the calculation is done with Burgi's original table.
Thank you for the detailed answer. I wish I could upvote again.
Yes, you can use the clip art or other illustrations on your own pages, except for the things that are noted as being copyrighted. (I have tried to preserve all the copyright and source information I have about the materials included in my web page.) However, I think it is very tacky for people to plaster all of my icons or other original artwork on their own pages without giving proper credit. If you have questions about whether you may use some particular image, please be sure to tell me the exact URL so that I know which one you are talking about. Note that many of the photos on my pages were being passed around the net years ago and their copyright status is unknown to me. As a practical matter you are unlikely to get into legal trouble using these images for non-profit purposes, as I have done.
Yes, you can include a few screen shots or other sample illustrations in published reviews of my site. If you do review the Froggy Page, I would appreciate getting a copy of your article, by the way, or at least being told where the review appears.
No, you may not mirror my frog pages, distribute them on CD-ROM, or embed them in your own web pages (as by displaying them in a subframe with your own menus and advertising banners, or by filtering the content of the pages themselves).
No, I do not accept advertising on this site. Don't waste time asking.
No, I don't accept links to other sites that aren't explicitly froggy in nature. I realize that the Froggy Page has been listed in many directories of best sites for children and several "best of the net" collections, but my policy has been not to provide reciprocal links to any of these sites.
I also can't offer you any advice on how to stop frogs from jumping into your swimming pool or croaking noisily outside your bedroom window at night. Actually, my advice would be to stop worrying and just be happy that you have frogs living near your house at all!
If you need help trying to identify a frog or want more information about a particular species than you can find from my web pages, please go to the library and look at a field guide instead of asking me. I am not a "frog scientist" in real life. I have a job that has nothing to do with frogs, and I simply don't have the time to do your research for you.
I have a large backlog of mail with requests to add links or other materials to my site. I can't promise to do anything with such requests in the future, since my time for maintaining this site is very limited.
Please do not send me pictures or other multimedia files by e-mail without asking me if I want them first. Once somebody blew away my mailbox while I was out of town by sending me several megabytes of encoded binaries that I didn't have the software to decode or view anyway.
My web pages are hosted by CWIhosting.com. I recommend them; they have especially good plans for high-bandwidth web sites.
I put these pages together for my own entertainment. While other people may also find them entertaining, educational, or useful, that is not my primary purpose or goal. I don't really care if you find things that offend you by following the links from my pages. Parents, if you are concerned about what your child is viewing on the Internet, I think it is your responsibility to supervise them.
My pages contain some links to commercial sites. I do this only because I think the material at those sites is of interest to frog fans, without implying any particular endorsement of the businesses or products.
Hey there, we are meeting again today! As I told you in the previous blog, we will now discuss the rest of the topics, so let's jump into it.
In the previous blog we discussed the most well-known Git commands; now we are curious about the lesser-known ones.
Lesser-known Git commands:
One of the most delightful git commands is git stash. It keeps all your changes both to tracked files and in your working tree, stashing them away so that you can use them later. Git stash is temporary storage. With it, you can continue working where you left off whenever you are ready. Hence, you will have a clean working tree and can start working on something new. Also, note that git stash will never touch your untracked and ignored files.
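A minimal stash round-trip looks like this (a sketch: the repo is created fresh in a temp directory, and the file name and identity are just for the demo):

```shell
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email "demo@example.com"   # hypothetical identity for the demo
git config user.name "Demo"
echo "v1" > notes.txt
git add notes.txt && git commit -qm "initial commit"

echo "v2" > notes.txt       # an uncommitted change to a tracked file
git stash                   # working tree is clean again (notes.txt is back to v1)
git stash pop               # the stashed change is restored (notes.txt is v2 again)
```

After the stash, you are free to work on something else; `git stash pop` brings the change back when you are ready.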
The git rebase command is used for moving or combining a range of commits onto a new base commit. In other words, it can change the base of the present branch from one commit to another, making the branch look like it was created from a different commit. Note that even though the branch looks identical, it is made up of entirely new commits.
This command is primarily used for keeping a linear project history.
The git diff command is used for comparing changes in Git. It takes two input data sets and outputs the modifications between them. When you execute this command, it runs a diff function on the data sources of Git. You can use it in combination with the git status and git log commands.
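For example, the two most common comparisons are working tree vs. index and index vs. last commit (illustrative repo and file names; the repo is created in a temp directory):

```shell
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email "demo@example.com" && git config user.name "Demo"
printf 'one\ntwo\n' > list.txt
git add list.txt && git commit -qm "add list"

printf 'one\nthree\n' > list.txt
git diff             # unstaged changes: working tree vs. the index
git add list.txt
git diff --staged    # staged changes: index vs. the last commit
```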
Git reset is another powerful command which allows undoing your changes easily. This command is generally used for returning the entire working tree to the last committed state. It will discard a private branch commits or throw away the changes that have not been committed. The git reset command will also help you to unstage a file in Git.
Generally, in Git every command allows undoing some changes, but only git reset and git checkout can be used for manipulating either individual files or commits.
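For instance, unstaging a file with git reset leaves its content untouched (a sketch with made-up names; the repo is created fresh in a temp directory):

```shell
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email "demo@example.com" && git config user.name "Demo"
echo "hello" > app.txt
git add app.txt && git commit -qm "initial commit"

echo "scratch" > scratch.txt
git add scratch.txt
git status --short        # the new file is staged ('A  scratch.txt')
git reset scratch.txt     # unstage it; the file itself is untouched
git status --short        # the file is now untracked ('?? scratch.txt')
```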
Git blame is simply a great tracking command. It is aimed at showing the author information of every line of your project’s latest modified file. Hence, you can use it to find the author’s name and email address, or the commit hash of the last modified source file.
The next rarely used but super-useful command is git-am. You can use it for applying a series of patches from a mailbox. It allows splitting mail messages in a mailbox onto commit log message, authorship information, and patches. Git-am applies all of them to the current branch.
Git cherry-pick is a powerful yet little-known command. It represents the act of picking a commit from one branch and applying it to another. It belongs with the powerful Git tools used for undoing changes. Let's say you have accidentally made a commit on the wrong branch. This command lets you switch to the desired branch and cherry-pick your commit to the place it should belong.
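Here is a sketch of rescuing a commit made on the wrong branch (branch and file names are made up for the demo):

```shell
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email "demo@example.com" && git config user.name "Demo"
echo "base" > base.txt
git add base.txt && git commit -qm "base"
main=$(git rev-parse --abbrev-ref HEAD)   # default branch name varies by git config

git checkout -qb wrong-branch             # oops: the fix lands here by mistake
echo "fix" > fix.txt
git add fix.txt && git commit -qm "the fix"
fix_commit=$(git rev-parse HEAD)

git checkout -q "$main"
git cherry-pick "$fix_commit"             # replay just that one commit onto main
```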
What is git branching
Branching is a feature available in most modern version control systems. Instead of copying files from directory to directory, Git stores a branch as a reference to a commit. In this sense, a branch represents the tip of a series of commits; it's not a container for commits. It is like a parent-child relation: the child branch inherits from the parent, and when the work is done it is merged back into the parent to create a unified branch.
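The branch-and-merge flow described above, sketched end to end (names are illustrative; the repo is created in a temp directory):

```shell
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email "demo@example.com" && git config user.name "Demo"
echo "parent" > work.txt
git add work.txt && git commit -qm "parent commit"
parent=$(git rev-parse --abbrev-ref HEAD)

git checkout -qb child        # the child branch starts from the parent's tip
echo "child work" >> work.txt
git add work.txt && git commit -qm "child work"

git checkout -q "$parent"
git merge -q child            # when the work is done, merge back into the parent
```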
So the last part is commit messages
What is a commit message?
The commit command is used to save changes to a local repository after staging in Git. However, before you can save changes in Git, you have to tell Git which changes you want to save, as you might have made tons of edits. To keep a record of the work you have done, we write commit messages.
What do good commit messages look like? A common convention is to prefix them with a type:
1.feat: The new feature you're adding to a particular application
2.fix: A bug fix
3.style: Feature and updates related to styling
4.refactor: Refactoring a specific section of the codebase
5.test: Everything related to testing
6.docs: Everything related to documentation
7.chore: Regular code maintenance. (You can also use emojis to represent commit types.)
These are some common commit message types. Whenever we commit changes, it is best practice to write a commit message that helps us and others understand what was done.
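For example, committing with one of the type prefixes above (the feature name and file are made up; the repo is created in a temp directory):

```shell
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email "demo@example.com" && git config user.name "Demo"
echo "reset logic" > reset.txt
git add reset.txt
git commit -qm "feat: add password reset flow"
git log --oneline -1          # shows the typed commit message
```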
That's it for today's blog. I hope you enjoyed it and got a grasp of some good knowledge. Let's meet in another blog with more information.
Prove value and get stakeholder buy-in with one of our asset management applications. We customize it to your business and it includes real devices, all in a fixed-cost package.
Leverege packages hands-on support with our highly customizable asset management applications in three distinct steps so you can prove value, mitigate risk, and accelerate your digital transformation.
A Showcase gives you high value with low risk, enabling you to showcase the value of IoT to prove value and get stakeholder buy-in before investing additional time and resources.
IoT can be incredibly valuable to a business if done right, but it can also be a big resource commitment. Processes will need to be updated, employees will need to be trained, systems will need to be integrated with the IoT application, and more. Because IoT is a big commitment and because it touches many departments within an enterprise, executive buy-in is critical for success (19% of IoT projects fail because of lack of leadership support and attention). So how do you show the value of IoT to your executives? How do you help them to really “get it”, given the complexity and newness of IoT?
A quarter of IoT projects fail due to lack of clear strategy, but when you’re just starting your IoT journey, you don’t know what you don’t know so formulating a clear IoT strategy is difficult. The best way to quickly learn about IoT and clarify your IoT strategy is by actually doing. However, running a proof-of-concept isn’t free. So how do you rapidly achieve maximum value and learnings from a proof-of-concept while capping your downside and avoiding cost overruns?
If you were evaluating a pure SaaS product, like a productivity application, you could get a free trial and test the capabilities yourself. IoT applications touch the physical world, making it impossible to truly test via just the web. IoT vendors will make bold claims, but are these claims real or just marketing fluff? What’s the true accuracy of the tracking device? Does “real-time” mean updates every 5 seconds, 5 minutes, or 5 hours? How can you test and find out for yourself?
There is no one-size-fits-all solution in IoT. To be successful, you need to choose the right combination of hardware devices, network connectivity, cloud infrastructure, and application software (24% of IoT projects fail due to lack of necessary technology). Network connectivity options alone can be overwhelming with LoRa, NB-IoT, LTE-M, 5G, WiFi, Bluetooth, and ZigBee representing just a subset of viable choices. So how do you compare and choose the right IoT technologies at each layer of the tech stack?
A Showcase is one of our asset management applications customized to your business and chosen use case, with devices, connectivity, and everything else you need all wrapped together in a fixed cost package.
You get a dedicated IoT Success Manager, who will work closely with you to understand your business needs, customize the application, and provide training and operational support.
Your IoT Success Manager will customize the application to your needs by creating user roles & permissions, updating layout & branding, and configuring dashboards & visualizations.
Leverege has a catalogue of pre-integrated devices optimized for different use cases. A total of 5-10 devices will be included and shipped to you as part of the Showcase.
If you want to develop and own IoT applications, to implement internally or sell externally, don't start from scratch. Use the Leverege IoT Stack.
Automotive, manufacturing, healthcare, agriculture, supply chain/logistics, marine; whichever industry you're in, Leverege's customized asset management applications can be easily adapted to meet your business needs. IoT Changes Everything™.
[$250] Approver receives whisper when report is auto-approved on their behalf
If you haven’t already, check out our contributing guidelines for onboarding and email <EMAIL_ADDRESS> to request to join our Slack channel!
Version Number: N/A
Reproducible in staging?: Y
Reproducible in production?: Y
If this was caught on HybridApp, is this reproducible on New Expensify Standalone?: N/A
If this was caught during regression testing, add the test name, ID and link from TestRail: N/A
Logs: https://stackoverflow.com/c/expensify/questions/4856
Expensify/Expensify Issue URL:
Issue reported by: @garrettmknight
Slack conversation (hyperlinked to channel name): None, yet
Action Performed:
Create a new workspace in NewDot
Invite a submitter to the workspace
Enable Workflows
In Workflows, enable Approvals
In Workflows, enable Delay Submission
Enable Rules
Enable auto-approval
Set 'Random report audit' to 0% to ensure auto-approval
As the submitter, create and submit an expense
Expected Result:
The expense that was submitted should get auto-approved without notifying the approver.
Actual Result:
The expense was auto-approved, but the approver was still notified that they needed to approve it, via the whisper that's intended to be sent only when a report is approved and forwarded.
Workaround:
Yeah, but it's annoying.
Platforms:
All
Screenshots/Videos
Add any screenshot/video evidence
View all open jobs on GitHub
Upwork Automation - Do Not Edit
Upwork Job URL: https://www.upwork.com/jobs/~021865064479090764280
Upwork Job ID:<PHONE_NUMBER>090764280
Last Price Increase: 2024-12-06
Issue OwnerCurrent Issue Owner: @tgolen
@garrettmknight This should be internal 👍
Yeah, I also agreed that this should be internal and should be fixed from backend.
@zanyrenney Can you help add a Hot Pick tag for this issue? Thanks
> Yeah, I also agreed that this should be internal and should be fixed from backend.
>
> @zanyrenney Can you help add a Hot Pick tag for this issue? Thanks
@zanyrenney Friendly bump
@Beamanator @cristipaval I wanted to get your thoughts on this one, since I saw you were recently discussing this exact whisper in https://github.com/Expensify/Auth/pull/13193/files#r1846155170.
The problem for this particular bug report is that:
The Auth command for SubmitReport is called first (here) which sends the whisper
The Auth command for ApproveReport is called second (here) which auto-approves the report
Thus, there is no way for SubmitReport to know if the report has been auto-approved or not and the whisper will always get sent.
I cannot think of a very good way of fixing this, and I'm wondering if you have any ideas. About the only thing I can think of is to add some kind of check before calling SubmitReport to see if the report will get auto-approved, and if so, then pass a parameter down to SubmitReport to skip sending the whisper. This however feels like a chicken-and-egg situation because you can't know if it can be auto-approved until it is submitted.
Yeahhhh, so I see your point and was thinking that could potentially work (check if the report will get auto-approved in SubmitReport). Another solution, which is definitely harder, would be to make SubmitReport 1:1:1. Then, in Auth, we would know if the report is about to get auto-approved; if not, we send the whisper to the submitted-to approver, and if so, we send the whisper to the next approver.
Yeah, I briefly talked about this with Cristi last night and we had a few thoughts.
We agreed 1:1:1 would be the best option (requires a lot of work)
We could send the whisper from PHP (but this goes in the opposite direction of 1:1:1)
We could try to make the ApproveReport command find and delete the whisper that was created during SubmitReport (too much of a hack)
Nice, yeah agreed on all points haha. I wonder if anyone already started working on making SubmitReport 1:1:1?
OK, yes, it is being worked on in https://github.com/Expensify/Expensify/issues/451223. I am going to place this on HOLD until that is done, then we can come back and work on this.
If it is a problem in the meantime, I think the only thing we should consider is temporarily removing the whisper.
using System;
using System.Collections.Generic;
using System.Linq;
using Trippit.Models.ApiModels;
namespace Trippit.Models
{
public class TransitStopDetails
{
public string GtfsId { get; set; }
public string Name { get; set; }
public DateTime ForDate { get; set; }
public List<TransitLineWithoutStops> LinesThroughStop { get; set; }
public List<TransitStopTime> Stoptimes { get; set; }
public TransitStopDetails(ApiStop stop, DateTime forDate)
{
GtfsId = stop.GtfsId;
Name = stop.Name;
ForDate = forDate;
// Consolidate all the duplicates we get from the network call.
// TODO: Investigate why we get dupes at all. Is it our fault, or the server's fault?
LinesThroughStop = stop.StoptimesForServiceDate
.Where(x => x.Stoptimes.Any())
.GroupBy(x => x.Pattern.Route.GtfsId)
.Select(x => {
ApiStoptimesInPattern stoptimes = x.First();
return new TransitLineWithoutStops
{
GtfsId = stoptimes.Pattern.Route.GtfsId,
LongName = stoptimes.Pattern.Route.LongName,
ShortName = stoptimes.Pattern.Route.ShortName,
TransitMode = stoptimes.Pattern.Route.Mode
};
})
.ToList();
Stoptimes = stop.StoptimesForServiceDate.SelectMany(
x => x.Stoptimes
.Where(z => !String.IsNullOrWhiteSpace(z.StopHeadsign))
.Select(y => new TransitStopTime
{
IsRealtime = y.Realtime.Value,
RealtimeArrival = (uint)y.RealtimeArrival.Value,
RealtimeDeparture = (uint)y.RealtimeDeparture.Value,
ScheduledArrival = (uint)y.ScheduledArrival.Value,
ScheduledDeparture = (uint)y.ScheduledDeparture.Value,
StopHeadsign = y.StopHeadsign,
ViaLineShortName = x.Pattern.Route.ShortName,
ViaLineLongName = x.Pattern.Route.LongName,
ViaMode = x.Pattern.Route.Mode
}))
.OrderBy(x => x.ScheduledDeparture)
.ToList();
}
}
}
ASP.net Core Web API - correct swagger annotations
I am writing a Web API and have defined a controller with various GET, POST methods etc. I am using Swagger Open API for my documentation and want to understand the correct way to annotate. Here's an example of a controller method I have:
/// <summary>Download a file based on its Id.</summary>
/// <param name="id">Identity of file to download.</param>
/// <returns><see cref="MyFile" /> file content found.</returns>
[HttpGet("download/{id}")]
[ProducesResponseType(StatusCodes.Status200OK)]
[ProducesResponseType(StatusCodes.Status404NotFound)]
[SwaggerResponse(200, "Myfile content", typeof(MyFile))]
[SwaggerResponse(404, "Could not find file", typeof(MyFile))]
public async Task<IActionResult> DownloadAsync(int id)
{
const string mimeType = "application/octet-stream";
var myFile = await _dbContext.MyFiles.FindAsync(id);
// If we cannot find the mapping, return 404.
if (myFile.IsNullOrDefault())
{
return NotFound();
}
// Download using file stream.
var downloadStream = await _blobStorage.DownloadBlob(myFile.FileLocation);
return new FileStreamResult(downloadStream, mimeType) { FileDownloadName = myFile.FileName };
}
As you can see, I'm using both ProducesResponseType and SwaggerResponse to describe the download method. I'm a bit confused as to the correct attribute to use - swagger response or produces response type? Should I use both? Why would I favor one over the other?
Thanks for any pointers in advance! :)
Using both ProducesResponseType and SwaggerResponse is not necessary.
It also depends on your action declaration; for example, your action returns Task<IActionResult>.
Knowing only that (without any additional attributes), the return type of that action might be anything.
So by adding: [SwaggerResponse(200, "Myfile content", typeof(MyFile))] attribute to that method, the type MyFile is known to be returned from that action and can be documented.
On the other hand you wouldn't need that attribute if you specified that in return type as follows:
[HttpGet("download/{id}")]
[ProducesResponseType(StatusCodes.Status200OK)]
[ProducesResponseType(StatusCodes.Status404NotFound)]
public async Task<ActionResult<MyFile>> DownloadAsync(int id)
I got rid of 2 SwaggerResponse attributes and the documented return type of this action will still be the same.
I would say the less annotation the better, but of course it depends on your needs ;p
In my case I am already using the "SwaggerOperation" annotation to add a summary and description to the actions (and highly recommend that), which is why I went on using SwaggerResponse instead of ProducesResponseType anyway. In the end it really should not matter; you can see from the source code that SwaggerResponseAttribute is derived from ProducesResponseTypeAttribute.
Attribute maps
Following the discussion in https://github.com/google/incremental-dom/issues/114, here's iDOM reworked to accept a map of attribute name/values.
Fixes https://github.com/google/incremental-dom/issues/115, https://github.com/google/incremental-dom/issues/23. It also opens up the possibility of exposing updateAttributes publicly, allowing Glimmer-like rendering without patching everything in the containing element.
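The attribute-map alignment being discussed can be sketched independently of iDOM itself: diff a previous attribute map against a new one and touch only what changed. This is a hypothetical illustration of the idea, not the actual incremental-dom implementation:

```javascript
// Apply only the differences between two attribute maps to an element-like
// object. Returns the new map so the caller can keep it for the next pass.
function updateAttributes(el, prevAttrs, newAttrs) {
  for (const name in prevAttrs) {
    if (!(name in newAttrs)) el.removeAttribute(name);
  }
  for (const name in newAttrs) {
    if (prevAttrs[name] !== newAttrs[name]) el.setAttribute(name, newAttrs[name]);
  }
  return newAttrs;
}

// Stand-in "element" that records calls, so the sketch runs without a DOM.
const calls = [];
const el = {
  setAttribute: (n, v) => calls.push(['set', n, v]),
  removeAttribute: (n) => calls.push(['remove', n]),
};

let attrs = updateAttributes(el, {}, { id: 'main', class: 'panel' });
attrs = updateAttributes(el, attrs, { class: 'panel wide' }); // id removed, class changed
console.log(calls);
```

The second pass performs one removal and one set, which is the work-skipping behavior the map-based API aims for.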
I'd love some outside performance testing. I'm showing an improvement over master, even with the object allocations.
I created an issue, #118, to create some performance tests. The reason I have not done so far is that creating truly accurate tests for this is really hard.
In any case, for this specific CL, I'm seeing a ~30% drop in perf. I can at least add those tests somewhere but they should be taken with a giant grain of salt.
Bumping this again. I'm only seeing marginal drops in performance:
| Browser | Test      | Implementation | Result  |
|---------|-----------|----------------|---------|
| Chrome  | Selection | Explore        | 0.258ms |
| Chrome  | Selection | Maps           | 0.382ms |
| Chrome  | Creation  | Explore        | 1.351ms |
| Chrome  | Creation  | Maps           | 1.575ms |
| FF      | Selection | Explore        | 0.303ms |
| FF      | Selection | Maps           | 0.466ms |
| FF      | Creation  | Explore        | 4.447ms |
| FF      | Creation  | Maps           | 5.107ms |
For selection on Chrome, it looks like it takes 50% more time for maps vs explore. I don't think that is really marginal.
I think by splitting out core, we can allow people to make that determination for themselves though.
One thing to note is that in order to get a consistent value for the time, I filter outliers, which will filter out long runs due to GC. Capturing that sort of information in a test is pretty hard to do. There are other spikes that should be ignored due to getting swapped off the CPU for another process to run, which might occur because any test needs to hog the CPU in order to get enough samples, so it is bound to get bumped off at some point.
> For selection on Chrome, it looks like it takes 50% more time for maps vs explore. I don't think that is really marginal.
50% yes, but 0.124ms is marginal. We're selecting from 200 rows, meaning we're spending ~0.00062ms longer per row. Even at the slower speed, we could be updating over 8000 rows and still hitting 60fps. I think the simplified API more than makes up for that.
> I think by splitting out core, we can allow people to make that determination for themselves though.
Core being DOM alignment vs DOM+attribute alignment?
> One thing to note is that in order to get a consistent value for the time, I filter outliers, which will filter out long runs due to GC.
I run 4 or 5 times each, taking my perceived average. I'm not actually averaging them, but taking the number I see most often.
> 50% yes, but 0.124ms is marginal. We're selecting from 200 rows, meaning we're spending ~0.00062ms longer per row. Even at the slower speed, we could be updating over 8000 rows and still hitting 60fps. I think the simplified API more than makes up for that.
It might be a small difference on desktop, but you should also consider mobile. From my experience, JS execution on mobile takes roughly 8-10x as long as on a laptop. And that is performance on a relatively modern flagship phone; if you consider the $30 - $50 smartphones used in emerging markets, you need to account for an even larger amount of time.
So if you have 16ms for a frame and you need to set aside ~4ms for browser layout, you effectively have only 12ms. If you assume you need something like 20x as long on a low-end phone as you see on desktop, that means you only have 0.6ms total to perform a diff. And if the application itself wants to perform some logic, the amount of time the library can use is even smaller.
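The arithmetic in the budget estimate above can be laid out explicitly. A quick sketch, where the 4ms layout reserve and 20x low-end slowdown factor are the commenter's assumptions and the rest is plain division:

```python
# Frame-budget math from the comment above, spelled out.
FRAME_MS = 16              # budget per frame at ~60fps
LAYOUT_RESERVE_MS = 4      # assumed browser layout/paint cost
LOW_END_SLOWDOWN = 20      # assumed low-end phone vs. desktop factor

js_budget_ms = FRAME_MS - LAYOUT_RESERVE_MS          # 12ms left for JS
desktop_equiv_ms = js_budget_ms / LOW_END_SLOWDOWN   # 0.6ms measured on desktop

# Extra per-row cost of Maps vs. Explore (Chrome selection, 200 rows)
per_row_ms = (0.382 - 0.258) / 200

print(f"JS budget: {js_budget_ms}ms; desktop-equivalent: {desktop_equiv_ms}ms")
print(f"extra cost per row: {per_row_ms:.5f}ms")
```

Plugging in the Chrome selection numbers from the table reproduces both the ~0.00062ms per-row figure and the 0.6ms desktop-equivalent budget.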
The tests also have a somewhat simplified DOM structure and number of attributes compared to some use cases. To compensate I bumped up the number of rows to be more than a typical application would use.
> Core being DOM alignment vs DOM+attribute alignment?
Correct. There isn't really a need to force people to buy into both at the same time. I'm not sure yet how the attribute caching can be done in a good way while keeping it completely separate from core. For now, the core part would need to allocate the object holding newAttrs and attrsArr. Perhaps a creation hook can be used and the logic handling the attributes can have its own data object.
> I run 4 or 5 times each, taking my perceived average. I'm not actually averaging them, but taking the number I see most often.
The code used to generate that number is doing some averaging and filtering already. So if you trigger a GC on 1/100 runs, I just drop that run. Each of the tests does a different number of runs, selection doing 400 and creation doing 200. The threshold I use for filtering out runs is if they are outside 1.5 times the 1st or 3rd quartile of runs.
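A minimal sketch of that kind of outlier filtering, using the conventional Tukey fences at 1.5x the interquartile range, which is one plausible reading of the rule described; the helper name and the sample numbers are invented:

```python
def drop_outliers(samples):
    """Drop runs outside the 1.5*IQR Tukey fences, e.g. runs
    inflated by a GC pause or by being scheduled off the CPU."""
    s = sorted(samples)
    q1 = s[len(s) // 4]            # simple quartile picks for a sketch
    q3 = s[(3 * len(s)) // 4]
    spread = 1.5 * (q3 - q1)
    return [x for x in samples if q1 - spread <= x <= q3 + spread]

# The 400 selection runs would go through this; a tiny example:
runs = [0.25, 0.26, 0.27, 0.26, 0.25, 0.27, 4.8]  # 4.8ms = GC spike
print(drop_outliers(runs))  # -> [0.25, 0.26, 0.27, 0.26, 0.25, 0.27]
```

A real harness would compute quartiles by interpolation, but the effect is the same: rare long runs are excluded before averaging.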
Incremental DOM will not be making this change. With a little more work the current core can be pulled out into a separate repo and you would definitely be free to build this as a separate project if you are interested.
Which assembler to choose?
Choosing an assembler for assembly language programming is a critical decision that impacts your ability to work effectively with a specific assembly language.
Target Hardware Platform
- x86: For x86 assembly language programming, popular assemblers include NASM, MASM, and GAS.
- ARM: For ARM assembly language programming, popular assemblers include GNU Arm Assembler (GAS), arm-linux-gnueabihf-as, and arm-none-eabi-gcc.
- MIPS: For MIPS assembly language programming, popular assemblers include GNU Assembler (GAS), mips-linux-gnu-gcc, and mips-elf-gcc.
- Other Architectures: There are also assemblers available for other architectures, such as PowerPC, SPARC, and RISC-V.
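To make the architecture/assembler pairing above concrete, here is a small hypothetical helper that picks a typical assemble command per target. The tool names come from the list above (using the GNU `as` binaries that ship inside the named cross toolchains), and NASM's `-f elf64` output-format flag is one common choice, not a requirement:

```python
def assemble_command(arch, source, obj):
    """Return a typical command line for assembling `source` into `obj`
    for the given target architecture. Illustrative defaults only."""
    commands = {
        "x86":  ["nasm", "-f", "elf64", source, "-o", obj],
        "arm":  ["arm-none-eabi-as", source, "-o", obj],
        "mips": ["mips-linux-gnu-as", source, "-o", obj],
    }
    if arch not in commands:
        raise ValueError(f"no assembler configured for {arch!r}")
    return commands[arch]

print(assemble_command("x86", "boot.asm", "boot.o"))
# -> ['nasm', '-f', 'elf64', 'boot.asm', '-o', 'boot.o']
```

Real build systems pin exact toolchain versions and formats, but the shape of the decision is the same: the target architecture selects the tool, and the tool selects the flags.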
Here are key considerations when selecting an assembler:
Assembly Language Compatibility
Ensure that the assembler is compatible with the assembly language you intend to use. Different assembly languages have their own associated assemblers, each designed for specific architectures and instruction sets. It's essential to choose an assembler that aligns with your project's target architecture.
Platform and System Support
Check if the chosen assembler is compatible with your development platform and target system. Verify that it supports the operating system and hardware you plan to work with. Some assemblers are cross-platform and versatile, while others are highly specific to certain environments.
Features and Capabilities
Assess the features and capabilities of the assembler. Consider whether it provides essential functionalities like macro support, conditional assembly, and symbolic debugging. Additionally, evaluate the level of optimization the assembler offers, as this can impact the efficiency of your compiled code.
Documentation and Community Support
Access the assembler's documentation and investigate the availability of support and resources within the programming community. A strong user community and comprehensive documentation can be invaluable when you encounter issues or need assistance.
License and Cost
Consider the licensing terms and cost associated with the assembler. Some assemblers are open-source and free, while others are commercially licensed. Ensure that your chosen assembler aligns with your project's budget and licensing requirements.
Integration with Development Tools
Determine how well the chosen assembler integrates with your preferred development environment, text editor, or integrated development environment (IDE). A seamless integration can enhance your coding experience.
Performance
Evaluate the assembler's performance, particularly if you are working on projects where optimization and efficiency are critical. Some assemblers offer better optimization options than others, which can impact the speed and efficiency of the compiled code.
Historical and Industry Use
Consider the popularity and historical use of the assembler within the industry. Widely adopted assemblers may have more extensive documentation, community support, and established best practices.
Ease of Use
Examine the user interface and ease of use of the assembler. While assembly language programming is inherently complex, an intuitive assembler can make the coding and debugging process more manageable.
Portability and Cross-Platform Support
If your project needs to run on multiple platforms, look for an assembler that offers cross-platform support, allowing you to develop code that can be assembled and executed on different systems.
In short, also weigh:
- Ease of Use: Some assemblers are easier to learn and use than others.
- Command-Line Interface (CLI) or Graphical User Interface (GUI): Some assemblers have a command-line interface, while others have a graphical user interface.
- Community Support: Some assemblers have a large and active community of users, which can be helpful if you need assistance.
- Documentation: Some assemblers have more comprehensive documentation than others.
The choice of an assembler should align with the specific requirements of your project, the target architecture, and your individual preferences. Careful consideration of these factors ensures that you can work effectively with the assembly language and produce efficient code for your chosen platform.
How to expose table widget to presenter in MVP pattern with gwt
In MVP pattern the widget (the view) exposes its widgets in form like this:
@Override
public HasClickHandlers getAddIssueClickHandlers() {
return addIssueButton;
}
and like:
@Override
public HasText getTaskName() {
return taskName; // taskName is a Label
}
To allow the presenter to modify the view or get the values from a widget. However, it's unclear how to expose a table widget, like FlexTable or CellTable, so that the presenter can modify the table. Any ideas are much appreciated. Thanks.
Not all GWT widgets were designed with these interfaces (i.e. HasClickHandlers, HasText, IsWidget, etc.) in mind.
In recent GWT versions the basic widgets were changed so that they implement these interfaces in order to make the views which use them testable in unit tests. So I am not sure if the FlexTable implements these interfaces but in case of CellTable you can use the HasData interface.
Here you can find the interfaces that are implemented by the CellTable: Javadoc
I personally would expose the CellTable via the HasData interface, which can be used to set and retrieve the selectionModel (for selecting rows in the CellTable).
For modifying or updating the data that is displayed in the CellTable, I would use a ListDataProvider and store it in the Presenter.
@Override
public HasData getCellTableDisplay() {
return cellTable;
}
and in the constructor of the presenter
you can create a ListDataProvider and use the addDataDisplay function to add the CellTable:
final ListDataProvider<String> dataProvider = new ListDataProvider<String>();
dataProvider.addDataDisplay(getView().getCellTableDisplay());
HasData is an unknown class. I am using GWT 2.1.1.
According to the Javadoc HasData interface should be part of 2.1:
http://google-web-toolkit.googlecode.com/svn/javadoc/2.1/com/google/gwt/view/client/HasData.html
Yes, it's kinda weird; I also checked the docs already. I will try some fix.
I'm wondering if you're having any problem using a ListDataProvider inside your presenter. It uses Scheduler, making a plain JUnit test fail. Do you do anything in order to substitute the default Scheduler implementation?
That's a really good question (haven't really used JUnit tests so far). As an alternative you could keep the ListDataProvider in the View and have a method like setData() in your view interface. However, as far as I know, most people who follow the MVP approach keep the ListDataProvider in the Presenter.
Well, the more stuff is put in the view the more useless is the MVP... :P
Well, MVP is only a design pattern and thus there is not one single best implementation for MVP. It really depends on the use case. I agree that the aim should be to keep the View as dumb as possible, but not by any means necessary.
I came up against the same problem and I just moved my dataProvider from my Presenter to my View to enable testing. Just wondering if you found a better solution or not?
I Hate the Official WoW Boards :/
I usually lurk on the Warrior boards, but can't post there since I don't play on the US servers.
The idiocy there really blows my mind.
Every time a player posts a topic asking for advice, certain people come in, completely derail the thread, and basically tell the OP to GTFO. I haven't seen a single thread where these people actually give helpful advice. Anyone who frequents those boards will know what I'm talking about.
Now there are people with crappy gear coming in and saying "fury is fine, l2play," despite the fact that another hybrid class can pull the same DPS numbers as a warrior with half the effort. People who argue and have good points just get flamed until they don't post anymore.
I've always thought the whole point of the forums was to help others get better at the game... yet the WoW forums have turned into effing Facebook.
More reason to stay here I guess.
well... you got it AND NOW GTFO, L2P, NOOB LOL
I know exactly what you mean. It's a sad thing that hardly anyone tries to help or give useful advice in there. But I also guess no one is really looking for any while posting there.
Anyway - we got our Tankspot, don't we?
The problem is the moderators over on those forums don't give a F*** about how people are getting treated. I mean, some of the questions I see over there make me wonder how they even posted the question on the forum if they don't know the answer to it... but everyone starts somewhere and deserves help when they need it. So I agree 100% with you about the Blizz forums.
Blizzard is extremely active in moderating topics but players have to use the report function! We get a decent amount of trolls directing comments at our guild and I've yet to see a post we've reported stay around for more than a couple hours.
Not saying it's perfect or it'll cause people to give you good answers, just saying you can be part of solving one of the major annoyances of those forums.
I stand corrected then and will try to report in the future to help the process
So what should we do when we see somebody posting "lrn2read" as part of their message here? And I'm not talking about a regular user, either.
Mhoram - I have never seen an outright rude or insulting post stand on this site for more than an hour before an administrator either harshly reprimanded the poster, closed the thread, or deleted the comment altogether.
I have. Just look around and you'll find it. I don't want to say any more because I'm certain it will get me banned. But there is someone (whose name we all know) who's been throwing up "Lrn2read" as part of their posts. It's there if you look.
Originally Posted by Tatt
One thing I've learned moderating boards AND being class leader for another MMO...
If you give someone an inch of line to air their dirty laundry, they'll take a mile.
It's not the game they are fighting as much as it is a bunch of social psychology mumbo-jumbo. Humans are naturally competitive at all ages, some more than others. This game can be highly competitive in very unusual ways. Everyone wants to be harder, better, faster, stronger, and will 'work it' to make it as such (sorry for the coy DP reference). Due to what I can only understand is an impulsive nature, they seem to believe that they have the right answers. Sometimes they do; most of the time, they don't. As such, 90% of the tripe on those boards is people speaking from the primitive portion of their brain (not to say they are primitive, only that the thought process might be), so that someone who doesn't feel the same raging-intense natural drive about pixels can only look on and shake their head.
That's why we have the benevolent leaders at Tankspot and EJ and so forth to ensure that at least there's some forebrain thought in the posts. (THANK YOU!)
P.S. I only took psychology as an elective, so this is a disclaimer that I could be (and probably am) completely wrong. However, the man I learned the most about human brain (mal)function from was the late George Carlin, whom I have met only once. Briefly.
WoW has official boards? I always thought those were "please troll me" Craigslist ads!
In the post below we asked which of four features you were missing the most from Helium 11:
We have been working on three of these features for a while and will now start the work on the device/folder synchronization.
We want to catch as many of the use cases as possible for how you used this feature (or rather, how you would like the new feature to work), which is why we are posting this topic.
Please let us know a little bit about what features you feel are critical to the device/folder synchronization, so we know that we are developing the feature in the right way.
The main case for me is loading music to my phone, to a USB stick for car and boat, and making CD's for kick-boxing workouts.
I would typically create a playlist, then synchronize the playlist with a connected folder (which could be on the local PC, on the network, or an attached USB filesystem).
My case is write-only, not really "synch". I don't have much interest in synching tags bidirectionally or other fancy features... just want to get the songs in the playlist onto the external filesystem.
It would be immensely helpful if the synch would trans-code to .mp3 or .wav, since some of my players don't do .flac.
A CD creator would be icing on the cake.
As an extreme stretch feature request (and one I have suggested before), it would be Really Cool to be able to synch a Helium database+filesystem with another. For example, I have HMM running on a home server. I would like to be able to synch the HMM database and files on my laptop to the main system, so I can carry a subset of the collection on laptop, and capture play count, edits, new songs added, etc., back to the main database.
Thanks for your input.
> just want to get the songs in the playlist onto the external filesystem.
Out of curiosity, have you tried to use the File rename feature? You should be able to:
Thank you for that suggestion... It hadn't occurred to me to try that. It might do the job!
I only use "Album"; if that would be fixed, it would be OK for me without any further settings.
Picture sync I don't need, because pictures are in the tags and will be shown on the devices. If a picture is in the folder too, the folder will not be removed if I remove an album from the sync tool.
Only the left part of the dialog would be enough for me.
For the output folder it would be nice if I could delete a path to a folder which I don't use anymore.
We have tried to get some feedback on this feature, as this post shows. We haven't received much information or interest, so we are still reviewing if, when and how this feature will make a comeback.
Oh! I didn't know this was being asked for! I guess I don't come often enough.
Now I could be using Helium 12 as it was NOT intended, in which case, correct me.
I often get my music into digital form and place onto my server. Ideally, I'd like Helium to just pick up on this. However, I don't leave my computer running, so in actual fact, I probably want to initiate it or schedule it.
By "it", what I want is Helium to look for the new files and bring them in. Ideally, I'd love to have Helium put these into a smart playlist that I could run a batch on. The batch operations would involve importing images/tags/etc. and normalizing the volume.
I have over 50k tracks. In Helium 11 and previous, I found the syncing to be slow to the point that I simply added the new files manually. However, that was annoying because the new files are mixed in the folder structure with many other files.
I never want syncing with my devices. The only value that I could see there would be retaining play count or something like that.
I think your first sentence is very true! Not enough customers are coming to this page and finding this thread.
For me it's a real pity that Helium 12 is not as complete a piece of software as Helium 11 was. I still can't uninstall Helium 11 because of missing features and performance problems in Helium 12.
>>By "it", what I want is Helium to look for the new files and bring them in. Ideally, I'd love to have Helium put these into a smart playlist that I could run a batch on. The batch operations would involve importing images/tags/etc. and normalizing the volume.
Since your computer is not always running, wouldn't an Update library function do what you need to identify and add the new tracks?
From there you could create a smart playlist and execute a script on the new files which add volume normalization and other steps (I'm not sure about which exact steps you need to do though).
The smart playlist could use arguments like "added date is today and does not contain gain".
To automate it more, we could add functionality to update one or more folders automatically based on a schedule, so that you can omit the first manual step.
We already had a discussion by e-mail around this topic in 2015. Therefore I'll just copy parts of my mails:
I have a Laptop setup especially for DJing and music-production. For DJing I use Traktor DJ Studio by Native Instruments. I’d like to integrate Traktor’s library with Helium. Currently I picked the tracks I wanted to add to Traktor’s library in Helium. I compiled a playlist in Helium and exported the contents of the playlist to Traktor’s library.
First of all: I have duplicates I do not really want to have.
Secondly: Traktor imports those files and writes BPM-Information and some other stuff to the files. Also sometimes I have to re-tag the files since Helium’s Tag-Information doesn’t show up (but that’s another topic).
So as a consequence I have two separate pools of files that diverge over time.
Do you have an advice how a best practice for that case could be? How could I synchronize those two pools? Perhaps there can be some special support for Traktor in a future release?
My main problem is to get things organized and to reduce redundancy.
A solution I could imagine would be some kind of synchronization tool.
As an example, I have set up all my laptops with synchronization jobs. As soon as they map a drive on my NAS (which they can only do when I'm at home on my local WLAN), all important data gets backed up to the NAS.
If I port this mechanism to helium: Helium could perhaps manage external “Players” like iPod, Smartphones, Laptops and so on.
Every Player has a directory with music that is “virtually” managed in helium as long as the player is offline.
As soon as the player is online (which can be detected by Helium), the virtual directory in Helium gets synchronized (both ways) with the physical directory on the player. I think that could be a helpful feature…
Many of us know the importance of scripting out our users on a regular basis in order to be able to perform DR restores more efficiently. If you are responsible for performing backups and restores or disaster recovery solutions and you do not have a routine to regularly script out your users and permissions, then you should implement one soon.
A common problem when DBAs have to restore production user databases to another server is that none of their users exist on the new server. Depending on the level of disaster, you may not have access to a working master database on the primary server. You would then find yourself having to manually create the users or add AD users on the new server, or having to restore the master database from the old server and script the users out.
You don’t want to be in that type of situation. Microsoft provides us a nice sp_help_revlogin script that will easily script out your users. The issue with this script is that it does not include the users' permissions. Luckily this is not a brand new issue, and my friend Kendal Van Dyke (blog) has published a great article with the scripts on how to get your users' permissions and roles. Check that out here.
A common problem that you may face if you are testing out the user create scripts is that if you try to run the script to create users on a server that does not support the complexity requirements or has a more strict password requirement, the script may fail. You may get an error message similar to “Invalid value given for parameter PASSWORD”.
A good practice would be to implement a process to run both of the scripts above and output the values to a file that you back up each night with your databases. There is plenty of documentation available on the web to help you get the output into text files. I use sqlcmd and bcp to get the output into the format that I need. I have seen others that just create a two-step job and use the "Output file:" option to save the results to a file. I got more complex so that the files I create are ready to execute as part of my DR policy without requiring any modification to the results.
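As a sketch of that kind of nightly job, here is a hypothetical wrapper that builds the sqlcmd invocation; the server name, script name, and output folder are invented, while `-S`, `-i`, and `-o` are the standard sqlcmd switches for server, input script, and output file:

```python
from datetime import date

def nightly_login_script_cmd(server, script, out_dir):
    """Build a sqlcmd invocation that runs the login/permission
    scripting query and writes a dated, ready-to-run .sql file."""
    out_file = f"{out_dir}/logins_{server}_{date.today():%Y%m%d}.sql"
    return ["sqlcmd", "-S", server, "-i", script, "-o", out_file]

print(nightly_login_script_cmd("PRODSQL01", "script_logins.sql", "dr_backups"))
```

Scheduling this (or the equivalent two-step Agent job) each night keeps a current, executable copy of your logins next to your database backups.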
I hope you found this helpful and that you have a solid recovery plan that you rehearse on a regular basis. Something you should try is to get someone completely unfamiliar with SQL Server and have them execute your recovery plan. If they cannot follow it completely and recover your system, update your recovery plan. Who is to say that when disaster strikes, you are available with your institutional knowledge to bring the system up? Who is to say that your priority will be your company if disaster strikes? With the recent events in New York and New Jersey with hurricane Sandy, where would your priority reside? Trying to fail over your company to another datacenter, or finding food and shelter for your wife/husband and kids? If you spend the time and have a foolproof recovery plan that has been executed and tested multiple times, you can rest assured that the support staff in that remote datacenter could execute your recovery plan to bring your company back online. Do yourself the favor: plan for the failure and test the plan.
Now Type URLs in Hindi and many other regional languages
The government of India recently launched the .Bharat domain name in Devanagari script. Today I will discuss this new top-level domain for local languages, covering eight major languages (Hindi, Konkani, Marathi, Maithili, Boro, Dogri, Nepali, and Sindhi-Devanagari); other languages will be added soon.
If you want to make a website in a regional language, it is now possible with this top-level domain, e.g. www.example.bharat, that is, a domain name in Hindi script or another listed language. Any individual, firm, businessman, local e-commerce store or shopkeeper can book such a domain name.
If you don’t know about domain names, then your first question is: what is a domain name?
What is domain name?
A domain name is a unique name that identifies a website on the internet. Computers don't really work with names: they locate websites by numeric IP addresses (for example, 192.0.2.1), which people can't easily remember for long. A domain name maps a memorable name onto such an address. (Similarly, when you press a key on the keyboard, the computer represents it internally as a number.)
Types of domain names: there are mainly two types.
- Top-level domain name: a top-level domain is the ending of a domain, such as .com, .net, .org, .in, etc. For example, in www.example.com, ".com" is the top-level domain. The government of India recently launched www.example.bharat for Indian local-language websites; you will see such websites soon. .in is already available for Indian websites.
- Subdomain: as the name suggests, a subdomain is attached to a top-level domain and is provided by the owner of that domain. For example, www.livelatest.blogspot.com is a subdomain.
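Behind the scenes, such regional-script names travel over DNS in an ASCII ("Punycode") form. A quick sketch with Python's built-in IDNA codec; the Devanagari name below is just an illustrative string:

```python
# A Devanagari domain is stored in DNS as ASCII "xn--..." labels.
name = "उदाहरण.भारत"  # roughly "example.bharat" in Devanagari

ascii_form = name.encode("idna").decode("ascii")
print(ascii_form)  # each label carries the xn-- prefix

# The encoding round-trips back to the original script.
assert ascii_form.encode("ascii").decode("idna") == name
```

So a browser showing a Hindi URL and the DNS infrastructure resolving an `xn--` name are looking at the same domain in two encodings.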
Benefits of the .Bharat domain
- Any individual, firm, or local businessman can easily make a website in a regional language with a matching domain name.
- It covers eight languages (Hindi, Konkani, Marathi, Maithili, Boro, Dogri, Nepali, and Sindhi-Devanagari).
- The .Bharat domain will support your native language script.
- India will soon be among the biggest internet-user countries, so website and blog content in regional languages will help farmers, students, small shopkeepers, etc.
- It could also generate new jobs and help spread business in local areas.
- It will make communication easier between government, business firms and regional people.
- You can share your content in a regional language with local people.
- It will help people get the right price for their goods.
Aim of the government: The government of India wants to connect 60,000 villages with broadband this year, 1 lakh next year, and another 1 lakh in the following year through the National Optical Fiber Network (NOFN).
This project costs around 35,000 crore rupees and aims to provide high-speed broadband internet connections to 2.5 lakh gram panchayats in India by March 2017.
The first one is the page hit ranking (PHR) on the day of the distribution's release.
As expected Ubuntu tops the number of page hits on its release date, followed by PCLinuxOS and OpenSUSE.
The Top 10 are
- Ubuntu 7.10
- PCLinuxOS 2007
- openSUSE 10.3
- Fedora 8
- Debian GNU/Linux 4.0
- Mandriva Linux 2008
- SimplyMEPIS 6.5
- Gentoo Linux 2007
- Slackware Linux 12.0
- Sabayon Linux 3.4
The other statistic relates to the number of people using a distribution's IRC channel.
Here again Ubuntu is way ahead of the others. However, there is a huge deviation from the PHR: we have Gentoo in second place, followed by Debian.
The Top 10 in this list are
- Gentoo Linux
- Debian GNU/Linux
- Arch Linux
- Slackware Linux
Mandriva occupies 12th place, and the current DistroWatch PHR topper PCLinuxOS is at the 18th position.
Key thing to note is that Ubuntu had 1,240 users on IRC where as PCLOS had just 32.
The IRC stats collector Marijn Schouten concluded:
"I think that IRC statistics are more representative of current number of users while PHR is more representative of the number of people that are not actually using a distro, but are merely curious as to what sets it apart from the others. Such interest is more fickle than being an actual user of a distro. Therefore I think IRC rank is more representative of the actual size of the community around a distro, which I think is the relevant measure to rank distros by."
Now this does not sit well with me. IRC is more for technical users; most of my friends using Linux never log on to any IRC channel. If they have an issue, they Google for it or look at the distribution's forums. My claim that mostly technical users use IRC is further supported by the fact that "geeky" distributions (Gentoo, Debian, Arch, Slackware and CentOS) and operating systems (FreeBSD) constitute the majority of the top 10 list. I am not sure how many average users run any of the above as their desktop operating system. I was a great fan of Arch Linux on the desktop, until I found PCLOS and gave in to ease of use; however, I do not consider myself an average user.
Ubuntu is a distribution used equally by geeks and average users. We see a lot of software being developed on top of Ubuntu, Mac4Lin being a very good example. Hence, I assume that those developers contribute to the high number of Ubuntu users on IRC.
If you have a different opinion, please share it.
Blogged with Flock
Analyzing historical bitcoin price data with AWS Glue
AWS Glue was first announced at re:Invent and has since become generally available. Traditional ETL is a key step in data analytics, since data cleansing and reformatting is almost always required when building everything from data marts, warehouses, and data lakes to machine learning pipelines, metrics dashboards, analytical reports, and many other data-driven projects.
Data preparation can be very time-consuming from an engineering and cost perspective. When a new business requirement emerges, provisioning the compute and storage resources to meet it can introduce significant delays.
Suppose your company wants to pursue a new initiative relating to cryptocurrency trading, and your IT team gets asked to build a database with historical price information for bitcoin to support other analysis processes. Rather than requiring a large upfront investment in hosting the database and the tooling for data processing and model building, you can leverage AWS Glue together with query and notebook tools such as Amazon Athena and Apache Zeppelin to enable quick experimentation at low running cost.
Further, you want to operationalize this data analysis platform as quickly as possible, and AWS CloudFormation helps with this automation. Start by getting historical bitcoin price data. You can choose to collect the data directly from public sources, or you can find an existing dataset that already includes historical data.
There are many sources of public data sets for such projects, as described in "18 places to find data sets for data science projects". The dataset from the Bitcoin Historical Data page fits our needs (see Disclaimer 1 below); it includes historical data from several exchanges up to today.
Armed with this data, you now get it into a shape that you can use, query it, and prepare it for further analysis and experiments. You set up everything with CloudFormation early in the project, so later operationalizing the solution in a production environment becomes fast and repeatable. The template helps you:
After creating the stack, which should take about 40 minutes, check the AWS Glue console, where your database and crawler show up in the respective lists immediately. Once the crawler runs in about 5 minutes (per your configured cron expression), the table also shows up.
Schema for the coinbase table, created with the AWS Glue crawler.
Sample query results from the coinbase table, using the Athena Query Editor.
You can also enrich the data with other sources, reformat the fields to make them easier to consume, and run further experiments. For example, maybe you want to correlate the prices with other goods or services, but those sources lack enough structure to derive tables from them and join them with your existing tables.
Using AWS Glue classifiers, you can use grok patterns to add structure to these unstructured data sources. Another transformation that improves the data would be converting the UNIX timestamp field to a readable date field. AWS Glue allows for the creation of jobs that can perform such transformations on your data, and it lets you write PySpark (Python-based Spark) scripts to do them. You can then register those scripts as jobs in AWS Glue and have them run on demand or on a schedule, perhaps to process newly arrived data.
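The timestamp conversion just described can be sketched in plain Python; inside an actual Glue job the same logic would live in a PySpark script. The `Timestamp` column name follows the Kaggle CSV, and the sample row values are invented:

```python
from datetime import datetime, timezone

def add_readable_date(row):
    """Add an ISO-8601 UTC date derived from the UNIX `Timestamp`
    column, as a Glue/PySpark transformation job might."""
    ts = int(row["Timestamp"])
    row["Date"] = datetime.fromtimestamp(ts, tz=timezone.utc).isoformat()
    return row

row = add_readable_date({"Timestamp": "1325317920", "Close": "4.39"})
print(row["Date"])  # -> 2011-12-31T07:52:00+00:00
```

Running this per row (or the vectorized equivalent over a DataFrame) gives analysts a human-readable date column alongside the raw epoch value.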
Further, you can opt machines and conduct experiments using Crypto Zeppelin notebooks, ledgering a century with the Fact software ready to use. For more information about new up your wonderful IDE, see Historical: Visit the CloudFormation echoes page and CloudFormation squalor for more information, as well as the full list of supported resources.
He works with customers and internal development teams to iterate on and improve the experience for CloudFormation users. In his spare time, he enjoys progressive trance music.
Analyzing historical pricing data for bitcoin: Scenario. Your company wants to launch a new product relating to cryptocurrency trading, and your IT team gets tasked with building a database with historical pricing information for bitcoin, to feed further analysis processes.
A preview of the CSV files provided for the coinbase exchange. Download the files as a zip archive from the Bitcoin Historical Data page.
Extract the zip archive locally and take the file for the coinbase exchange. Set up an Amazon S3 bucket and put the file there. Create a new file in your text editor. Copy and paste the sample CloudFormation template into the file (see Figure 2 below).
Note the following from the template: The template provides the minimal resources for setting up your database. The crawler handles the work of reading the data from the CSV file that you downloaded and creating the table in the database. Put your own bucket and key path where you uploaded your copy of the coinbase file that you got from Kaggle in the previous steps.
For this example, the template sets up a schedule for the crawler to run every five minutes on weekdays. For this simple example, it makes sense to put everything in one stack, with the option of easily deleting all resources by deleting the stack.
For a longer-term solution, you may want to manage these resources in separate templates, as you only need to create the bucket and database one time. To look at the data from the other bitcoin exchanges included in the Kaggle dataset, repeat the same code in separate templates and stacks for bitflyer, coincheck, and bitstamp, which are also included in the same dataset.
Hard repair - 1000 CZK: a more complex intervention, reflashing, more than 5 pieces, or a similar case. The price per 1 kWh is recalculated and includes, for example, rent of the premises, cooling, ventilation, air filtration, internet connectivity and data cabling, security, insurance, and so on.
Which mining calculators are used on coinwarz.com? (I checked 2 of these that did not add up to the given mining value, GLD and EMC2.) I do not use auto-switching mining, as you end up with small balances in a lot of coins.
The Nano S requires manual confirmation of transactions: you press the right button to approve them. You can verify the address on the device as you confirm with it. It is also a multi-app device and supports other cryptocurrency wallets as well as other applications such as GPG and SSH.
//
// CacheStorage.swift
// FlatCache
//
// Created by Robin Malhotra on 28/04/19.
// Copyright © 2019 Ryan Nystrom. All rights reserved.
//
import Foundation
import PINCache
public class PINCacheFlatCacheStorage: FlatCacheStorage {
    var pinCaches: [String: PINCache] = [:]
    let rootPath: String?

    init(rootPath: String? = nil) {
        self.rootPath = rootPath
    }

    public func set<T>(value: T) throws where T: Cachable {
        let pinCache: PINCache
        if let cache = pinCaches[value.flatCacheKey.typeName] {
            pinCache = cache
        } else {
            let name = value.flatCacheKey.typeName
            pinCache = rootPath.map { PINCache(name: name, rootPath: $0) } ?? PINCache(name: name)
            pinCaches[value.flatCacheKey.typeName] = pinCache
        }
        // Storing as NSData feels *so wrong*, but I guess it works:
        // `Data` isn't NSCoding-compliant, but NSData is 🤯
        pinCache.setObject(try value.toData() as NSData, forKey: value.id)
    }

    public func get<T>(key: FlatCacheKey) -> T? where T: Cachable {
        if let data = pinCaches[key.typeName]?.object(forKey: key.id) as? Data {
            return try? T.create(from: data)
        } else {
            return nil
        }
    }

    public func clear() throws {
        pinCaches.forEach { $0.value.removeAllObjects() }
    }
}
As Microsoft continues its own foray into the security software business, critics (mainly supporters of the existing cottage industries) have argued that Microsoft will never be able to build antivirus, antispyware, and personal firewall tools that are as good as those that come from the third party providers that are far more focused (as a percentage of the companies' overall efforts) on malware -- companies like Symantec, McAfee, and Zone Labs (a subsidiary of Checkpoint). Meanwhile, other industry observers see Microsoft's entries as being the death knell for third party products. When I last asked long time Zone Labs executive Fred Felman for his assessment (Felman has exited the security business for now and is pursuing other opportunities), the only thing he would say on the record is that he thinks the security business "is beat" right now (as in "out of gas"). That doesn't mean it can't find some successful niches (for example, products that focus on the needs of enterprises). For those waiting to see how the rubber actually meets the road, Suzi Turner -- ZDNet's Spyware Confidential blogger -- has been conducting a series of exhaustive tests to see how well Microsoft's Windows Defender (currently in beta) holds up to other products that are designed to keep our systems spyware free. While her tests are not finished yet, the results could be proving the critics of Microsoft's strategy correct. Writes Suzi in her blog:
Windows Defender detected and removed approximately 65% to 75% of the spyware compared to SpywareDoctor and SpySweeper. Windows Defender left behind quite a few registry keys. It did better with file removal than with registry clean up.
Windows Defender is the name of Microsoft's antispyware product. It will be included for free in Windows Vista, and a free download will be made available to users of Windows XP SP2. The two caveats to Suzi's conclusions so far are that Windows Defender is still in beta and that she's not done with her testing. With a product that's in beta, anything can change. In her first round of tests, Suzi basically checked to see how good Windows Defender was at removing spyware after the fact (in other words, after it was already put onto the system). Windows Defender also includes some realtime protection capabilities designed to catch spyware before it sneaks onto your system. Between WD's removal capabilities and its real-time protection capabilities, it may very well prove to be worth its free price. So stay tuned to Suzi's blog for her findings.
On a related note, Suzi is conducting her tests using the virtual machine technology found in VMware's VMware Workstation. In addition to the many reasons I've proposed that everyone should be using virtual machine technologies like VMware or Microsoft's Virtual PC, testing new software and Web sites is another one. If the software doesn't work or that Web site turns out to be malicious, if you run your tests in a virtual machine, then those tests cannot negatively impact the rest of your system. And speaking of malicious Web sites, Suzi found a new one today -- a Web site that poses as the provider of an antispyware tool called Spy-Shield, but that installs adware on your system. Keep away (and where are the authorities... this is fraudulent!).
#include <iostream>
#include <string>
#include <cstring>
#include <cstdlib> // for malloc, used in cpp_date_new_cstr_format
#include <sstream>
#include "date.h"
#include "cpp_date.hpp"
#include <chrono>
// I am using date.h, a great C++ header-only library based on the standard library of chrono.
// It seems that this library will be part of the C++20 standard library
// (as the calendar extensions to std::chrono).
//
// Roughly speaking time has two aspects, 1. time point and 2. period (time interval) .
//
// 1. For time points the following types can be used.
//
// * (field-based) date::year_month_day
// * (serial-based) date::sys_days
// * (serial-based) date::year, date::month, date::day (The other field information is not held.)
//
// date::year_month_day holds the year, month, and day information separately.
// date::sys_days holds one information, how many days have passed from specific date (UNIX time).
//
// About calculation,
// date::sys_days is used for day level calculation.
// date::year_month_day is used for year or month level calculation.
//
// date::sys_days is appropriate for date calculation, because it's serial based.
// date::year, date::month and date::day can be used for calculation.
// From serial based types, you can obtain values using .count() method.
//
//
// 2. For time intervals
//
// * date::days
// * date::years, date::months
//
// Calculating intervals from date::sys_days, you can get date::days.
// You can also get date::days from date::day calculation.
// date::years from date::year calculation, date::months from date::month calculation.
//
// You cannot get date::days from date::year_month_day directly. Before calculation you need conversions, such as
// a. date::sys_days{ date_ymd_obj } => date::sys_days
// b. date_ymd_obj.year() => date::year
// c. date_ymd_obj.month() => date::month
// d. date_ymd_obj.day() => date::day
//
//
// 3. Constructors of these types
//
// 3.1 For date::year_month_day
//
// date::year{2019}/1/1
// date::year_month_day{ date::year{ 2019 }, date::month{ 1 } , date::day{ 1 } }
//
// 3.2 For date::sys_days
//
// date::sys_days{ date_ymd_obj };
//
//
// Private functions
date::sys_days
obtain_unix_epoch_sys_days()
{
return date::sys_days{ date::year{1970}/1/1 };
}
date::days
convert_ymd_to_unix_date(date::year_month_day ymd)
{
date::sys_days specified = date::sys_days{ ymd } ;
date::sys_days unix_base = obtain_unix_epoch_sys_days();
return (specified - unix_base);
}
date::days
convert_ymdi_to_unix_date(date::year_month_weekday ymdi)
{
date::sys_days specified = date::sys_days{ ymdi } ;
date::sys_days unix_base = obtain_unix_epoch_sys_days();
return (specified - unix_base);
}
date::days
convert_sys_days_to_unix_date( date::sys_days sd )
{
date::sys_days unix_base = obtain_unix_epoch_sys_days();
date::days result = sd - unix_base;
return result;
}
date::sys_days
convert_unix_date_to_sys_days(int unix_date)
{
date::sys_days unix_base = obtain_unix_epoch_sys_days();
date::sys_days sys_day = unix_base + date::days{ unix_date };
return sys_day ;
}
// Public functions
char*
cpp_date_new_cstr_format ( int unix_date, const char* fmt )
{
date::sys_days base_day = obtain_unix_epoch_sys_days();
date::sys_days new_day = base_day + date::days{unix_date};
std::stringstream ss;
ss << date::format( fmt, new_day ) ;
std::string str = ss.str();
const char* const_str = str.c_str();
char* new_str = (char*) malloc( (strlen(const_str) + 1) * sizeof(char) );
strcpy(new_str, const_str);
return new_str;
}
int
cpp_date_ymd( int y , int m, int d )
{
date::year_month_day ymd_obj = date::year{y}/m/d;
date::days unix_date = convert_ymd_to_unix_date(ymd_obj);
return unix_date.count() ;
}
int
cpp_date_ym_weekday_nth( int int_y, unsigned int int_m, unsigned int int_wd , unsigned int int_nth)
{
// For wd, normal range is [0, 6], for Sunday through Saturday.
date::year y = date::year{int_y};
date::month m = date::month{int_m};
date::weekday wd = date::weekday{int_wd};
date::weekday_indexed wdi = wd[int_nth]; /* index n in the range [1, 5]. It represents the first, second, third, fourth, or fifth weekday of some month. */
date::year_month_weekday ymdi = y/m/wdi;
date::days unix_date = convert_ymdi_to_unix_date(ymdi);
return unix_date.count() ;
}
int
cpp_date_add_n_years( int unix_date , int years)
{
date::sys_days sd = convert_unix_date_to_sys_days(unix_date);
date::year_month_day ymd = date::year_month_day{ sd };
date::years y = date::years{years};
date::year_month_day new_ymd = ymd + y;
date::days new_days = convert_ymd_to_unix_date( new_ymd );
return new_days.count() ;
}
int
cpp_date_add_n_months( int unix_date , int months)
{
date::sys_days sd = convert_unix_date_to_sys_days(unix_date);
date::year_month_day ymd = date::year_month_day{ sd };
date::months m = date::months{months};
date::year_month_day new_ymd = ymd + m;
date::days new_days = convert_ymd_to_unix_date( new_ymd );
return new_days.count() ;
}
int
cpp_date_add_n_days( int unix_date , int days )
{
date::sys_days sd = convert_unix_date_to_sys_days(unix_date);
date::sys_days new_sd = sd + date::days{days};
date::days unix_day = convert_sys_days_to_unix_date( new_sd ) ;
return ( unix_day.count() );
}
Designing an algorithm for a given problem is a difficult intellectual exercise, because there is no systematic method for designing an algorithm.
Moreover, there may be more than one algorithm that solves a given problem. Writing an effective algorithm for a new problem, or writing a better algorithm than an already existing one, is an art as well as a science because it requires both creativity and insight.
Identifying Techniques for Designing Algorithms
Although there is no systematic method for designing an algorithm, there are some well-known techniques that have proved to be quite useful in designing algorithms.
The following two techniques are commonly used for designing algorithms:
- Divide and conquer approach
- Greedy approach
Divide and Conquer Approach
The divide and conquer approach is an algorithm design technique that involves breaking down a problem recursively into subproblems until the subproblems become so small and trivial that they can be easily solved.
The solutions to the subproblems are then combined to give a solution to the original problem. Divide and conquer is a powerful approach to solving conceptually difficult problems.
It simply requires you to find a way of breaking the problem into subproblems, solving the trivial cases, and combining the solutions to the subproblems to solve the original problem.
Divide and conquer often provides a natural way to design efficient algorithms. Consider an example where you have to find the minimum value in a list of numbers. The list of numbers is as shown in the following figure.
To find the minimum value, you can divide the list into two halves, as shown in the following figure.
Again, divide each of the two lists into two halves, as shown in the following figure.
Now, there are only two elements in each list. At this stage, compare the two elements in each list to find the minimum of the two. The minimum value from each of the four lists is shown in the following figure.
Again, compare the first two minimum values to determine their minimum. Also, compare the last two minimum values to determine their minimum.
The two minimum values thus obtained are shown in the following figure.
Again, compare the two final minimum values to obtain the overall minimum value, which is 1 in the preceding example.
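This splitting-and-combining procedure can be sketched as a short recursive function. This is an illustrative sketch only; the function name and the sample list are mine, not from the text:

```python
def find_min(nums):
    """Divide and conquer: split the list, solve the halves, combine."""
    if len(nums) == 1:          # trivial case: a single element is its own minimum
        return nums[0]
    mid = len(nums) // 2        # divide the list into two halves
    left_min = find_min(nums[:mid])   # conquer the left half
    right_min = find_min(nums[mid:])  # conquer the right half
    return left_min if left_min < right_min else right_min  # combine

print(find_min([9, 5, 4, 1, 8, 6, 2, 7]))  # → 1
```

Each recursive level halves the list, so the comparisons form the same tournament of pairwise minimums described above.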
Greedy Approach
The greedy approach is an algorithm design technique that selects the best possible option at each step. Algorithms based on the greedy approach are used for solving optimization problems, where you need to maximize profits or minimize costs under a given set of conditions.
Some examples of optimization problems are:
1. Finding the shortest distance from an originating city to a set of destination cities, given the distances between the pairs of cities.
2. Finding the minimum number of currency notes required for an amount, where an arbitrary number of notes for each denomination is available.
3. Selecting items with maximum value from a given set of items, where the total weight of the selected items cannot exceed a given value.
Consider an example where you have to fill a bag of capacity 10 kg by selecting items, (from a set of items) whose weights and values are given in the following table.
A greedy algorithm acts greedy, and therefore, selects the item with the maximum total value at each stage. Therefore, first of all, the item, C, with a total value of $800 and a weight of 4 kg will be selected. Next, the item, E, with a total value of $500 and weight of 5 kg will be selected.
The next item with the highest value is an item, B, with a total value of $450 and a weight of 3 kg. However, if this item is selected, the total weight of the selected items will be 12 kg (4 + 5 + 3), which is more than the capacity of the bag.
Therefore, we discard the item, B, and search for the item with the next higher value. The item with the next higher value is an item, A, which has a total value of $400 and a total weight of 2 kg.
However, this item also cannot be selected because if it is selected, the total
weight of the selected items will be 11 kg (4 + 5 + 2). Now, there is only one item left, that is, item, D, with a total value of $50 and a weight of 1 kg.
This item can be selected as it makes the total weight equal to 10 kg.
The selected items and their total values and weights are listed in the following table.
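The selection process just described can be sketched in a few lines. The values and weights below are taken from the narrative (A: $400/2 kg, B: $450/3 kg, C: $800/4 kg, D: $50/1 kg, E: $500/5 kg); the function name is illustrative:

```python
def greedy_select(items, capacity):
    """Repeatedly pick the highest-value item that still fits in the bag."""
    chosen, total_weight = [], 0
    # Consider items from highest to lowest total value.
    for name, value, weight in sorted(items, key=lambda it: it[1], reverse=True):
        if total_weight + weight <= capacity:
            chosen.append(name)
            total_weight += weight
    return chosen

items = [("A", 400, 2), ("B", 450, 3), ("C", 800, 4), ("D", 50, 1), ("E", 500, 5)]
print(greedy_select(items, 10))  # → ['C', 'E', 'D'], i.e. $1350 in a 10 kg bag
```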
For most of the problems, the greedy algorithms usually fail to find the globally optimal solution. This is because they usually do not operate exhaustively on all data.
They can make commitments to certain choices too early. Hence, it prevents them from finding the best overall solution, later.
This can be seen from the preceding example, where the greedy algorithm selects items with a total value of only $1350. However, if the items were selected in the sequence depicted by the following table, the total value would have been greater, with the weight still being 10 kg.
In the preceding example, you can observe that the greedy approach commits to an item, E, very early. This prevents it from determining the best overall solution, later.
Nevertheless, the greedy approach is useful because it is quick and easy to implement. Moreover, it
often gives a good approximation to the optimal value.
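For a set this small, the claim that a better selection exists can be checked by brute force, enumerating every subset of items. The item data is assumed from the earlier example, and this exhaustive approach is practical only for small item counts:

```python
from itertools import combinations

def best_subset(items, capacity):
    """Exhaustively search all subsets and keep the most valuable one that fits."""
    best_value, best_names = 0, []
    for r in range(1, len(items) + 1):
        for combo in combinations(items, r):
            weight = sum(it[2] for it in combo)
            value = sum(it[1] for it in combo)
            if weight <= capacity and value > best_value:
                best_value, best_names = value, [it[0] for it in combo]
    return best_value, best_names

items = [("A", 400, 2), ("B", 450, 3), ("C", 800, 4), ("D", 50, 1), ("E", 500, 5)]
print(best_subset(items, 10))  # → (1700, ['A', 'B', 'C', 'D'])
```

The exhaustive search finds a $1700 selection within the 10 kg limit, confirming that the greedy $1350 result is not globally optimal.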
What is The Cloud?
Cloud computing, or 'the cloud' is a fancy marketing term that we hear all over the place. There is no specific definition and different companies might mean different things when they claim to provide cloud computing. The term works because it is a promise of having less to worry about. Companies and individuals who use 'the cloud' no longer have to worry about storing data. The data is off in a cloud somewhere and someone else is dealing with it. It is marketing at its finest.
In reality all computer data is stored on a computer somewhere. Cloud computing usually just means storing data somewhere other than a local harddrive. This data can then be accessed over the Internet. Instead of the data residing on the computer of the user, the data is often found on the harddrive of a remote server in a data center somewhere.
That's all there is to it. Cloud is just a nicer way of saying remote. When someone says cloud computing, they could just as easily say 'remote computing'.
This does not necessarily make the data more secure or easier to maintain. All it means is that someone else somewhere else is responsible for it. It might be more secure. Data centers usually have good security. There might be backups of the data at multiple data centers. Keeping data backed up in a remote location is always a good idea. The servers on which the data is stored are usually shared by multiple users, meaning that the costs of expensive servers become more affordable. But the servers are still computers just like any other and need to be maintained, just like any other computer.
Paying for cloud computing is paying for the use of someone else's remote computers and someone else's maintenance of those computers. There are plenty of times where this is a great idea. However, one should not expect data to be 100% secure or 100% dependable when it is stored in the cloud. A great example of cloud computing going terribly wrong was CodeSpaces.com. This was a website similar to github or bitbucket where users were able to host projects and code on a remote server. CodeSpaces claimed to be fully resilient and had a "proven" recovery plan in case of data loss. What happened was someone hacked them. The hacker then attempted to extort money from them. When CodeSpaces refused to pay, the hacker destroyed all their data and all their backups. This shut CodeSpaces down and all their customers lost all their data.
CodeSpaces was a respectable cloud backup and hosting solution which failed miserably. Other cloud computing services provide valuable services to their users. Salesforce is a cloud computing company specializing in customer relationship management (CRM). Cloud services are only as good as the people running them.
Cloud computing is great for many things. It is a good way to backup data remotely. It is fantastic if multiple people need access to a single data source. It's good for companies which do not have their own system administrator or IT department to manage local computing. Just be aware that cloud computing is not magic. It is just remote computing.
This guide contains information on: Write down each word three or more times. Write the translation next to each word. This tells the BASH shell to execute the commands in the script. Someone wants something, and people and things keep getting in the way of them achieving the goal.
A modern Aramaic dialect? This may sound like it could take a lot of time — it does.
It is better to create ten projects in one genre than ten projects in different genres. The movies you loved most featured characters that swept you up, who captivated your emotions, got you involved.
Structure your pitch to make it easy to understand. You lack the expert knowledge of any particular area. Your Script Outline — Plot Point 6: First-Act Break. The first-act break marks the end of your setup.
Because of this, oftentimes, the first-act break involves a change in geographical location. They may consult you, or they may not.
For example, they can be used to: Download the Raspberry Pi programming cheat sheet — a one page PDF guide with instructions on how to create and execute C programs, Python programs, and Shell scripts. In a lot of movie plots, the main character has to go on a journey in order to achieve his goal.
It seems impossible for him to accomplish it. What chance does a first time scriptwriter have? You want to preserve your creative freedom.
Creating The Send Script. Before we create our shell script, we need to determine whether the user we want to send a message to is currently logged on to the system; this can be done using the who command. When it happens, it may be just done with a look, often improvised on the movie set.
Eighty-seven years later, in the middle of the 19th century, Abraham Lincoln drafted the Gettysburg Address in a cursive hand that would not look out of place today. Yes, you heard me correctly.
To learn about a special midpoint trick, read this. Writing a script outline is easy once you know the 8 plot points in every story.
Learn more about them before writing your next script outline. Now learning non-Roman-alphabet languages is as easy as A-B-C!
Read and Write Urdu Script will help you read and write simple Urdu. This book is a step-by-step introduction to the script that will enable you to read Urdu. It allows browsers to determine if they can handle the scripting/style language before making a request for the script or stylesheet (or, in the case of embedded script/style, identify which language is being used).
Film script writing is an art-form, and creating art is never easy. Every time you watch a TV show, a film, or play a computer game, you’re taking in the work of a scriptwriter. I need to write a PS script to run an installation executable/msi against 20 remote domain servers.
Can someone advise? Thanks. Script Writing: Write a Pilot Episode for a TV or Web Series (Project-Centered Course) from Michigan State University.
What you’ll achieve: In this project-centered course, you will design a series bible and write a complete pilot episode for a TV or web series.
programming algorithms and data structures, most of the code does so well, but problems arise when solving large amounts of small problems on multiple platforms (e.g. an ad-hoc system), e.g. testing it on several different devices. A widely used approach to addressing these problems is to treat each CPU as a single processor, while the other CPU (i.e. the GPU) sends a list of the loaded input values and outputs them to the target system. Such a method is referred to as distributed, and the software uses the distribution on one platform. While the requirements that CPU-type processing be performed while sending a list to the GPU have been addressed to some extent using a distributed algorithm, problems still remain. For example, certain GPU processing environments, which require that the GPU send only a minimum of thousands of commands to the target system, can be inhibited by some external intervention. In addition some external parameters (e.g. buffers are not initialized while storing all received data) can be set to negative in order to avoid problems. Contemporary problems are addressed by using the distributed algorithm. In principle, a system needs to synchronize one or more processes with a central distribution mechanism, and to communicate with each other through SCT processing. However, a central distribution mechanism may limit the ability for processing processes with large numbers of processes, which makes it inefficient. This complexity is known as processor fragmentation or “downtime”, and it prevents any communications across multiple systems. Furthermore, even small changes in a system parameter such as “processor frame statistics” (PFs) or “processor scheduling” (PS) are generally thought “quicksort”, especially where the system has to access a lot of peripheral resources, and there is or may be significant fragmentation in the system geometry.
Consequently, to deal with these systems, developers are required to iteratively implement synchronization algorithms and procedures.
Furthermore, more complex mechanisms can cause more practical problems, such as problems with local updates, and a fragmented system. The most current implementation method is to synchronize multiple processes using the central distribution, and to do this many times but fail to inform each other if there are sufficiently many processes at the same time. Application-level processes cannot be set for all processors at all. However they can be set up only for a limited range of processors, e.g. low-priority processors by user instructions, and then used in a distributed manner. Any previous synchronizing technique for local to shared system processes is now still inadequate. By the idea of “stack”, a processor’s stack consists of a set of other processes within the physical CPU. A stack implements the necessary synchronization rules for each processing process (here, a “spin”). The logic is that an entry process is initialized against the main processing stack, and then it’s execute on one of its successors to start a new process. The “spin” thread starts with the entry call of a local event handler. The entry loop’s exit calls a thread that needs to free its static stack and free the other processes. A traditional approach applies to multiple systems in a particular communication model. However, there are still problems with each method and how to address them, and there is a severe problem with synchronization under different operating and system specifications. Integrated circuits provide many functions. 
The next chapter will discuss the performance of more sophisticated structures, such as the phase synchronizationprogramming algorithms and data structures to generate and store machine learning models and patterns for machine learning applications, (for the purpose of the present description, some of the computational and numerical methods that can be considered are the specialization of some previously published methods), and, if such methods can be used, especially when solving some machine function in the object-oriented fashion, one may be referred to as a “typical Gabor pattern maker” for the purpose of this description. In the next section, this discussion is described in more detail. It will be related to the fact that a functional programming language (or graphical user interface) has very promising potential in the design of a machine analysis framework. In this talk, therefore, I shall refer to the ability to use some of these features of a functional programming language as being the basis of such a method. For example, such a solution is illustrated by example below.
Gabor pattern maker, C/C++: – The concept of a graphical pattern maker as a class diagram and a function that carries out an application can be used to pattern a graphical pattern based on the patterns being implemented. For example, an object of a pattern artist is also created as a class diagram and see post be modified according to the pattern artist. This is a known and very effective method to be applied to problem graphics in a computer language. In a pattern maker application, the pattern piece is drawn graphically using a graphic model and an object is also drawn as a relationship. The idea of the pattern maker is that a pattern artist can draw a pattern piece such that the object such that a pattern piece lies on the screen is also present on the screen. An object is also drawn on the screen (indirectly) via a pointer in an object screen (indirectly). This graphical pattern maker is termed an “object object” for purposes of subsequent discussion. – An object-oriented pattern maker can be defined at its own work. More details relating to the concept of pattern maker can be seen in A4, “Computer Pattern Maker, RENUTT” Journal abstracts about a pattern maker framework and its conceptual structure. Several pattern makers are based upon a pattern artist and a pattern maker framework. – A pattern maker definition file can be created in C++ that consists of three parts: a pattern artist, a pattern maker file base, and an object model. Each part of the file file can name or set the layout of the app and is typically written as a comma-separated multi-dimensional vector. Each part of a file or a fileBase of any type is a unique named vector containing all the parts of the file, including its individual names, xY, yX, WY, wY. The fileBase is typically a one-to-one matrix or other data structure. In some cases in these examples, the structure of the object model is referred to as a vector. 
In certain applications this type of fileBase can be represented, and a fileBase may be created as a matrix that can be written according to a similar sequence of common code. In many examples, each fileBase is different from the other work files. – Basic information about a pattern maker application and its core features can be found in our previous book, An Introduction to PatternMaker. The paper starts with a system model that abstracts away the needs of graphical pattern makers and, where necessary, combines the needs of a pattern maker and a pattern maker framework into a single data structure based on common functions such as a pointer, xY. The model description is not yet up to date and the structure of the model is as yet not complete.
In the remainder of this talk, we will use some of the basic pattern makers. C/C++: – The concept of a computer-based pattern maker has been explored over and over again in the industrial field. The International Congress on Computer Science presented the C/C++ pattern maker framework, which had been used in the United States for the last ten years. Most researchers are familiar with the system in that the approach to constructing the patterns was developed during a project in Chicago in which a pattern maker was requested by the construction team. Though the project was initially taken as a toy project, the approach being developed by the modelers took on several new roles. For example, the level design can change. Programming algorithms and data structures are required for a data structure to facilitate efficient calculation, robust representation, and reliable assignment of the control variables that can subsequently be compared to those for which the control variables are irrelevant. Furthermore, the operation and maintenance of these algorithms and data structures are expensive and may also be unable to inform the control values of the controllers, the quantizers, etc. For manual calculation of the control parameters and quantizer outputs, the calculation of individual and aggregate (or group) variables is costly and more complex. For the purpose of determining the overall flow of variables, equations are used for the calculation of the overall controller flows.
As mentioned previously, the simplistic (lazy) introductory forum for #edc3100 didn’t achieve its ill-defined goals. I need to find a new one.
Given I hate ice-breaker activities, I doubt this is going to be very creative. Plus time is against me.
#edc3100 is a 3rd year course for pre-service teachers trying to engage them in the task of using ICTs in their teaching. The students are all required to have created their own blog and engage with other social media.
The primary goal is to encourage students to make connections with others. To find out who might be good to follow.
A secondary goal could be to see some different ICTs in action and be required to actually use them for a purpose. This experience can provide grist for reflection.
This list of 10 icebreakers includes an idea for students creating a trading card of themselves using a tool from Big Huge Labs. It moves beyond the textual and requires the students to engage with a new service. What goes on the card is still a question. Something about them? Something about their experiences/perspectives of ICTs?
This from Curtin University provides a bit more in the way of design principles from the literature. Interestingly, I’m not certain that the suggested activities are always a good fit for the design principles. e.g. how does creating a video bio (or a trading card, as above) “require the learners to read each other’s entries”? Which is one of my problems.
This page has some background on ice-breakers and a few suggestions. One is to require students to find 3 people with whom they have something in common and comment on those posts. This could work in a Moodle discussion forum with activity completion.
Actually much of the rest of week 1 is focused on students applying Toolbelt theory/TEST framework to their own study habits. This suggestion might be a bit of duplication, but it may also be a good lead in…mmm.
Another case of possible duplication. Later in the semester we do a Flickr/image activity based around the weather borrowed from @courosa.
The activity from last year required students to create the introduction on their blog. The post to the discussion forum only included a link to the student’s blog post. This creates the problem of having to click through to the blog post. You can’t see anything interesting in the discussion forum itself. This was probably a factor in the limited use of the forum.
This and the above suggest some principles
- Have the forum post contain something interesting (i.e. actual information about the student).
- Use the activity completion to require looking and commenting on others.
- Rather than limit to just text, have some form of multimedia involved.
- Have some link to their blog linked to the activity (perhaps reflecting on the task of using the specific ICT)
This is leaning back towards the activity we used in 2012 – borrowed from ECMP355 and @courosa again – with the addition of asking the students to find someone they have something in common with and someone they are very different from.
I’d actually done much of this prior to seeing the suggestion from @catspyajamasnz, now I’m pondering tweaking it a bit. Have them add in “One thing that annoys me about learning at USQ” — sounds like a plan.
import time
import wx
import xapian
from threading import Thread
from stop_words import stop_words
class NoteEntryFrame(wx.Frame):
def __init__(self, parent):
wx.Frame.__init__(self, None, title="Note Entry", size=(300, 300))
self.db_path = "db"
self.InitUI()
def InitUI(self):
panel = wx.Panel(self)
# General GUI layout
hbox = wx.BoxSizer(wx.HORIZONTAL)
sizer = wx.FlexGridSizer(3, 2, 9, 25)
subject = wx.StaticText(panel, label='Subject')
note = wx.StaticText(panel, label='Note')
self.subject_text = wx.TextCtrl(panel)
self.note_text = wx.TextCtrl(panel, style=wx.TE_MULTILINE)
sizer.AddMany([(subject), (self.subject_text, 1, wx.EXPAND),
(note), (self.note_text, 1, wx.EXPAND)])
sizer.AddGrowableRow(1, 1)
sizer.AddGrowableCol(1, 1)
hbox.Add(sizer, proportion=1, flag=wx.ALL|wx.EXPAND, border=15)
panel.SetSizer(hbox)
# Accelerator features
        save_id = wx.NewId()
        open_id = wx.NewId()  # renamed so the builtin open() is not shadowed
        self.Bind(wx.EVT_MENU, self.onCtrlS, id=save_id)
        self.Bind(wx.EVT_MENU, self.onCtrlO, id=open_id)
        self.accel = wx.AcceleratorTable(
            [(wx.ACCEL_CTRL, ord('S'), save_id),
             (wx.ACCEL_CTRL, ord('O'), open_id)])
self.SetAcceleratorTable(self.accel)
self.Show()
        self.ueg()  # launch the passive search window
def onCtrlS(self, e):
self.index(self.db_path)
def onCtrlO(self, e):
pass
def index(self, db_path="db"):
subject = self.subject_text.GetValue()
note = self.note_text.GetValue()
now = time.ctime()
db = xapian.WritableDatabase(db_path, xapian.DB_CREATE_OR_OPEN)
indexer = xapian.TermGenerator()
stemmer = xapian.Stem("english")
indexer.set_stemmer(stemmer)
doc = xapian.Document()
doc.set_data(note)
indexer.set_document(doc)
indexer.index_text(subject)
indexer.index_text(note)
indexer.index_text(now)
        db.add_document(doc)
        db.commit()  # flush the new note to disk right away
        self.note_text.Clear()
def ueg(self):
p_search = PassiveSearchThread(self)
class PassiveSearchFrame(wx.Frame):
def __init__(self, parent):
wx.Frame.__init__(self, parent, title="Passive Search", size=(300,300))
self.parent = parent
self.InitUI()
self.timer = wx.Timer(self)
self.Bind(wx.EVT_TIMER, self.compare, self.timer)
self.timer.Start(1500)
def InitUI(self):
panel = wx.Panel(self)
# General GUI layout
hbox = wx.BoxSizer(wx.HORIZONTAL)
sizer = wx.FlexGridSizer(2, 3, 9, 25)
result = wx.StaticText(panel, label='Result')
self.result_text = wx.TextCtrl(panel, style=wx.TE_MULTILINE)
sizer.Add(result, 1, wx.EXPAND)
sizer.Add(self.result_text, 1, wx.EXPAND)
sizer.AddGrowableRow(0, 2)
sizer.AddGrowableCol(1, 2)
hbox.Add(sizer, proportion=1, flag=wx.ALL|wx.EXPAND, border=10)
panel.SetSizer(hbox)
self.init_string = ''
self.Show()
def pre_process(self, query_string):
processed = ''
query_string = query_string.split(' ')
for word in query_string:
if word not in stop_words:
processed += word + ' '
return processed
def compare(self, e):
query_string = self.parent.note_text.GetValue()
if self.init_string != query_string:
self.search(e)
self.init_string = query_string
def search(self, e):
database = xapian.Database(self.parent.db_path)
enquire = xapian.Enquire(database)
query_string = self.parent.note_text.GetValue()
query_string = self.pre_process(query_string)
qp = xapian.QueryParser()
stemmer = xapian.Stem("english")
qp.set_stemmer(stemmer)
qp.set_database(database)
qp.set_stemming_strategy(xapian.QueryParser.STEM_SOME)
query = qp.parse_query(query_string)
enquire.set_query(query)
matches = enquire.get_mset(0, 10)
final = ''
for m in matches:
final = final + m.document.get_data() + "\n"
self.result_text.SetValue(final)
class PassiveSearchThread(Thread):
    # wx widgets must be created on the main thread, so run() is called
    # directly here (synchronously) rather than via Thread.start().
    def __init__(self, parent):
        Thread.__init__(self)
        self.parent = parent
        self.run()
def run(self):
p_search = PassiveSearchFrame(self.parent)
if __name__ == '__main__':
app = wx.App()
NoteEntryFrame(None)
app.MainLoop()
Have you ever struggled to correctly translate an audio recording into text? Transcribing voice into text has never been simpler thanks to advances in artificial intelligence.
Artificial intelligence-based transcription models are algorithms that turn speech into text. They are frequently utilized in many different applications, including speech-to-text programs, captioning, and dictation software.
AI transcription models, however, are not flawless and frequently make mistakes. Accuracy is essential for these models, since errors can lead to misunderstanding and misinterpretation.
That’s why, in this blog, we will explain AI transcription techniques for improving the accuracy of AI transcription models.
Data cleaning is a crucial step in raising transcription model accuracy using AI. This entails purging any unnecessary or noisy data from the dataset that will be used to train the model. The performance of the model might be significantly impacted by irrelevant or noisy data, resulting in reduced accuracy.
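As a simplified, self-contained illustration of such a cleaning pass, the sketch below drops clips that are too short or nearly silent before they reach training. The thresholds, sample rate, and function names are assumptions made for this example, not part of any particular toolkit:

```python
import math

def rms(samples):
    """Root-mean-square amplitude of a waveform given as a list of floats."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def clean_clips(clips, sample_rate=16000, min_seconds=1.0, min_rms=0.01):
    """Keep only clips that are long enough and loud enough to be useful."""
    kept = []
    for clip in clips:
        duration = len(clip) / sample_rate
        if duration >= min_seconds and rms(clip) >= min_rms:
            kept.append(clip)
    return kept

good = [0.1, -0.2, 0.15] * 8000      # ~1.5 s of audible signal: kept
silent = [0.0001, -0.0001] * 16000   # 2 s of near-silence: dropped
short = [0.3, -0.3] * 100            # ~0.0125 s: dropped
cleaned = clean_clips([good, silent, short])
```

In a real pipeline the same idea extends to dropping clips whose transcripts are empty, mismatched, or in the wrong language.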
Rotation, scaling, and cropping are a few of the techniques that may be used to create new data samples from the current data. By doing so, the dataset is enhanced, and the model is strengthened against various speech variances.
Recurrent Neural Networks (RNNs), Convolutional Neural Networks (CNNs), and Transformer models are a few examples of model architectures that may be applied to AI transcription models. The best architecture for the task at hand must be chosen because each of these designs has strengths and limitations of its own.
For instance, RNNs are frequently employed in AI transcription models because they are effective for sequential input, such as voice.
The use of CNNs for transcription models is uncommon since they are more suitable for image recognition applications.
Transformer models, a more modern architecture, have grown in popularity recently and have been demonstrated to be effective on a variety of tasks, including transcription.
Following AI transcription best practices, accuracy can be increased in part by training the model on a large and varied dataset: the more data the model can learn from, the better it tends to perform.
A diversified dataset also aids in the model’s improved adaptation to various speaking voice inflections and accents.
A critical step in enhancing an AI transcription model’s accuracy is fine-tuning the hyperparameters. The parameters known as hyperparameters determine how the model performs and behaves.
Hyperparameters control aspects such as the learning rate, the number of hidden layers, and the dropout rate. Fine-tuning these hyperparameters helps the model fit the data it is processing, and can therefore make it more accurate.
Grid Search, Random Search, and Bayesian Optimization are a few popular methods for fine-tuning hyperparameters. Grid Search tries every combination of hyperparameters from predefined lists, while Random Search samples combinations at random. Bayesian Optimization, on the other hand, employs statistical models to direct the search process, making it more efficient.
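A minimal sketch of Grid Search over such hyperparameters, using a stand-in validation function. In practice this function would train the transcription model and score it on held-out audio; the parameter names, grids, and toy loss here are all hypothetical:

```python
import itertools

def validation_loss(learning_rate, hidden_layers, dropout):
    """Stand-in for "train the model with these settings and measure error".
    This toy surrogate has its optimum at (0.01, 3, 0.2)."""
    return (abs(learning_rate - 0.01) * 100
            + abs(hidden_layers - 3)
            + abs(dropout - 0.2) * 10)

grid = {
    "learning_rate": [0.1, 0.01, 0.001],
    "hidden_layers": [2, 3, 4],
    "dropout": [0.2, 0.5],
}

# Grid Search: exhaustively evaluate every combination and keep the best.
best_params, best_loss = None, float("inf")
for values in itertools.product(*grid.values()):
    params = dict(zip(grid.keys(), values))
    loss = validation_loss(**params)
    if loss < best_loss:
        best_params, best_loss = params, loss
```

Random Search would instead draw a fixed number of random combinations from the same ranges, which often finds good settings with far fewer evaluations.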
Another method for raising an AI transcription model’s accuracy is data augmentation. By applying random changes to the existing data, the training set is artificially enlarged. By learning from additional instances, the model is better able to generalize to fresh, unseen data.
Flipping, cropping, and adding noise to the data are a few frequent data augmentation methods.
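For audio, "adding noise" can be as simple as mixing low-amplitude random noise into the waveform while leaving the transcript label unchanged. A minimal sketch, where the noise level and seed are arbitrary example values:

```python
import random

def add_noise(waveform, noise_level=0.005, seed=None):
    """Return a copy of the waveform with Gaussian noise mixed in.
    The transcript paired with the clip stays the same, so one labeled
    example yields many slightly different training examples."""
    rng = random.Random(seed)
    return [s + rng.gauss(0.0, noise_level) for s in waveform]

clean = [0.0, 0.5, -0.5, 0.25]
noisy = add_noise(clean, seed=42)
```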
Transfer learning is a method for training new models by utilizing previously taught models as a starting point. As a result of the pre-trained model having previously picked up on many of the characteristics and patterns in the data, this can boost the new model’s accuracy.
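The essence of transfer learning is to freeze what was already learned and train only the new part. That can be shown on a deliberately tiny model; real systems would freeze layers of a pretrained speech network instead, and the weights, data, and learning rate below are illustrative toys, not anything from a real model:

```python
# "Pretrained" feature weight w1 is frozen; only the new task head w2 trains.
data = [(1.0, 4.0), (2.0, 8.0), (3.0, 12.0)]  # target function: y = 4x

w1 = 2.0   # frozen pretrained weight (so the ideal head weight is 2.0)
w2 = 0.5   # new trainable head weight, randomly initialized in practice

lr = 0.01
for _ in range(200):
    for x, y in data:
        feature = w1 * x                      # forward through the frozen layer
        pred = w2 * feature
        grad_w2 = 2 * (pred - y) * feature    # d(squared error)/d(w2)
        w2 -= lr * grad_w2                    # update the head only; w1 untouched

final_loss = sum((w2 * w1 * x - y) ** 2 for x, y in data)
```

Because the frozen part already encodes useful structure, only one parameter has to be learned, which is exactly why transfer learning needs less data and training time than starting from scratch.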
Looking to Develop an AI-based Solution for Your Business?
Get in touch with us. We develop AI-based solutions as per your business requirements.
The lack of high-quality training data is one of the main obstacles to enhancing the accuracy of AI transcription models. Additionally, it may be challenging to reach high levels of accuracy due to the wide variety of accents, background noise, and speech patterns.
Because multiple audio codecs may capture different features of the audio, using those can help AI transcription models become more accurate. For instance, incorporating both stereo and mono audio can assist in teaching the model how to manage various audio inputs.
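One simple way to normalize mixed inputs is to mix stereo down to mono by averaging the two channels, a common convention, sketched here with plain lists standing in for audio buffers:

```python
def stereo_to_mono(left, right):
    """Average the two channels of a stereo clip into one mono track."""
    return [(l + r) / 2.0 for l, r in zip(left, right)]

left = [0.2, 0.4, -0.6]
right = [0.0, 0.4, -0.2]
mono = stereo_to_mono(left, right)
```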
The accuracy of AI transcription models can be significantly impacted by the length of audio recordings. Less variety in accents, background noise, and speech patterns may be present in shorter audio files, which makes it more difficult for the model to generalize to new data.
Longer audio recordings may also be more challenging to analyze, which might slow down training and reduce accuracy.
The effectiveness of AI transcription models can be greatly affected by the framework that is used. Different frameworks have various advantages and disadvantages, and some are more appropriate for particular sorts of issues than others.
For instance, certain frameworks could be better adapted to handle background noise than others, while others might be tuned for voice recognition.
In conclusion, enhancing an AI transcription model’s accuracy needs a variety of methods, including data cleaning, data augmentation, fine-tuning hyperparameters, and utilizing transfer learning. The accuracy of AI transcription models may be greatly increased with the proper strategy, increasing their reliability and use for a variety of applications.
At spaceo.ai, our team of professionals is committed to providing our clients with high-quality solutions and has considerable expertise in creating and optimizing AI models. We would be delighted to hear from you if you’re wanting to improve the accuracy of your AI transcription model or if you require assistance with any other software development requirements.
Core solutions encompass a wide array of tools and services from Microsoft Azure. In this learning path, you’ll be introduced to many of these tools and services and will be asked to help choose the best one for a given business scenario.
By the end of this learning path, you’ll be able to choose the best Azure tool or service for a given business scenario.
Q1. A company wants to build a new voting kiosk for sales to governments around the world. Which IoT technologies should the company choose to ensure the highest degree of security?
Q2. A company wants to quickly manage its individual IoT devices by using a web-based user interface. Which IoT technology should it choose?
Q3. You want to send messages from the IoT device to the cloud and vice versa. Which IoT technology can send and receive messages?
Q1. You need to predict future behavior based on previous actions. Which product option should you select as a candidate?
Q2. You need to create a human-computer interface that uses natural language to answer customer questions. Which product option should you select as a candidate?
Q3. You need to identify the content of product images to automatically create alt tags for images formatted properly. Which product option is the best candidate?
Q1. You need to process messages from a queue, parse them by using some existing imperative logic written in Java, and then send them to a third-party API. Which serverless option should you choose?
Q2. You want to orchestrate a workflow by using APIs from several well-known services. Which is the best option for this scenario?
Q3. Your team has limited experience with writing custom code, but it sees tremendous value in automating several important business processes. Which of the following options is your team’s best option?
Q1. Which of the following choices would not be used to automate a CI/CD process?
Q2. Which service could help you manage the VMs that your developers and testers need to ensure that your new app works across various operating systems?
Q3. Which service lacks features to assign individual developers tasks to work on?
Q1. As an administrator, you need to retrieve the IP address from a particular VM by using Bash. Which of the following tools should you use?
Q2. You’re a developer who needs to set up your first VM to host a process that runs nightly. Which of the following tools is your best choice?
Q3. What is the best infrastructure-as-code option for quickly and reliably setting up your entire cloud infrastructure declaratively?
Q1. You want to be alerted when new recommendations to improve your cloud environment are available. Which service will do this?
Q2. Which service provides official outage root cause analyses (RCAs) for Azure incidents?
Q3. Which service is a platform that powers Application Insights, monitoring for VMs, containers, and Kubernetes?
I hope this Microsoft Azure Fundamentals: Describe Core Solutions and Management Tools on Azure quiz answers guide was useful and helped you learn something new. If it helped you, don’t forget to bookmark our site for more Coding Solutions.
This course is intended for audiences of all experience levels who are interested in learning about Data Science in a business context; there are no prerequisites.
You are getting BCFIPS and other Java security-related exceptions when starting an instance using mvn after a Jive instance upgrade. Here are some possible symptoms
- Possible errors seen in the logs
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'groupLinksV2': Cannot resolve reference to bean 'containerLinksV2' while setting bean property 'sourceList' with key ; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'containerLinksV2': Cannot resolve reference to bean 'altLinkConverterV2' while setting bean property 'sourceList' with key ;....
[com/jivesoftware/community/aaa/sso/saml/spring-samlContext.xml]: Invocation of init method failed; nested exception is java.security.KeyStoreException: JCEKS not found
- Attempt to start the instance with mvn
c:\java\maven\3.2.1\bin\mvn -Djavax.net.ssl.trustStoreType=bcfks -Djavax.net.ssl.trustStoreProvider=BCFIPS -Djavax.net.ssl.keyStoreProvider=BCFIPS -Djavax.net.ssl.trustStorePassword=changeit -Djavax.net.ssl.trustStore=C:/dev/coretto_jdk1.8.0_242/jre/lib/security/cacerts -Djavax.net.ssl.keyStore=C:/dev/coretto_jdk1.8.0_242/jre/lib/security/cacerts -Djavax.net.ssl.keyStorePassword=changeit -Djive.devMode=true -DskipTests=true -P dev,int,jive.archiva deploy -s ..\settings.xml
- Ensure the correct version of Amazon Corretto JDK is installed.
- After the installation, check that the "JAVA_HOME" environment variable is correctly set. On Windows, the JAVA_HOME environment variable is normally set by the installer; if not, you will need to set it explicitly. Assuming the JDK was installed at "C:\Program Files\Amazon Corretto\jdk1.8.0_242", that will be the value of your "JAVA_HOME" environment variable. Please note that this path should be the root folder of the JDK, not the "bin" or "lib" folder.
- Ensure that the Jive Local Environment Setup is correct.
- Ensure that the pom.xml file is referencing the correct Jive version.
- E.g. if you upgraded to Jive 9.5.0 from Jive 9.0.5, the pom.xml previously had an entry “<version>9.0.5-0-SNAPSHOT</version>”.
- This should be updated to <version>9.5.0-0-SNAPSHOT</version>
See ticket 2541208, where this issue was discussed in detail.
Dataset for the transcriptome analysis of hippocampal subfields identifies gene expression profiles associated with long-term active place avoidance memory
Cite this dataset
Harris, Rayna et al. (2020). Dataset for the transcriptome analysis of hippocampal subfields identifies gene expression profiles associated with long-term active place avoidance memory [Dataset]. Dryad. https://doi.org/10.25338/B8QS4F
The hippocampus plays a critical role in storing and retrieving spatial information. By targeting the dorsal hippocampus and manipulating specific “candidate” molecules using pharmacological and genetic manipulations, we have previously discovered that long-term active place avoidance memory requires transient activation of particular molecules in dorsal hippocampus. These molecules include amongst others, the persistent kinases Ca-calmodulin kinase II (CaMKII) and the atypical protein kinase C isoform PKC iota/lambda for acquisition of the conditioned behavior, whereas persistent activation of the other atypical PKC, protein kinase M zeta (PKM zeta) is necessary for maintaining the memory for at least a month. It nonetheless remains unclear what other molecules and their interactions maintain active place avoidance long-term memory, and the candidate molecule approach is both impractical and inadequate to identify new candidates since there are so many to survey. Here we use a complementary approach to identify candidates by transcriptional profiling of hippocampus subregions after formation of the long-term active place avoidance memory. Interestingly, 24-h after conditioning and soon after expressing memory retention, immediate early genes were upregulated in the dentate gyrus but not Ammon’s horn of the memory expressing group. In addition to determining what genes are differentially regulated during memory maintenance, we performed an integrative, unbiased survey of the genes with expression levels that covary with behavioral measures of active place avoidance memory persistence. 
Gene Ontology analysis of the most differentially expressed genes shows that active place avoidance memory is associated with activation of transcription and synaptic differentiation in dentate gyrus but not CA3 or CA1, whereas hypothesis-driven candidate molecule analyses identified insignificant changes in the expression of many LTP-associated molecules in the various hippocampal subfields, nor did they covary with active place avoidance memory expression, ruling out strong transcriptional regulation but not translational regulation, which was not investigated. These findings and the data set establish an unbiased resource to screen for molecules and evaluate hypotheses for the molecular components of a hippocampus-dependent, long-term active place avoidance memory.
To examine spatial learning and memory, we used a well-established active place avoidance paradigm. Littermates were randomly assigned to one of our treatment groups (standard-trained, n=8; standard-yoked, n=8; conflict-trained, n=9; conflict-yoked, n=9). All mice were exposed to nine 10-min trials in the active place avoidance arena. Mice were placed on an elevated circular 40-cm diameter arena made of parallel bars that rotated at 1 rpm. The arena wall was transparent and thus contained the mouse on the arena while allowing it to observe the environment. The location of the mouse in the arena was determined from an overhead digital video camera interfaced to a PC-controlled tracking system (Tracker, Bio-Signal Group Inc., Acton, MA). Trained mice in the active place avoidance task are conditioned to avoid the location of mild shocks (constant current 0.2 mA, 500 ms, 60 Hz) that can be localized by visual cues in the environment. Yoked-control mice are delivered the identical sequence of shocks that was received by a particular trained mouse, the difference being that for the yoked mice, the shocks cannot be avoided or localized to a portion of the environment. Mice were allowed to become familiar with walking on the rotating arena during a pretraining trial with no shock. Then each mouse received three training trials separated by a 2-h inter-trial interval. The mice were returned to their home cage overnight. The next day, each mouse received a “Retest trial” with the shock in the same location as before. For the next three training trials, the shock zone remained in the same place for standard-trained animals but was relocated 180° for the conflict-trained mice. The next day, all mice received a memory “Retention trial” with the shock off to evaluate the strength of the conditioned avoidance.
A day after the last training session, and 30 minutes after the retention session without shock, mice were anesthetized with 2% (vol/vol) isoflurane for 2 minutes and decapitated. Transverse 300 μm brain slices were cut using a vibratome (model VT1000 S, Leica Biosystems, Buffalo Grove, IL) and incubated at 36°C for 30 min and then at room temperature for 60-90 min in oxygenated artificial cerebrospinal fluid (aCSF in mM: 125 NaCl, 2.5 KCl, 1 MgSO4, 2 CaCl2, 25 NaHCO3, 1.25 NaH2PO4, and 25 Glucose). Slices were cut in half so that one hemisphere could be used for RNA-seq and one for ex vivo slide physiology.
For RNA-sequencing, the DG, CA3, and CA1 subfields were micro-dissected using a 0.25 mm punch (Electron Microscopy Systems) and a Zeiss dissecting scope. RNA was isolated using the Maxwell 16 LEV RNA Isolation Kit (Promega). RNA libraries were prepared by the Genomic Sequencing and Analysis Facility at the University of Texas at Austin and sequenced on the Illumina HiSeq platform. Reads were processed on the Stampede cluster at the Texas Advanced Computing Center. Quality of raw and filtered reads was checked using the program FastQC (Wingett and Andrews, 2018) and visualized using MultiQC (Ewels et al., 2016). We obtained 6.9 million ± 6.3 million reads per sample. Next, we used Kallisto to pseudo-align raw reads to a mouse reference transcriptome (Gencode version 7), which yielded 2.6 million ± 2.1 million reads per sample. Mapping efficiency was about 42%. Transcript counts from Kallisto were imported into R and aggregated to yield gene counts using the gene identifiers from the Gencode transcriptome. DESeq2 was used to normalize and quantify gene expression with a false discovery rate corrected (FDR) p-value < 0.1. ShinyGO was used to identify Gene Ontology terms associated with genes that are correlated with PC1. All genes associated with particular GO terms were identified using the Gene Ontology Browser. We compared the GO terms for candidate genes, differentially expressed genes, and a list of genes identified as important for long-term potentiation. We relied on the R packages ggplot2, cowplot, and corrr for data visualization.
Spatial behavior was evaluated by automatically computing (TrackAnalysis software (Bio-Signal Group Corp., Acton, MA) 26 measures that characterize a mouse’s use of space during the trial. All statistical analyses were performed using R version 3.6.0 (2019-04-26) -- "Planting of a Tree”, relying heavily on the software from the tidyverse library. Principal component analysis (PCA) was conducted to reduce the dimensionality of the data. One- and two-way ANOVAs were used to identify group differences in behavioral measures across one or multiple trials, respectively. For statistical analysis of gene expression, we used DESeq2 to normalize and quantify gene counts with a false discovery corrected (FDR) p-value < 0.1. DESeq2 models evaluated gene expression differences either between the four behavioral treatment groups (standard-trained, standard-yoked, conflict-trained, and conflict-yoked) or between the combined memory-trained and combined yoked-control groups.
Raw sequence data and differential gene expression data are available in NCBI's Gene Expression Omnibus Database (accession: GSE99765).
National Institute of Neurological Disorders and Stroke, Award: NS091830
National Science Foundation, Award: IOS-1501704
National Institute of Mental Health, Award: 5R25MH059472-18
Help to understand some phrases in the sentence
I don't quite understand some phrases in this sentence.
President Donald Trump has responded to provocations from the reclusive nation with bombastic rhetoric, at one point threatening North Korea with "fire and fury."
At one point threatening North Korea with "fire and fury" -- Is it an adverbial phrase as a whole? If yes, what's it modifying?
Does 'at one point' here mean 'at one hand'?
'fire and fury': I got an explanation for this phrase from a website, as below, but I still don't quite get it. Can someone give a briefer or easier way to understand it?
“Fire and Fury” is the 2017 administration’s rendition of “Shock and Awe”. It is a pithy way of conveying that you intend to use considerable overwhelming military force to accomplish a given objective. The advantage of a phrase like this is that it is not very definitive. So, there is plenty of wiggle room as to what constitutes ‘fire and fury’, in the event of hostilities. It sounds sooooo much more elegant than “we’ll shoot back” or “we’re gonna bomb you”. Plus, it’s fodder for CNN commentators. They can scratch their chins and opine as to just exactly what constitutes the Trump administrations F&F policy. It makes for great entertainment all around!
By the way, what's the meaning of "scratch their chins" in the above quote?
Answer part 1) Using a phrase like, "fire and fury" has a long history in the English language. You'll see pairs like this used over and over, especially in legal language, where they're called "legal doublets."
The reason this happens is because back in the year 1066 the Normans invaded England and took the place over. This created a situation where most of the ruling class spoke French or Latin, and most of the peasants spoke Anglo-Saxon. In order to be understood clearly, they'd often pair a French word with an Anglo-Saxon word, and eventually it became a common way of speaking to combine synonyms to emphasize a point.
Some common pairs are:
plain and simple
peace and quiet
neat and tidy
over and above
Answer part 2) "At one point" in this context is referencing the fact that Trump has made more than one statement, and in one of them he made the fire and fury comment.
Using the word, "Hand" generally implies two sided statements that contradict themselves. You have two hands... a right and a left. You have two opinions - you like someone, and you hate someone.
In this case Trump is not going back and forth between loving and hating the North Koreans, so using the word "hand" wouldn't fit.
Answer part 3) Again, "fire and fury" are word pairs that imply a certain type of military action - here they're basically implying that the US will destroy everything without holding themselves back.
Answer part 4) Scratch their chins. If you're scratching your chin, it usually implies that you're thinking about and considering something. Trump said, "fire and fury." Did he really mean he'd nuke the whole place, or does he mean something else? What did Trump really mean when he said, "fire and fury?" Nobody really knows except for Trump and so the commentators can only guess what he really means.
There is no such thing as the perfect programming language, no matter which one it is. Given that all programming teams are made by and of crazy people, this should not come as a surprise. Not only does your programming language suck, the framework you use sucks as well. The art of putting together thousands and thousands of bits in an effort to tell computers what to do is one huge pile of suck. Don’t think your programming language or framework sucks? Let’s investigate.
Things That Suck About AngularJS
- Documentation Sucks. On a positive note, this had led to many beginner tutorials on the internet. Don’t get too excited though, those suck too.
- DOM integration and directives Suck.
- Business Logic Sucks.
- Filter Caching Sucks.
- 3rd-Party integration Sucks.
- It’s Hard As Hell.
Things That Suck About NodeJS
- Callback Hell.
- It actually does block.
- It doesn’t Scale.
- You will eventually abandon it.
So umm, yeah – Good luck with that Toy you call NodeJS.
Things That Suck About jQuery
- Overloaded Methods.
- You don’t even need it.
- It’s slow.
- You’ll do things you shouldn’t.
- this has a different meaning.
Things That Suck About Python
- Global Interpreter Lock.
- Indentation Matters.
- People just don’t like it.
- copy.copy ?
- Ducks are for hunting.
- helps you make bugs.
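The "copy.copy ?" gripe above refers to Python's shallow copies silently sharing nested objects, a classic source of bugs. A minimal demonstration:

```python
import copy

original = {"name": "config", "tags": ["speech", "audio"]}

shallow = copy.copy(original)    # new dict, but the inner list is shared
deep = copy.deepcopy(original)   # fully independent clone

original["tags"].append("noisy")

# The shallow copy sees the mutation through the shared list;
# the deep copy does not.
```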
Things That Suck About Django
Much like the other examples in this article, once we have identified a programming language that sucks, it is very easy to apply the same principles to their most loved(hated) frameworks written in the language. This brings us to Django, and why it sucks.
- Django is monolithic.
- It forces tightly coupled apps.
- Heavy reliance on Django ORM.
- Steep learning curve.
- All components get deployed together.
- Template Engine Sucks
Good luck with Django, where you can visit any time you like, but you can never leave.
Things That Suck About Flask
Really not much to say about Flask, except that it is not much more than an April Fool’s Joke.
Things That Suck About Ansible
Things That Suck About Java
Now for one of my favorite programming languages to hate on, Java! It Sucks! In fact, it sucks so bad, that there is a list 29 bullet points long that cover every nasty detail of why it sucks so bad. Things such as:
- Garbage Collection will freeze Java.
- Exceptions and Value Checking are broken.
- Java does not handle static class members correctly.
- The syntax is so verbose you will have carpal tunnel syndrome after your first project.
- The dreaded CamelCase.
- Iterators Suck.
- Methods and Classes of the same name.
- Best Java Blog Ever.
- Write Once, Run Away.
- It’s a big fat bloated mess.
Things That Suck About Spring MVC
Things That Suck About Android
Things That Suck About Ruby
Sometimes, someone just lobs you a softball – and you swing and knock it out of the park. In this case, Ruby is up for sucking, and it excels at the suck like none other.
- It suffers from Perlisms.
- WTF is a downcase?
- Even it’s creator thinks it sucks.
- Automatic Returns. Good luck with that.
Things That Suck About Ruby on Rails
- Rails Lies.
- Rails is full of junk.
- It turns people into idiots.
- Noobs can’t even get it installed.
- Rails is a Ghetto.
- Rails Doesn’t Scale.
- Rails is yesterday’s software.
Things That Suck About PHP
PHP takes sucking to a whole new level. There are entire armies of vagrants throwing PHP developers into lit fireplaces for the sins of worshiping at the altar of PHP. PHP sucks so bad, it’s hard to know where to start. We will try our best, however.
- It sucks so bad, it’s not even funny.
- Virtually every feature of PHP is broken in some way.
- ‘0’, 0, and 0.0 are false, but ‘0.0’ is true.
- Too much random stuff!
- Null Bytes and Serialization.
- Unable to fix bugs since old sites depend on those bugs to continue bugging.
- OOP in PHP Sucks.
- The PHP Community Sucks.
Things That Suck About Laravel
Things That Suck About Symfony
Things That Suck About Codeigniter
- Autoloading Sucks.
- htaccess sucks.
- The people who wrote it couldn’t wait to get rid of it.
- Nobody uses it anymore. Good Bye Community.
Things That Suck About C
- C has no strings.
- 99% of all C programs will have memory leaks.
- C is not memory safe.
- Unrestricted pointers are dangerous.
- C Programs are akin to criminal negligence.
Things That Suck About Linux
You may think Linux has, or is on the verge of, saving the world. And you would be wrong about that, because Linux Sucks. After all, it is written in C, a sucky programming language. So from sucky programming languages come sucky operating systems.
- Linux is more complicated than you can handle.
- Even Microsoft Windows is better than Linux.
- Linux Networking Sucks.
- Software is better when free.
- If Linux is the future, the future is awful.
The Sucky Conclusion
Did you happen to notice anything about the languages and frameworks we discussed in this post, other than the fact that they all suck? The most sucky languages and frameworks appear at the top of the article and move downward from there. Now, the funny thing is that the languages and frameworks also follow a similar pattern of popularity according to Google Trends, GitHub star counts, TIOBE rankings, and other factors. This brings us to a most ironic conclusion: the most popular languages suck the most. You could also say it this way: the programming languages that suck the most are the most popular. You see how that works? So if you see your tool or framework in this list, do not be dismayed. Consider it the highest praise and compliment. If your language or framework did not show up on this list, you have some work to do. If it’s not here, people don’t care about it. People don’t start to bitch and moan until there is some level of widespread adoption. In fact, the more a programming language or tool is used, the more people will complain. It’s just the nature of things in this industry.
VB and C# Coevolution
As we are approaching the release of VS 2010, I have seen a number of questions from customers about our language strategy for VB and C#. We made a shift in strategy at the beginning of this release cycle and have been talking about it publicly for some time. A lot of this discussion has been in forums that have a lot of early adopters, for example at PDC and other conferences, so I suppose it’s natural that we continue to see these questions. I thought it would be valuable to share my thoughts on this in blog form.
For starters, I should explain who I am and what my role is in this area. I’m the Product Unit Manager for Visual Studio Languages. In this role, I manage a portfolio of .NET languages (VB, C#, F#, IronPython, IronRuby) and the Dynamic Languages Runtime. I have a long history with VB (I interned on VB 1.0 before working on OLE Automation, VBA, VB4, and VBScript) and C# (I am one of the original C# language designers).
VB and C# both enjoy broad adoption. The most reliable numbers we have on the two languages show roughly equal adoption for the two. Together, these two languages represent the vast majority of .NET usage. As such, they are critical to our long-term developer strategy.
Our strategy for VB and C#, beginning with VS 2010, is a coevolution strategy. This is not the typical strategy for a portfolio of items. The more traditional portfolio strategy is to differentiate them, as P&G does for laundry detergents. For several versions, we tried to do this. We had an explicit strategy of differentiating VB and C#. We wanted VB to appeal to VB6 developers, who tended to build business-oriented, data-focused solutions. We wanted C# to appeal to “curly brace developers”, including C++ and Java developers, where there were more enterprise-class and ISV solutions. In practice, we found that it was quite hard to differentiate the two, due to the presence of several powerful unifying forces, which I describe below.
A modern developer experience for a language is formed through a combination of elements:
· A “horizontal” runtime like .NET that provides runtime services and libraries that are broadly applicable.
· A “horizontal” IDE platform or shell
· A set of “vertical” platforms and tools for building various kinds of software – Windows, Web, Device, Database, and on and on
· The language and associated language-specific tooling, e.g., IntelliSense and refactoring
Three of the four items above are common building blocks for VB and C#. This is a significant departure from pre-.NET products, where all of these were differentiated. “Classic” VB was differentiated across all four bullet points – it had its own runtime, its own shell, its own designers, and its own language. For VB .NET and VC#, the shared elements (the first three bullets) deliver a huge part of the overall developer experience. These common IDE and platform building blocks are the first “powerful unifying force” that I am talking about. For there to be language-based differentiation, it has to come from the fourth bullet point – the language and its associated tooling.
The second “powerful unifying force” is the nature of the languages themselves – they are both object-oriented languages and both have strong static type systems. So at a high-level, they are in the same family of languages. In contrast, some other languages in our .NET portfolio share a lot of the same building blocks but are farther afield from a language perspective – F# (functional), Python (dynamic) and Ruby (dynamic). As a practical matter, I rarely get asked why we have both C# and F#. :-)
There is a third “powerful unifying force”. As we began to evolve the languages after .NET 1.0, we found that the most significant opportunities were on the border between the languages and API’s. Our languages and runtime provide a set of building blocks, and API developers compose these to produce API’s. One way I think about this is that there are two kinds of language features: “on the outside” language features that grow or improve the set of API building blocks, and “on the inside” language features whose scope of impact is limited to the language itself. “On the outside” features include generics and the LINQ language features. “On the inside” features include changes to statements, expression and control flow. If we invented a new looping construct, that would be an “on the inside” feature – it would not impact API developers except perhaps as an implementation detail. We have done several releases since .NET 1.0, and in practice we have found that the best opportunities for language evolution and innovation have been in “on the outside” language features rather than “on the inside” ones. The most significant advances have been in “on the outside” features.
API designers are of course interested in having their API’s used by the broadest set of languages. To ensure this, we designed a Common Language Specification as part of .NET 1.0, and have evolved it in subsequent versions as we have added significant new building blocks. This approach helps us ensure that .NET API’s from Microsoft and others are accessible to a wide variety of languages. In practice, this also ensures that language evolution “on the outside” of languages cannot be used to differentiate languages. Thus, the third “powerful unifying force” is the Common Language Specification and trends in language innovation toward “on the outside” innovation rather than “on the inside” innovation.
Finally, we found that differentiation of language-specific tooling typically resulted in mixed feedback from customers. When we did a feature for one language but not the other, we received positive feedback from the language audience that got the feature, and negative feedback from the other. We found that the VB and C# customer bases were somewhat different, but not different enough so that they would want language tooling that was different. There might be differences in the priority of a particular feature (for example, edit-and-continue debugging for VB vs. refactoring for C#), but that in the long run, both customer bases would want the union of the features. Thus, the fourth “powerful unifying force” is customer feedback on language tooling.
For these reasons, we have adopted an explicit strategy of coevolution for C# and VB. By doing so, we recognize how strong these unifying forces are. We believe that we will accomplish more, and deliver more value for customers, by understanding and embracing these unifying forces rather than by fighting against them.
Our coevolution strategy has several major elements:
· Language innovation. Headliner language features (e.g., generics, LINQ) will be done for both languages, and done in a style that matches the host language. The languages will always be different – we will not try to make them “the same”. Instead, we will evolve them in the same direction, ensuring that both VB and C# developers can benefit from advances in programming models and API’s.
· Language tooling. Over time, we are evolving the language tooling so that customers of both C# and VB benefit from the same language tooling such as IntelliSense and refactoring features. We began this work in VS 2010. We made a lot of progress in this release, but are not 100% there.
· Samples and content. In general, we pursue parity for Microsoft samples and content. For better or worse, our Microsoft platform efforts are quite broadly distributed, and so there are sometimes shortcomings in this area. My team helps advocate for parity by working across Microsoft teams. We engage the VB community to help prioritize this work, so that we are spending our time and money most effectively.
I hope this is helpful context and background for our VB/C# coevolution strategy. Whether you are using VB, C#, or one of the other languages in our broad .NET portfolio of languages, we want you to understand what we’re doing (and why!) so that you can continue to use your language of choice with confidence. I’d be happy to answer any follow-up questions. Feel free to post questions or comments! | OPCFW_CODE |
How do you expand a tree while traversing it?
I'm making a program to solve a 3-puzzle (with 3 blocks and a blank), which is a smaller version of an 8-puzzle. I'm attempting to construct a tree by shifting the blocks adjacent to the blank into the blank space; thus every state can give 2 states (branching factor = 2). I'm using breadth-first search to solve the tree, but to traverse the tree, it first has to be made (expanded). Since I just can't continue expanding the tree forever, I have to have some means of expanding the tree to a certain depth and then traversing it. So when the traversal reaches the last level, the expand() function would be called to expand it further. Can someone give me a clear method or algorithm to carry this idea out? Or is there another way to solve my problem?
@tucuxi No, I said that it's a 3-puzzle, due to which the space can only be along a side (because there are only 4 squares). Therefore there can only be 2 blocks adjacent to the space.
(deleted my previous comment - you're right)
Keep a set of all the different board-states. Two board-states are different if they have a different piece (blank counts as a piece) in any of the positions. You can build a string to describe a state by concatenating all the digits using a consistent order; most languages/libraries support sets of strings directly.
You should only expand() non-visited board-states. Whenever you visit a state for the first time, you should add it to the "visited states" set. Before expanding any state, check to see if it is there already.
The full algorithm (for general breadth-first, no-duplicate search) is:
place initial state into "pending" (a queue)
while "pending" is not empty,
    extract its first state, called "next"
    if "next" is not present in "visited",
        add "next" to "visited"
        if it is the goal, report success, ending the algorithm
        otherwise, add all its children at the end of "pending"
if you reach this point, there is no way to reach a goal state from a start state
How would I expand the tree? When a state has no children in the tree, should I simply pass the current state t to the expand(state t) function to create 2 new children for that node? Or would it be a better idea to expand all the nodes at the last level at once?
Provide a "parent" field to each of your states (not used in comparisons of state equality). Whenever you add a child state to the queue, specify your current state as its parent. You will be building the tree as you explore it, just as you wanted. To find the optimal path to the goal, just follow the chain of parents from the goal state (until you reach the starting node, which has no parent).
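Putting the two answers together, the whole scheme can be sketched in Python. The state encoding and neighbor function here are placeholders (any hashable board representation works), and this version marks states as visited at enqueue time — equivalent to checking on extraction, and it also keeps duplicates out of the queue:

```python
from collections import deque

def bfs(start, is_goal, neighbors):
    """Breadth-first search that builds the tree lazily while exploring it.

    `neighbors(state)` returns the states reachable in one move; the
    `parent` links let us reconstruct the optimal path at the end.
    """
    parent = {start: None}           # doubles as the "visited" set
    pending = deque([start])
    while pending:
        state = pending.popleft()
        if is_goal(state):
            path = []                # follow parent links back to the start
            while state is not None:
                path.append(state)
                state = parent[state]
            return path[::-1]
        for child in neighbors(state):
            if child not in parent:  # expand only unvisited board-states
                parent[child] = state
                pending.append(child)
    return None                      # no way to reach a goal from the start

# Toy usage: search the integers with moves +1/-1, from 0 to 3.
print(bfs(0, lambda s: s == 3, lambda s: (s - 1, s + 1)))  # [0, 1, 2, 3]
```

For the 3-puzzle, `neighbors` would return the two board-states obtained by sliding a block adjacent to the blank into it.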
| STACK_EXCHANGE |
helm: upgrade to networking.k8s.io/v1 ingress
Update Helm template files to fix the following warnings:
W0217 12:14:56.465497 1048297 warnings.go:67] extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
W0217 12:14:56.849358 1048297 warnings.go:67] extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
Perhaps also upgrade minimal Kubernetes version to 1.14 at the same time; currently we declare:
kubeVersion: ">= 1.13.0-0 < 1.21.0-0"
Also seeing whilst running reana-client run-ci:
W0802 13:22:23.153278 12606 warnings.go:67] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob
W0802 13:22:23.154223 12606 warnings.go:67] extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
W0802 13:22:23.597494 12606 warnings.go:67] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob
W0802 13:22:23.623856 12606 warnings.go:67] extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
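The schema change behind these warnings is mechanical. Sketched as plain Python dicts (the service name and port below are illustrative, but the field names follow the Kubernetes API): in networking.k8s.io/v1, pathType becomes a required field and the flat serviceName/servicePort backend becomes a nested service object.

```python
# extensions/v1beta1 HTTP path entry (deprecated, removed in Kubernetes 1.22):
old_path = {
    "path": "/",
    "backend": {"serviceName": "reana-server", "servicePort": 80},
}

# networking.k8s.io/v1 equivalent:
new_path = {
    "path": "/",
    "pathType": "Prefix",  # now a required field
    "backend": {"service": {"name": "reana-server", "port": {"number": 80}}},
}
```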
The changes may be needed in quite a few places:
$ rg v1beta
reana-workflow-controller/tests/test_views.py
1328: current_k8s_networking_v1beta1=mock.DEFAULT,
1370: current_k8s_networking_v1beta1=mock.DEFAULT,
1404: "reana_workflow_controller.k8s" ".current_k8s_networking_v1beta1"
reana-server/reana_server/status.py
410: "metrics.k8s.io", "v1beta1", "nodes"
reana-workflow-controller/reana_workflow_controller/k8s.py
21: current_k8s_networking_v1beta1,
90: api_version="networking.k8s.io/v1beta1",
307: "ingress": current_k8s_networking_v1beta1.create_namespaced_ingress,
347: "ingress": current_k8s_networking_v1beta1.delete_namespaced_ingress,
371: current_k8s_networking_v1beta1.delete_namespaced_ingress(
reana/helm/reana/templates/ingress.yaml
5:apiVersion: networking.k8s.io/v1beta1
7:apiVersion: extensions/v1beta1
reana/helm/reana/templates/cronjobs.yaml
3:apiVersion: batch/v1beta1
110:apiVersion: batch/v1beta1
reana-workflow-controller/tests/test_workflow_run_manager.py
27: current_k8s_networking_v1beta1=DEFAULT,
40: "current_k8s_networking_v1beta1"
54: current_k8s_networking_v1beta1=DEFAULT,
81: current_k8s_networking_v1beta1=DEFAULT,
93: "current_k8s_networking_v1beta1"
127: current_k8s_networking_v1beta1=DEFAULT,
reana-commons/reana_commons/k8s/api_client.py
45: if api == "extensions/v1beta1":
53: elif api == "networking.k8s.io/v1beta1":
64:current_k8s_networking_v1beta1 = LocalProxy(
65: partial(create_api_client, api="networking.k8s.io/v1beta1")
How to test this easily:
using last official Kind release (0.11.1), everything works fine
using latest Kind development version from sources installed via make build, you should see that REANA is not deployable there anymore due to the above v1beta1 and friends
MVP goal is to replace extensions/v1beta1 everywhere so that REANA would work on Kubernetes 1.22; you can develop on any Kind, but using Kind master would show the error easily
beyond MVP goal is to fix also batch/v1beta1 but this can in theory wait for Kubernetes 1.25 so we'd have time for this
beware of the if api ... elif api ... dispatch in r-commons, as seen above
the changes should be tested not only for deployment, but also for running all workflows, and for spawning interactive sessions (jupyter notebooks) which will show the effects from r-commons
In addition to ingress.yaml, we will need to upgrade traefik to version 10.0.0+ if we want to support Kubernetes 1.22+. Currently, the traefik version is 1.85.x (very old, from the obsolete Helm chart repo) and can support Kubernetes only up to 1.21 (source). If we upgrade to traefik 10.0+, we will not support Kubernetes <=1.15, which looks fine by me.
@tiborsimko do we want to keep support for networking.k8s.io/v1beta1?
> @tiborsimko do we want to keep support for networking.k8s.io/v1beta1?
We don't have to keep that support if we upgrade the declaration of the minimally-required Kubernetes version in the Helm charts...
In addition, we will need to upgrade the python-kubernetes client. The current version (11.x) doesn't have, for example, the NetworkingV1Api.delete_namespaced_ingress method (the extensions and v1beta1 APIs have it). The latest released stable version, 18.x, doesn't have it either. Only starting from version 19.x will delete_namespaced_ingress be added (source). There is an alpha version of 19.x.
What's the status of this task? Shall we try to merge it or close it? cc/ @tiborsimko
| GITHUB_ARCHIVE |
This is my response to being asked not to use functional programming techniques.
What I mean by “Functional Programming”
Different people mean different things by “Functional Programming”. Here’s what it means to me:
- Functions are first class, fundamental units of software, and proper functional programming languages allow passing functions as arguments, assigning them to variables, and composing them together to form new functions.
- Strong Static Typing prevents many common programming errors automatically.
- Together with sum types, strongly typed languages can automatically perform local totality checking, informing the programmer when a logical branch is not handled.
- Single assignment and segregation of mutable state: Mutable state is a major source of software errors. In most cases, it’s completely unnecessary. Immutable data types allow strong guarantees of thread safety and lead to cleaner, more readable code.
Static vs. Dynamic Typing & Technical Debt
Statically typed languages have a [well deserved] reputation for being more difficult to get started with. When data types are decided at compile time, the compiler stops you from creating a nonsensical program. This does mean you need to apply more forethought to your code. Dynamically typed languages will happily allow you to run an incorrect program, and then throw a runtime exception when the nonsensical logic branch is reached. The question is: when do you want to grapple with the error in your thinking? At compile time, or runtime? Development or production?
The financial analogy is apt: would you prefer to finance your business by borrowing money, then slowly pay it back with interest? Or to pay as you go? There are appropriate uses of leveraged debt. Likewise, there is an appropriate place for dynamically typed languages: one-off programs and simple prototypes that need to be rapidly developed at the expense of quality and correctness.
Use the Best Available Tools
Functional Programming leads to simpler code that’s easy to refactor and maintain. When you need to change a datatype or function signature, the compiler can automatically find all the relevant code that needs to change. Some refactoring tools even allow you to automatically perform the changes. It’s true that you can write bad code in any language, but that doesn’t mean that all languages are the same. Some languages make it hard to write good code, and others encourage good habits. The language you pick matters.
With the right choices, entire classes of common software problems can be solved. For example, null pointer dereferences can be virtually eliminated using Optional/Option/Maybe/Either data types to explicitly represent empty or error conditions.
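A minimal sketch of the pattern in Python (the function and its validation rules are invented for illustration; in Python the empty case is enforced by a type checker such as mypy rather than by a compiler):

```python
from typing import Optional

def parse_port(text: str) -> Optional[int]:
    """Return a TCP port number, or None when the input is not a valid port."""
    if text.isdigit() and 0 < int(text) <= 65535:
        return int(text)
    return None               # the error condition is explicit in the type

port = parse_port("8080")
if port is None:              # callers are forced to handle the empty case
    raise ValueError("bad port")
print(port + 1)               # safe: port is known to be an int here
```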
Conversely, if you force engineers to use inferior tools, you can expect frustration and higher maintenance costs.
Complexity and Boredom
Writing software is a battle with complexity. We solve complicated problems that reflect a complicated world. But not all complexity is the result of modeling a complex problem domain. The distinction is between “Accidental” and “Essential” complexity.
Accidental complexity results from the arbitrary choices we make when approaching a problem. We might represent a collection using an array when a set or hash-map is more appropriate. Perhaps we’re using a language that requires manual memory allocation, when a garbage-collected language would perform acceptably well. Accidental complexity can be addressed by choosing the right data structures, algorithms, and languages, if you know about them.
By contrast, “Essential” complexity is the result of grappling with the hard facts of the universe. Solving problems that emerge from the essential complexities of the world is exciting and fun. Solving problems caused by accidentally complex technical choices is tedious and boring.
Functional Programming is not a silver bullet. But it does eliminate several classes of accidental complexity that have plagued the software industry in recent decades.
Functional Programming is the Future
Traditional approaches to multi-threaded execution often turn into a nightmare of callbacks and deadlocks. However, using immutable data structures and work-stealing, many common operations like mapping can be done in parallel, with trivial changes to your existing (Functional!) code. Many aggregation & reduction operations can be done incrementally in a distributed fashion using monoidal data structures.
In some cases, significant speedups can be attained simply by switching from sequential arrays to a compatible parallel version of the same interface. The payback for Functional Programming is huge, and growing. Moore’s law is dead, and it’s not coming back. However, we’ve not nearly approached the physical limit of the number of cores we can pack into a single machine. The number of cores per CPU will probably continue to grow for decades. What’s more, massively distributed architectures are now available and commonplace, and the same techniques work there.
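For example, because a pure function over immutable data has no shared state to protect, a sequential map can be swapped for a parallel one without touching the mapped code — a sketch using Python’s standard-library executor:

```python
from concurrent.futures import ThreadPoolExecutor

def crunch(x: int) -> int:
    return x * x                 # pure: no shared mutable state

data = tuple(range(8))           # immutable input collection

sequential = list(map(crunch, data))
with ThreadPoolExecutor() as pool:
    parallel = list(pool.map(crunch, data))   # same interface, same result

assert sequential == parallel == [0, 1, 4, 9, 16, 25, 36, 49]
```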
Embracing Functional Programming will help create a culture of learning. It will attract smarter, better skilled people, and allow us to reap the rewards of simpler, cleaner, more scalable code. Rejecting it will cost more in maintenance and higher turnover. | OPCFW_CODE |
Non-deterministic 2-tape Turing Machine that recognizes palindromes in linear time
This question is taken from an exam of a Computer Theory Course.
Describe how a NON-Deterministic Turing Machine with two tapes recognize in linear time palindrome strings with even length that have the form: $L=\{ww^R\mid w\in\{a,b\}^+\}$.
Tape 1: Read-Only & monodirectional
Tape 2: Read and Write, bidirectional
$w^R$ is the reverse of $w$.
My guess:
With determinism and $L = \{wcw^R \mid w \in \{a,b\}^+\}$, I copy the input from tape 1 onto tape 2, then I check if the first part of the tape is equal to the second part using two markers, $X$ for $a$ and $Y$ for $b$, to keep track of the current iteration.
In every iteration of the algorithm I check if there is a corresponding $a$ (or $b$) in the second part of the tape, reading the tape backwards.
In the last iteration, if I read only $X$ or $Y$ I accept, otherwise reject.
With non-determinism: I need to guess where the center of the tape is. One of the configurations in the computation tree would be $wq_cw^R$, where $q_c$ identifies the state representing the string's center. Here I can do the same verification as in the deterministic version of the problem.
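The nondeterministic guess can be simulated deterministically by branching on every candidate midpoint — a Python sketch of the acceptance condition, where each value of `mid` corresponds to one path in the computation tree:

```python
def in_L(s: str) -> bool:
    """Accept s iff s = w + reverse(w) for some nonempty w over {a, b}."""
    if set(s) - {"a", "b"}:
        return False
    # One branch per guessed middle; accept if any branch accepts.
    # Mismatched half-lengths fail the comparison, so odd strings are rejected.
    return any(s[:mid] == s[mid:][::-1] for mid in range(1, len(s)))
```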
Is this exam currently ongoing, or is this a past exam that you are studying from?
It is a question given a year ago
Do you have permission from the professor to post the exam question online for the whole world to see?
Hint. Suppose that, instead, the alphabet was $\{a,b,c\}$ you wanted to recognize
$$\{wcw^{\mathrm{R}}\mid w\in\{a,b\}^+\}\,.$$
How would you do that on a deterministic version of the machine you're trying to use? Now, in the real problem, nobody's telling you where the middle of the string is. How would you use nondeterminism to get around that problem?
I made some edits in the question's body after your hint. Thanks!
Well, that's fine but we're not here to grade your attempts.
What could happen if the string is odd?
@Jack Then no computation path will match the two "halves" of the string, since they have different lengths.
Reading your comment, another question comes to mind. What if I have to recognize all palindrome strings over the whole $\Sigma^*$? I just made another question here: Limited Turing Machine for Palindromes
It makes no difference what the alphabet is. To recognize palindromes over $\Sigma^*$, invent an arbitrary new symbol $\oplus$ that's not in $\Sigma$ and consider recognizing strings $w\oplus w^{\mathrm{R}}$ over alphabet $\Sigma\cup\{\oplus\}$. Then use nondeterminism to find the middle of the string, instead of the new character.
@Jack Oops, yes. I deleted the comment and reposted a correct version. (It was too old to edit but it's confusing to have incorrect comments.)
I still don't get it. If the first tape is read-only, do I have to copy the string two times on the second tape and add the $\oplus$ symbol to it? So for example
Tape1: ABBA$\square$
Tape 2: ABBA$\oplus$ABBA
Then use non-determinism on second tape?
Let us continue this discussion in chat.
| STACK_EXCHANGE |
MS SQL: How to Assign A Table Column to a Variable
I'm trying to assign a calculation involving two table columns as the value of a variable. I keep getting the error that the multi-part identifier could not be bound. My ultimate objective is to recreate the variable in a stored procedure because I would like to use that variable in other calculations in the same stored procedure.
My query is below.
USE dbAttendanceHR
GO
DECLARE @WorkDuration AS DECIMAL
SET @WorkDuration = CAST(DATEDIFF(minute, dbo.tblAttendance.ClockInTime, dbo.tblAttendance.ClockOutTime) AS FLOAT) / 60
SELECT @WorkDuration
The error message are...
Msg 4104, Level 16, State 1, Line 5
The multi-part identifier "dbo.tblAttendance.ClockInTime" could not be bound.
Msg 4104, Level 16, State 1, Line 5
The multi-part identifier "dbo.tblAttendance.ClockOutTime" could not be bound.
Please help. Thanks.
To access columns in a table, you must SELECT the rows that contain them using a SELECT statement. And since we can expect a table to have many rows, what value do you expect from your calculation, which appears to assume a single row? Did you intend to calculate this value for every row in the table?
Hello SMor. Yes I intend to calculate this value for every row that is returned by a query.
For every matching row in the table:
USE dbAttendanceHR
GO
SELECT CAST(DATEDIFF(MINUTE, a.ClockInTime, a.ClockOutTime) AS FLOAT) / 60 AS WorkDuration
FROM dbo.tblAttendance AS a
Unless this is a small table, you likely need a WHERE clause at the end to limit rows calculated to your desired scope.
You can include the ID column to identify each row.
You may also save the result to a table variable or a temp table for further processing. That part is not clear from your question, but is simple to include.
Does this fit your requests?
USE dbAttendanceHR
GO
DECLARE @WorkDuration AS NVARCHAR(max)
SET @WorkDuration = 'SELECT ' + 'CAST(DATEDIFF(minute, dbo.tblAttendance.ClockInTime, dbo.tblAttendance.ClockOutTime) AS FLOAT) / 60'
EXECUTE sp_executesql @WorkDuration
Hi Francesco. Thanks for your response. The variable is supposed to be a decimal type. Is there a reason why you changed it to text? Recall that I intend to use this variable in other calculations in the same stored procedure.
Your code literally sets the value of the variable to the string: SELECT CAST(DATEDIFF(minute, dbo.tblAttendance.ClockInTime, dbo.tblAttendance.ClockOutTime) AS FLOAT) / 60
Yes, what data type are you using for ClockOutTime? I'm going to test it with some data on AdventureWorks so we can be on the same page.
| STACK_EXCHANGE |
What is difference between UTF 7 and UTF-8?
UTF-8 is the most commonly used encoding format, popular in Web pages and many email programs. UTF-7 provides encoding for some email protocols that won’t work with UTF-8.
What is the difference between UTF-8 and UTF 16 encoding?
The main difference between UTF-8, UTF-16, and UTF-32 character encoding is how many bytes it requires to represent a character in memory. UTF-8 uses a minimum of one byte, while UTF-16 uses a minimum of 2 bytes. There are two things, which are important to convert bytes to characters, a character set and an encoding.
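The byte counts are easy to inspect in Python, where str.encode returns the raw encoded bytes ("utf-16-be" is used below so the 2-byte byte-order mark is not counted):

```python
for ch in ["A", "é", "€", "🙂"]:
    utf8 = ch.encode("utf-8")
    utf16 = ch.encode("utf-16-be")
    print(ch, len(utf8), len(utf16))
# A  -> 1 byte  in UTF-8, 2 bytes in UTF-16
# é  -> 2 bytes in UTF-8, 2 bytes in UTF-16
# €  -> 3 bytes in UTF-8, 2 bytes in UTF-16
# 🙂 -> 4 bytes in UTF-8, 4 bytes in UTF-16 (a surrogate pair)
```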
Which text encoding should I use?
As a content author or developer, you should nowadays always choose the UTF-8 character encoding for your content or data. This Unicode encoding is a good choice because you can use a single character encoding to handle any character you are likely to need. This greatly simplifies things.
What is UTF-16 encoding?
UTF-16 is an encoding of Unicode in which each character is composed of either one or two 16-bit elements. Unicode was originally designed as a pure 16-bit encoding, aimed at representing all modern scripts. UTF-16 allows access to about 60 000 characters as single Unicode 16-bit units.
What is the purpose of UTF-7 in Unicode?
UTF-7 (7-bit Unicode Transformation Format) is an obsolete variable-length character encoding for representing Unicode text using a stream of ASCII characters. It was originally intended to provide a means of encoding Unicode text for use in Internet E-mail messages that was more efficient than the combination of UTF-8 with quoted-printable.
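Python still ships a utf-7 codec, which makes the ASCII-only property easy to check:

```python
assert "£".encode("utf-7") == b"+AKM-"   # £ (U+00A3) as base64 of its UTF-16 bytes

text = "price: £10"
encoded = text.encode("utf-7")           # non-ASCII runs become base64 between '+' and '-'
assert all(b < 128 for b in encoded)     # the whole stream is 7-bit safe
assert encoded.decode("utf-7") == text   # and it round-trips losslessly
```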
What’s the difference between UTF-8 and ASCII?
It has an additional bit, compared to ASCII’s 7 bits, which allows for an increased number of characters it can handle. Adding another bit into the mix meant that UTF-8 could allow for more characters. However, a 1-byte code in UTF-8 is the same as the ASCII character set.
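That backward compatibility can be verified directly:

```python
# Code points below 128 encode to exactly the same bytes as ASCII...
assert "Hello".encode("utf-8") == "Hello".encode("ascii")
# ...while anything beyond ASCII becomes a multi-byte UTF-8 sequence.
assert len("ü".encode("utf-8")) == 2
```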
Which is the most popular UTF-8 character set?
These UTF character sets are all referred to as encodings: they are the tool that lets a user request a character, have it travel through the computer as bytes, and come back as viewable text on the screen. The Unicode standard is implemented by encodings, of which UTF-8, UTF-16, and UTF-32 are the most popular.
What’s the difference between UTF8 and UTF-8 in Perl?
Things to remember:
1. The :utf8 encoding, and variations on it without a hyphen, is Perl's looser encoding.
2. UTF-8, in any case and with either a hyphen or an underscore, is the strict, valid encoding and gives a warning for invalid sequences.
3. Only use :encoding(UTF-8), and make its warnings fatal.
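The Perl advice above is about strict versus lax handling of invalid byte sequences. As a rough analogue (in Python rather than Perl), the same contrast shows up in decode error handlers:

```python
# 0xC3 opens a two-byte UTF-8 sequence, but 0x28 ('(') is not a valid
# continuation byte, so this is an invalid UTF-8 stream.
bad = b"\xc3\x28"

# Strict decoding (the default) rejects the invalid sequence outright...
try:
    bad.decode("utf-8")
except UnicodeDecodeError as e:
    print("rejected:", e.reason)

# ...while a lenient handler silently substitutes U+FFFD instead.
print(bad.decode("utf-8", errors="replace"))  # '�('
```

Like Perl's strict UTF-8 layer, the strict decoder surfaces corruption immediately instead of letting it propagate.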
This week's book giveaway is in the OO, Patterns, UML and Refactoring forum. We're giving away four copies of Refactoring for Software Design Smells: Managing Technical Debt and have Girish Suryanarayana, Ganesh Samarthyam & Tushar Sharma on-line! See this thread for details.
Where can i download 30 days marriage evaluation kit.
Produce,direct and act in a movie.
Joined: Dec 12, 2002
LIVING IN 2004

You know you are living in 2004 when...
1. You accidentally enter your password on the microwave.
2. You haven't played solitaire with real cards in years.
3. You have a list of 15 phone numbers to reach your family of 3.
4. You e-mail the person who works at the desk next to you.
5. Your reason for not staying in touch with friends is that they don't have e-mail addresses.
6. When you go home after a long day at work you still answer the phone in a business manner.
7. When you make phone calls from home, you accidentally dial "9" to get an outside line.
8. You've sat at the same desk for four years and worked for three different companies.
10. You learn about your redundancy on the 11 o'clock news.
11. Your boss doesn't have the ability to do your job.
12. Contractors outnumber permanent staff and are more likely to get long-service awards.

AND THE REAL CLINCHERS ARE...
13. You read this entire list, and kept nodding and smiling.
14. As you read this list, you think about forwarding it to your "friends."
15. You got this email from a friend that never talks to you anymore, except to send you jokes from the net.
16. You are too busy to notice there was no #9
17. You actually scrolled back up to check that there wasn't a #9

AND NOW U R LAUGHING at yourself!!
author and deputy
Joined: Jul 13, 2001
Originally posted by sunitha raghu: AND NOW U R LAUGHING at yourself!!
At a recent computer software engineering course in the US, the participants were given an awkward question to answer: "If you had just boarded an airliner and discovered that your team of programmers had been responsible for the flight control software, how many of you would immediately get off the plane?" Among the ensuing forest of raised hands only one man sat motionless. When asked what he would do, he replied that he would be quite content to stay aboard. With his team's software, he said, the plane was unlikely to even taxi as far as the runway, let alone takeoff.
Joined: Dec 12, 2002
================================ SINGLE BLACK FEMALE seeks male companionship, ethnicity unimportant. I'm a very good looking girl who LOVES to play. I love long walks in the woods, riding in your pickup truck, hunting, camping and fishing trips, cozy winter nights lying by the fire. Candlelight dinners will have me eating out of your hand. I'll be at the front door when you get home from work, wearing only what nature gave me. Call (404) 875-6420 and ask for Daisy. ==================== Over 15,000 men found themselves talking to the Atlanta Humane Society about an 8-week old black Labrador retriever.
- Join Date
- Dec 2010
can't activate raid0 array after kernel upgrade
I've been trying everything I can to solve this problem, but so far no luck.
I am a total newb at this though, only been using linux as a main OS for a few weeks.
I'm running Arch Linux, kernel 2.6.36 (it was working fine on 2.6.33).
My raid array is on the onboard intel controller (Gigabyte EX58-UD3R mobo). It also has a GSATA controller which I have tried as well but it's the same problem with that one.
It's a 2 disk raid0 array and I use a ~300GB partition of it as a system drive for my Windows 7 installation.
I can boot from the array and windows works flawlessly on it, but as soon as I boot into linux it gets all messed up. I can't activate with dmraid or mdadm and when I reboot from linux the array FAILS in the raid bios showing only one drive as a part of the array. (both drives are detected though, but only one of them shows "raid-active").
It starts to work again after cold-boot though, which is pretty weird to me.
It's always the same disk that fails too.
I included some info from my kernel log and fdisk
the disks appear like this
Disk /dev/sdd: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xb876c595

Disk /dev/sdd doesn't contain a valid partition table

Disk /dev/sde: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x1c7fba32

   Device Boot      Start         End      Blocks   Id  System
/dev/sde1   *        2048      206847      102400    7  HPFS/NTFS
/dev/sde2          206848   552962047   276377600    7  HPFS/NTFS
/dev/sde3       552962048  3907033087  1677035520    7  HPFS/NTFS
Dec 6 05:04:54 behemoth kernel: sde: sde1 sde2 sde3
Dec 6 05:04:54 behemoth kernel: sde: p3 size 3354071040 extends beyond EOD, enabling native capacity
Dec 6 05:04:54 behemoth kernel: ata9: hard resetting link
Dec 6 05:04:54 behemoth kernel: sd 8:0:0:0: [sde] 1953525168 512-byte logical blocks: (1.00 TB/931 GiB)
Dec 6 05:04:54 behemoth kernel: sde: detected capacity change from 1000203804160 to 1000204886016
Dec 6 05:04:54 behemoth kernel: sde: sde1 sde2 sde3
Dec 6 05:04:54 behemoth kernel: sde: p3 size 3354071040 extends beyond EOD, truncated
Dec 6 05:04:54 behemoth kernel: sd 8:0:0:0: [sde] Attached SCSI disk
now I added this to my mdadm.conf, not sure if it's right though.
ARRAY /dev/md0 devices=/dev/sdd2,/dev/sde2
I get the following output
mdadm: /dev/sde2 has no superblock - assembly aborted
But I don't know how I can fix it.
If someone could point me in the right direction or even explain some of those things in the kernel log it would great.
Thanks in advance
Have you added that disk before with:
mdadm --add /dev/md0 /dev/sde2
But I never tried mdadm before I started having problems, I always used dmraid.
Just did "dmraid -ay" and it was automatically activated and the partitions were created as /dev/dm-2 and /dev/dm-3 so I could mount them.
If I try that now I get:
/dev/sdd: "jmicron" and "isw" formats discovered (using isw)!
ERROR: isw: wrong number of devices in RAID set "isw_ceijhcahbb_TERARAOD" [1/2] on /dev/sdd
RAID set "isw_ceijhcahbb_TERARAOD" was not activated
ERROR: device "isw_ceijhcahbb_TERARAOD" could not be found
sudo mdadm --create /dev/md0 -l0 -n2 -c128 /dev/sde2 /dev/sdd2
mdadm: super1.x cannot open /dev/sdd2: No such file or directory
mdadm: /dev/sdd2 is not suitable for this array.
mdadm: create aborted
am I doing it wrong?
But like I said in the first post, as soon as I boot into linux, the second disk in the array is deactivated in the raid BIOS so I'm certain there is some underlying problem here.
I don't know, I'm gonna get some sleep and hack around some more in the "morning" :P
edit: stupid me, I seem to have permanently borked the array somehow. Thank god for backups...
I created a new one, and now mdadm seems to be automatically creating a device on boot called /dev/md127, 1717.3 GB in size and with no partition table.
Last edited by raginaot; 12-06-2010 at 08:26 AM.
Indeed, you use the fake hardware RAID of your motherboard. That's always delicate.
To rebuild your array you should go with dmraid -R. Something like this:
dmraid -R isw_ceijhcahbb_TERARAOD /dev/sde2
But I'm not an expert here at all so I won't be able to give you further advice if that doesn't work.
I'm not entirely sure what I did, but I started by permanently ****ing up the array with some mdadm command, then I deleted it in bios, then I had to delete all the partitions in another windows machine cause the table was so majorly fubar linux thought one of the drives was several PB in size :S
Then I created a new array, a few reboots, banged my head on the keyboard for a while and sent Linus Torvalds an angry letter and somehow it's working now.
Cheers for trying to help, manko. Appreciate it.
atinkerersnotebook: 50 More Tips And Tricks For Dynamics AX (Tips & Tricks Volume #3) Now Available On Amazon
About a month ago I reached a milestone on my Dynamics AX Tip Of the Day blog site (www.dynamicsaxtipoftheday.com) and realized that I had compiled another 50 tips that I can share with everyone as a 3rd tip compendium. Over the past couple of weeks I have managed to sit down and organize them and am happy to say that this weekend I finished the job, and now the 3rd volume within the Dynamics AX Tips & Tricks series is now available on Amazon.
I was a little worried about this one because it is over 700 pages of tips and step-by-step instructions with screenshots that I have painstakingly compiled for you all. If you haven't seen these books before, these tips are designed for those of us (myself included) who need to see how something is done rather than just read instructions on how to do it.
Included in this compendium are the following tips & tricks for you all and hopefully one or two of these will make you even more of a Rock Star when it comes to Dynamics AX.
DESKTOP CLIENT TRICKS
SYSTEM ADMINISTRATION TIPS
If you want to check out the book on Amazon, then here are the links:
Remember: If you don’t have a physical Kindle then there is a desktop version, and a client for all of the tablets that allows you to read all of the kindle books. If you haven’t seen them before then here is a link: http://www.amazon.com/gp/feature.htm...cId=1000493771
Amazon Not Fast Enough? If you are in a region that is not sourced very well by Amazon, and you need a copy now then drop me a note and I can arrange for you to get the electronic copy other ways (as long as you promise just to share the knowledge, and not the file).
Also, don’t forget that there are two other Tips & Tricks volumes that you must have if you like this one. Here are links to their pages so that you can see the wealth of tips that are available:
Tell us about new and interesting Microsoft Dynamics blogs: send a private message to the administrator.
Similar threads:
- atinkerersnotebook: Another 50 Tips & Tricks Dynamics AX 2012 Is Available (Blog bot, DAX Blogs, 20.03.2014)
- atinkerersnotebook: Convergence 2014 Presentation Sneak Peek: 50 Tips And Tricks For Dynamics AX (Blog bot, DAX Blogs, 20.02.2014)
- axnontechnical: Ideaca Dynamics AX Tips and Tricks - Turning off unwanted functionality in AX 2012 (Blog bot, DAX Blogs, 01.11.2012)
- Solutions Monkey: Microsoft Dynamics AX 2009 Enterprise Portal / Role Centers - Deployment Tips-n-Tricks - 2 (Blog bot, DAX Blogs, 30.09.2008)
The Zcash Foundation is pleased to announce the release of the first stable, audited version of Zebra, Zebra 1.0.0.
What Zebra Does and Why it Matters
Zebra, the first Zcash node to be written entirely in Rust, can be used to join the Zcash peer-to-peer network, validate and broadcast transactions, and maintain the Zcash blockchain state in a more distributed manner. This alternative node implementation has been written from the ground up and avoids any technical baggage from the Bitcoin legacy code. Diversity in node implementations and platforms helps to strengthen the resilience of the network against targeted attacks affecting a particular codebase, programming language or operating system.
Building Critical Privacy Infrastructure in Rust: Modern, Memory-Safe
Zebra is developed in Rust, which is a memory-safe language, and thus less likely to be affected by memory-safety security bugs that could compromise the environment where it is run. Rust emphasizes performance and concurrency and these are qualities we have aimed to achieve in Zebra as well.
A New Paradigm for Privacy Infrastructure Development
Unlike zcashd, which originated as a Bitcoin Core fork and inherited its monolithic architecture, Zebra has a modular, library-first design, with the intent that each component can be independently reused outside of the zebrad full node. For instance, the zebra-network crate containing the network stack can also be used to implement anonymous transaction relay, network crawlers, or other functionality, without requiring a full node. With Zebra, we have redesigned our network layer to be fully compatible with Zcash and at the same time easier to maintain, more secure, and have better performance.
While this is a major milestone for Zebra, it’s just the beginning! There is a lot more to do before Zebra can fulfill all the use cases that the zcashd node is currently used for. We want to hear from the Zcash community regarding which use cases and functionality they would like to see prioritized for integration into Zebra. Please share your feedback and ideas on the Zcash Community Forum.
We would like to thank the following current and past ZF team members for their contributions to Zebra. Without their work and support we would not have reached this milestone today:
Josh Cincinnati, Antonie Hodge, Deirdre Connolly, Teor, Pili Guerra, Alfredo Garcia, Marek Bielik, Conrado P. L. Gouvêa, Gustavo Valverde, Arya Solhi, George Tankersley, Henry de Valence, Jane Lusby, Janito Vaqueiro Ferreira Filho, Fungai Matambanadzo
We’d also like to thank the ECC team for their contributions to Zebra in the form of both code and code reviews, as well as the advice and support they’ve provided to the ZF team, and those community members who have contributed in various ways, including code, fixes, and typo corrections.