I have been asked to give a lecture at the University of Houston Law school about PHRs and HIPAA.
I originally hooked up with the law program there because they publish interesting things on the collision of Open Source and Healthcare IT law, an issue that I care about. Now I am being invited to talk about PHRs, HIPAA and other interesting things at a law class. When I was a student I loved it when a speaker brought notes, so that I could focus not on the information content of what he was saying but on the validity of his arguments. Apparently (ironically really) I am qualified to talk about things that I blog about, so I wanted to point out some of the medico-legal topics I have covered in my various posts, in some kind of formal way. I hope this benefits others.
But first, I must invoke that wonderful acronym of amateurism: IANAL. I am not a legal expert at all, no matter how smart I sound. This is OK because I am much less concerned with how the law does work than with how the law should work. I think of the law as “applied moral philosophy”, which means that I can ignore lots of the legal issues, especially when the law is stooopid. When you think the law in a given area is stoopid, like our copyright law (at least Colbert knows), then you respond with licenses that make some kind of sense, like the GPL or Creative Commons. I am not really an expert in these licenses either, but I am shocked at how often legal experts totally trash the concepts that our community was trying to protect when we wrote these licenses. For instance, I have heard file-sharing compared to Creative Commons and Open Source as similarly respectful of copyright.
I care about Free and Open Source licenses in Healthcare IT. I also care about user agreements and PHR privacy statements. So let's dive right in.
First when everyone else was in an uproar about Google Health and Healthvault not being covered by HIPAA, I came to their defence. PHR systems should not be covered by HIPAA and that is a good thing.
I wrote an article on the difficulty of designing software around healthcare privacy laws.
I have written a pretty snarky little post on the definitions of the terms PHR/EHR/EMR. I do not have much to say about that except that these terms are still abused by people who sell stuff. It's much more important to consider a feature set when defining a term like EHR.
As I prepare for this lecture I wish I had written more on the “Robots attack” problem, where average people have unreasonable fears about technology, but I have talked some about how we focus on the wrong class of problems with regard to security threats.
I have not yet talked much about the evils of health IT patents. But I should.
So hopefully, taking a look at all of this, I should be able to come up with a good talk. | OPCFW_CODE |
Position will be located remotely, or hybrid if close to any one of our offices.
Software Developers serve as members of the software development team responsible for building high-quality, innovative applications that create a seamless software experience for our customers. A Software Developer’s role primarily involves building software by writing code, as well as modifying software to fix errors, adapt it to new hardware, improve its performance, or upgrade interfaces. They participate in the full development lifecycle and collaborate cross-functionally with the product, quality assurance and customer success teams to achieve simple, elegant solutions.
The Software Developer level II will be an experienced developer on the team with a minimum of 3-5 years’ experience developing high-performing products. At this level, developers are expected to grow in their technical proficiency and understanding of the problem domain in order to move to the next level.
Univerus is an international software organization providing mission critical solutions to its customers across many public and private sector industries. As our operations expand, we are looking for motivated and qualified people who want to work for a fast-growing, exciting company. Do you want to be part of a growing company that provides many paths of opportunity and learning? If so, Univerus is the company you have been looking for.
Univerus offers a generous vacation and personal leave program, comprehensive health benefits that start on day one, flexible work options and a great environment in which to learn and grow, both personally and professionally.
Univerus is an equal opportunity employer.
Directly accountable to Manager, Software Development for carrying out all responsibilities as assigned. Serves as the primary contact for:
·Application software development
·Reports to Manager, Software Development
·Directs and participates in programming activities including researching, designing, implementing and managing software programs
·Participates in sprint planning, estimation and review
·Writes and implements efficient, testable and well documented code
·Integrates software components into a fully functional software system
·Creates unit tests to ensure class level code correctness
·Provides ongoing maintenance, support and enhancements in existing systems and platforms.
·Works closely with other developers, UX designers, business and systems analysts
·Incorporates a proactive approach to problem-solving as well as a detailed understanding of coding
·Works to deliver software using Agile processes
·Develops technical documentation to guide future software development projects
·Provides recommendations for continuous improvement.
·Maintains standards compliance
·Updates job knowledge by studying state-of-the-art development tools, programming techniques and computing equipment; participating in educational opportunities; reading professional publications; maintaining personal networks; participating in professional organizations
·Works alongside other engineers on the team to elevate technology and consistently apply best practices
·Protects operations by keeping information confidential
·Provides information by collecting, analyzing and summarizing development and service issues
·Utilizes Jira to document and estimate requests, ask questions and track work
·Recommends and implements cost-saving initiatives for the organization
·Reports on the software development process and recommends enhancements to it
·Works interactively with all staff
·Demonstrates leadership qualities to all levels of the organization
·Promotes an atmosphere of trust and respect
·Creates and promotes a corporate value system
·Provides feedback to development on each new software release
·Reviews software quality and its availability for client release
·Recommends changes in product requirements as necessary
·Delegates responsibility to staff as the organization grows
·Reports and escalates to management as needed
·Ensures code quality throughout the development cycle
·Assists in improving operational performance
·Eliminates duplication of effort
·Maintains a cloud-based backup of all application source code
Required Knowledge, Skills and Abilities:
Required: Minimum of 3 years working with C#
Preferable: Working knowledge of React
·Knowledge of the software development life-cycle.
·Working knowledge of Agile and Waterfall methodologies
·Must be a full-stack developer and understand concepts of software engineering.
·Ability to develop unit tests of code components or complete applications.
·Capable of delivering on multiple competing priorities with little supervision.
·Excellent verbal and written communication skills.
·Experience with test-driven development and automated testing frameworks.
·Experience with Scrum/Agile development methodologies.
·A methodical approach to planning and organization
·Able to exercise independent judgment and act accordingly
·Excellent analytical, mathematical, and creative problem-solving skills
·Logical and efficient, with keen attention to detail
·Highly self-motivated and directed
·Experience working in a team-oriented, collaborative environment
·Strong logic and critical thinking ability; experience and creativity in troubleshooting data and software problems
·Ability to share knowledge and work in a strong team-oriented environment
·Strong working knowledge of Microsoft Office and Smartsheet
Education & Experience:
·Bachelor’s Degree in computer science or equivalent experience.
·Proven work experience as a Software Developer
·A minimum of 3 years’ experience developing software applications in an enterprise environment
·Excellent knowledge of relational databases, SQL and ORM technologies
·Experience developing web applications using at least one popular web framework | OPCFW_CODE |
In the US, does an investigator always present the evidence of guilt of a suspect, no matter what?
In the US criminal justice system, must the investigator always provide the evidence that the particular suspect is the one who indeed perpetrated the crime, no matter what investigation procedures and assets were used (access to a particular computer system or software, access to encryption keys, IP logs, etc.)?
I ask such goofy questions because the US has so many interesting laws that make no sense to me, and this is the reason why I ask about some of the basics which should make sense in any justice system.
I'm not sure I understand the question. It's up to the prosecution to prove, beyond a reasonable doubt, that the accused committed the crime that they are accused of, which they must do by presenting evidence in court (unless the defendant pleads guilty, which is what usually happens). That evidence could take a wide variety of forms. But if they don't present any at all, then the defendant will surely be able to argue that there is at least a reasonable doubt that the crime was committed by them.
So typically, in the U.S., the investigators (police) will give the case to a prosecutor, who will review the case and the evidence and decide whether to present the case to a judge and, if so, what to charge. The prosecutor is the ultimate authority on how to present the case to the court. The investigator(s) will likely be called to testify where their expertise and the actions that led to certain evidence are required. If there is an expert in some type of evidence, that person may testify to the conclusions of their findings. For example, if the coroner's report concluded that someone was the victim of a homicide, the coroner who wrote the report will be called in to testify to those findings; but since the coroner did not have access to the detective's witnesses, they wouldn't talk about evidence that came from eye-witness statements.
If you're basing your question on TV or film depictions of courtrooms, keep in mind that many courtroom scenes are not accurate, and the investigator presenting all of the case could be a result of a limited cast, budget, or time for specialists to present the case. Some shows explicitly separate the roles: Law and Order and its spin-offs are one-hour shows, with half an hour being a police procedural and the second half being a legal drama. CSI and its spin-offs would frequently bundle the CSI field workers as investigators for storytelling purposes; in the original show, most of the cast were not detectives and did not interrogate witnesses. Jim Brass was specifically the detective who effected the arrests, often having the CSIs explain the science. The show also never got to the trial phase, and in some cases what crime, if any, was committed is never stated, as the story focused on getting to "how and why something happened", not the legal ramifications. In one episode, Grissom explicitly tells someone he doesn't have enough evidence to prove the fraud in criminal court... but the insurance companies will see his report, and it's enough that they would be able to refuse to pay.
In shows that are not explicitly legal procedurals, it might be that the cast of characters is not sufficient to cast all the witnesses and/or the characters are not in professions where they would know how to properly run a court. For example, in one episode of The Brady Bunch, the Brady kids, all of them high school age or less, hold an ad hoc trial with Alice (the family's maid) as the judge. The case descends into childish bickering, and Alice calls a recess when she realizes that her roast is burning.
| STACK_EXCHANGE |
Capture all errors in a PowerShell script and email them
I have a script with many sections such as below that runs nightly. I would like to get it to email any/all errors so I can be alerted and review them. I'm having trouble with the first step, which is to capture all/any errors... I assume to a file which I could email, or capturing to some kind of buffer that I could then email would be even nicer. Any help with both steps would be appreciated - especially the capturing part.
#---- Set Exchange archive license for all users with an Office license ----
Get-MsolUser -All |
    Where-Object {($_.Licenses.AccountSkuId -contains "Tennant:STANDARDWOFFPACK") -and
                  ($_.Licenses.AccountSkuId -notcontains "Tennant:EXCHANGEARCHIVE_ADDON")} |
    Set-MsolUserLicense -AddLicenses "Tennant:EXCHANGEARCHIVE_ADDON"

#-------------------------- ENABLE LITIGATION HOLD ----------------------
Get-Mailbox -ResultSize Unlimited -Filter {RecipientTypeDetails -eq "UserMailbox"} |
    Set-Mailbox -LitigationHoldEnabled $true -LitigationHoldDuration 2555
Have you checked Start-Transcript for the capture part?
Error messages should automatically be captured in the $error variable; use $error[0] for the latest message.
You could then use that as the body for your email in conjunction with the Send-MailMessage Cmdlet
example:
$body = "";
foreach ($e in $error) {
$body += "<hr /><pre>" + $e.ToString() + "</pre><hr />";
}
Send-MailMessage -BodyAsHtml -Body $body -SmtpServer "smtp_server_address" -From <EMAIL_ADDRESS> -To <EMAIL_ADDRESS> -Subject "PowerShell Error Report"
Cool. Thank you.
The logging results show only part of the error, like "licenses exceeded", but not other details like the command used, "Set-MsolUserLicense". Is there a way to get more of the output so I have a better idea of which part of the script may be failing?
The $error variable does capture every possible thing that has gone wrong, but its default output only lists part of the error data. To get all the $error data, do this: $Error | Format-List -Force. Even still, you really need to enable script logging as noted above. Errors will be in several places: $Error, the PoSH transcript, and the Windows Event Logs - Application, Security and Windows PowerShell (which is a node under Applications and Services Logs).
The $error variable is just a collection of objects. My example only outputs a simple message because I used .ToString(). You can grab more detailed information from specific properties or, as @postanote noted, simply output them all in a list format.
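For instance, each entry in $error is an ErrorRecord, so you can pull the failing command text and line number out of its properties. A minimal sketch (the properties are standard ErrorRecord members; the report layout is just illustrative):
$body = "";
foreach ($e in $error) {
    $body += "<pre>";
    $body += "Message: " + $e.Exception.Message + "`n";
    $body += "Command: " + $e.InvocationInfo.Line + "`n";
    $body += "Line:    " + $e.InvocationInfo.ScriptLineNumber + "`n";
    $body += "</pre><hr />";
}
For a failing Set-MsolUserLicense call, that would show both the "licenses exceeded" message and the pipeline that produced it.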
In your code, you need to trap errors in order to log them; you can even write to your own event log...
https://blogs.technet.microsoft.com/heyscriptingguy/2013/02/01/use-powershell-to-create-and-to-use-a-new-event-log
https://blogs.technet.microsoft.com/heyscriptingguy/2013/06/20/how-to-use-powershell-to-write-to-event-logs
... or write your own log function.
Example:
https://gallery.technet.microsoft.com/scriptcenter/Write-Log-PowerShell-999c32d0
Or, start with using PowerShell logging.
Example:
Enable logging in Group Policy
https://gallery.technet.microsoft.com/scriptcenter/Write-Log-PowerShell-999c32d0
Use PowerShell transcript
Example:
https://technet.microsoft.com/en-us/library/ff687007.aspx?f=255&MSPPError=-2147217396
Then have those stored in a central share that you can pull into an email as an attachment.
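Putting the pieces together, a minimal sketch that wraps the nightly script in a transcript and mails the transcript afterwards (the log path, SMTP server and addresses are placeholders):
$log = "C:\Logs\nightly-$(Get-Date -Format yyyyMMdd).log"
Start-Transcript -Path $log -Append
# ... existing licensing and litigation hold sections go here ...
Stop-Transcript
if ($error.Count -gt 0) {
    Send-MailMessage -SmtpServer "smtp_server_address" -From "from_address" -To "to_address" `
        -Subject "Nightly script errors" -Body "See attached transcript." -Attachments $log
}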
| STACK_EXCHANGE |
It’s really important to know your main R data types so you can check what kind of values you’re working with when modeling data, or when casting values to a certain data type. We'll discuss how to check numeric data types, from integers to floating-point numbers, negative and positive numbers, as well as character/string and logical data types.
What You'll Learn
> Basics of numeric and character data types and how to cast to a different data type
If you haven’t installed R and RStudio already, you can watch the "Getting started with Python and R for Data Science" video to get started.
For the dataset used in this exercise, download from here.
When working with tabular datasets, it’s good to know the data types of your atomic vectors, or your variables or columns. The good thing about R, unlike most other languages, is that R will automatically infer the data types of columns when reading a data set in, so you don’t need to manually tell R the data type for every single column. But it’s really important to know your main data types so you can check what kind of values you’re working with when modeling data or when casting to a certain data type.
So, let’s first look at your numeric types, or numbers, that measure things in your data. You can get a whole number, or an integer. You can get a number with a fraction, or a floating-point number. A floating-point number can have more than one digit before or after the decimal point. Something to note when printing the result of a floating-point number: R usually rounds this to five places after the decimal point, so it won’t just print an infinitely long set of numbers after the decimal point. You can also have negative numbers, and the same basically applies. R classes all these kinds of numbers as one data type called numeric. If you use the class function here and you give it any of the numbers that we’ve mentioned, you’ll see that it’s classed as numeric.
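For example, all of these print "numeric" when run in the R console (the literals here are arbitrary):
class(42)       # whole number -> "numeric"
class(3.14159)  # floating-point number -> "numeric"
class(-7.5)     # negative number -> "numeric"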
Another common data type you’re going to see in many data sets is character. This could be a single character, or it could be a string of characters. It could also be a number represented as a string. All these are classed as character in R, so if we use the class function again and give it any of these strings, you’ll see that it is classed as character. You can use double quotation marks so that words with apostrophes are not incorrectly interpreted as single quotation marks around a string - for example, “won’t”, which uses an apostrophe. Another way you can write this is to use the escape backslash so the apostrophe is read as a literal apostrophe and not a quotation mark around a string.
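For instance (the example strings are just illustrative):
class("hello")  # "character"
class("42")     # a number written as a string is still "character"
"won't"         # double quotes keep the apostrophe as a literal character
'won\'t'        # the same string, using the escape backslash inside single quotes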
The data type character can be used for text strings or unique names of things; otherwise they can be cast as factor levels, or categories. Later in the video series, we’ll discuss the R object factor. Logical, or boolean, data types are also common, where there’s a true or false value for the presence of something or not. For example, a variable on benign cancer might show some people's values as true, in having this cancer, and some people's as false, in not having this cancer. So, if we input TRUE or FALSE into our class function here, you’ll see that it properly classes it as a logical data type, and that’s it.
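As a quick check:
class(TRUE)   # "logical"
class(FALSE)  # "logical"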
You now have an understanding of the main data types you’re likely to get in your data sets. In the next video, we’re going to cover variables.
Rebecca Merrett - Rebecca holds a bachelor’s degree in information and media from the University of Technology Sydney and a postgraduate diploma in mathematics and statistics from the University of Southern Queensland. She has a background in technical writing for games dev and has written for tech publications.
| OPCFW_CODE |
Double spacing at the start of a sentence
So I recently found out this is a thing that people argue over... Some people were taught to type with a double space at the start of a new sentence, and others were taught to use just a single space. The double space thing is supposedly a carry-over from people using typewriters to mimic print styles that went out of fashion in the 50s.
I reckon I'm on the losing side as I still do it.
But on this forum my spaces are edited down to single spaces automatically. So you'd never know which side someone was on! Is there any reason the controversy is not as inane as I think it is?
Seriously tho, I can't think of any reason to do this. Then again, I also can't think of any reason to add "ugh" at the end of that word in casual use either, so I'm probably not the go-to guy.
It's a force of habit now, so I still do it, but I sincerely couldn't care less if anyone else does nor if websites automatically remove the extraneous space (most do just by default, you actually have to ADD whitespace-preserving code to your HTML form handlers to keep it.)
Edit: Ahhh wow, this fucking forum denies me my right to two space........... At least I have ellipses...
Imagine someone who doesn't care about driving never using their turn signal. They say "who the fuck cares?" - they don't care about driving, they just do it to get to work and back.
Well, you may not care about typography, but you are using it right now as you type. If you are going to engage in a practice, and you know better, you have no excuse to fuck it up when the results of your work will be inflicted on others.
Typography is one of those things that matters a fuckton, but is completely invisible to people who don't know about it.
Funny part is I've never even heard of this issue before this, I assumed everyone was double spacing after sentences. Even in this forum. Until I realized just now that it was auto-correcting that.
It does appear that on some cars directionals are an optional feature.
That article kinda shoots the turn signal argument down. It's aesthetics :-p So it's more about washing your car than it is using your turn signal...
Also the article points out some English teachers taught two spaces, which would explain why I've never had an issue before.
I was taught the single space was wrong in school in the late nineties. They should have made us aware of the controversy and let us make our own minds up rather than drilling it into us.
Edit: This wasn't in an English class. We learnt this in IT!
Thank you, Scott. :-)
"Every modern typographer agrees on the one-space rule. It's one of the canonical rules of the profession, in the same way that waiters know that the salad fork goes to the left of the dinner fork and fashion designers know to put men's shirt buttons on the right and women's on the left. Every major style guide—including the Modern Language Association Style Manual and the Chicago Manual of Style—prescribes a single space after a period."
I care about things that actually have an effect on readability and don't just tweak a highly specialized expert's eye.
This sort of obsessive goal displacement seems on par with the fashion show industry that has become about everything BUT what people actually wear.
If an author/editor doesn't re-quote (or whatever the industry specific term is) a block of dialog when a new paragraph starts, that bugs the shit out of me, but two spaces after a period is not going to hamper my absorption of the material.
The point is that if you are the kind of person who will refuse to follow the standard and spit in the face of as respectable and important an art/science as typography, it shows what kind of person you are. If you didn't know you were wrong, you should be happy that you have become enlightened, and should take the opportunity to improve yourself by following the rule.
To refuse to follow the standard, now that you can no longer claim ignorance, is the height of arrogance. You are knowingly and intentionally making your writing worse on purpose. Studies or not, you are not a typographer. You do not know better than "every modern typographer". To try to act like you do, or refuse to follow because you don't like it, is an anti-intellectual attitude. Perhaps the most reprehensible of attitudes. | OPCFW_CODE |
XPath not working
Trying to pull some data from the TfL status feed -
<LineStatus ID="6" StatusDetails="Minor delays between Acton Town and Heathrow/Uxbridge due to an earlier faulty train at Holborn. GOOD SERVICE on the rest of the line.">
<BranchDisruptions>
<BranchDisruption>
<StationTo ID="244" Name="Uxbridge"/>
<StationFrom ID="1" Name="Acton Town"/>
<Status ID="MD" CssClass="GoodService" Description="Minor Delays" IsActive="true">
<StatusType ID="1" Description="Line"/>
</Status>
</BranchDisruption>
<BranchDisruption>
<StationTo ID="284" Name="Heathrow Terminal 5"/>
<StationFrom ID="1" Name="Acton Town"/>
<Status ID="MD" CssClass="GoodService" Description="Minor Delays" IsActive="true">
<StatusType ID="1" Description="Line"/>
</Status>
</BranchDisruption>
</BranchDisruptions>
Simply using xpath=//@Name works to pull the data out of the Name attribute in our signage software, but I don't want the node to show up if it's under the <BranchDisruption> parent. I have tried lots of combinations along the lines of,
xpath=//@Name[not(ancestor::BranchDisruption)]
but no luck. Does anyone have any suggestions?
There is no element with attribute Name under the parent
Write exactly what elements you want to select from the example
Hi splash58, sorry, reading that back I left some info out. Where you have Name="Uxbridge", this is working for me (the code pulls the 'Uxbridge' out), but I don't want it to work if it's under the tag (there was a lot more code on the page I copied this from).
Do you not want to select the 1 item after ?
Yes. There's a lot of other code on the page where this works; it's only when under that I don't want it to work. Thank you.
It is still unclear what you are asking. Show a complete, minimal, well-formed example of the XML input document and show (instead of describing) which nodes you would like to select from the document.
Maybe you need //*[position()>1]/@Name
@SimonCB One question: Are you sure that elements which have Name attributes, like <StationTo ID="244" Name="Uxbridge"/>, are always immediate children of BranchDisruption?
Thanks everyone. Svasa's answer below does the job.
Based on the assumption that the elements which have Name attributes are always immediate children of BranchDisruption, I see the following XPath works:
//@Name/parent::*/parent::*[not(name()='BranchDisruption')]/*/@Name
Test the XPath on the XML below:
<?xml version="1.0" encoding="UTF-8"?>
<LineStatus ID="6" StatusDetails="Minor delays between Acton Town and Heathrow/Uxbridge due to an earlier faulty train at Holborn. GOOD SERVICE on the rest of the line.">
<BranchDisruptions>
<BranchDisruption>
<StationTo ID="244" Name="Uxbridge"/>
<StationFrom ID="1" Name="Acton Town"/>
<Status ID="MD" CssClass="GoodService" Description="Minor Delays" IsActive="true">
<StatusType ID="1" Description="Line"/>
</Status>
</BranchDisruption>
<BranchDisruption>
<StationTo ID="284" Name="Heathrow Terminal 5"/>
<StationFrom ID="1" Name="Acton Town"/>
<Status ID="MD" CssClass="GoodService" Description="Minor Delays" IsActive="true">
<StatusType ID="1" Description="Line"/>
</Status>
</BranchDisruption>
<Non-BranchDisruption>
<StationTo ID="284" Name="Heathrow Terminal 6"/>
<StationFrom ID="1" Name="Acton Town 7"/>
<Status ID="MD" CssClass="GoodService" Description="Minor Delays" IsActive="true">
<StatusType ID="1" Description="Line"/>
</Status>
</Non-BranchDisruption>
</BranchDisruptions>
</LineStatus>
It will give:
Attribute='Name="Heathrow Terminal 6"'
Attribute='Name="Acton Town 7"'
Amazing. Thank you for your help. This has worked. Much appreciated.
| STACK_EXCHANGE |
[Android] isRTLForced StrictMode violation on cold app start
[x] Review the documentation: https://facebook.github.io/react-native
[x] Search for existing issues: https://github.com/facebook/react-native/issues
[x] Use the latest React Native release: https://github.com/facebook/react-native/releases
Environment
OS: macOS 10.14
Node: 8.12.0
Yarn: 1.10.1
npm: 6.4.1
Watchman: 4.9.0
Xcode: Xcode 10.0 Build version 10A255
Android Studio: 3.2 AI-181.55<IP_ADDRESS>14246
Packages: (wanted => installed)
react: 16.6.1 => 16.6.1
react-native: 0.57.5 => 0.57.5
Description
We're getting this violation every time we cold start our app
StrictMode: StrictMode policy violation; ~duration=4094 ms: android.os.StrictMode$StrictModeDiskReadViolation: policy=65599 violation=2
at android.os.StrictMode$AndroidBlockGuardPolicy.onReadFromDisk(StrictMode.java:1440)
at java.io.UnixFileSystem.checkAccess(UnixFileSystem.java:251)
at java.io.File.exists(File.java:807)
at android.app.ContextImpl.ensurePrivateDirExists(ContextImpl.java:572)
at android.app.ContextImpl.ensurePrivateDirExists(ContextImpl.java:563)
at android.app.ContextImpl.getPreferencesDir(ContextImpl.java:519)
at android.app.ContextImpl.getSharedPreferencesPath(ContextImpl.java:714)
at android.app.ContextImpl.getSharedPreferences(ContextImpl.java:368)
at android.content.ContextWrapper.getSharedPreferences(ContextWrapper.java:167)
at android.content.ContextWrapper.getSharedPreferences(ContextWrapper.java:167)
at com.facebook.react.modules.i18nmanager.I18nUtil.isPrefSet(I18nUtil.java:97)
at com.facebook.react.modules.i18nmanager.I18nUtil.isRTLForced(I18nUtil.java:81)
at com.facebook.react.modules.i18nmanager.I18nUtil.isRTL(I18nUtil.java:48)
at com.facebook.react.uimanager.UIImplementation.createRootShadowNode(UIImplementation.java:122)
at com.facebook.react.uimanager.UIImplementation.registerRootView(UIImplementation.java:200)
at com.facebook.react.uimanager.UIManagerModule.addRootView(UIManagerModule.java:311)
at com.facebook.react.ReactInstanceManager.attachRootViewToInstance(ReactInstanceManager.java:1037)
at com.facebook.react.ReactInstanceManager.attachRootView(ReactInstanceManager.java:752)
at com.facebook.react.ReactRootView.attachToReactInstanceManager(ReactRootView.java:444)
at com.facebook.react.ReactRootView.startReactApplication(ReactRootView.java:300)
I'm not entirely sure why it takes that long to access the SharedPreferences instance; it might be due to a big SharedPrefs file. Would it be possible to somehow move it off the main thread?
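In case it helps others hitting this: until the read is moved off the main thread in React Native itself, one stopgap is to carve out an explicit exemption around the call that triggers it, rather than loosening the whole policy. A sketch of that idea ("MyApp" is a placeholder component name; whether the exemption is acceptable is a judgment call):
StrictMode.ThreadPolicy oldPolicy = StrictMode.allowThreadDiskReads();
try {
    // this is the call that triggers I18nUtil.isRTLForced() -> SharedPreferences read
    mReactRootView.startReactApplication(mReactInstanceManager, "MyApp", null);
} finally {
    StrictMode.setThreadPolicy(oldPolicy);
}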
Thanks!
Hey @rom4ek 👋
Thanks for taking the time to create this issue. This issue doesn't have a repro (which means, a react-native init-ed project with the minimal changes that leads to creating the same issue you are reporting). Unfortunately, I have no way of helping you in a meaningful way – there is no easy way for me to recreate the situation and check that the issue reported is still there when changing the code.
I'm not sure why this prefs file would take that long to read and as far as I can tell I don't think it's a file size issue; looking at the source this is a limited prefs file for a small number of items.
Because of this, we are going to close this issue - but if a repro is shared, we are happy to reopen it 🤗
| GITHUB_ARCHIVE |
It's Diverge then Converge for Innovation
The mind reels. Literally my mind reeled, and spun so much I was almost physically sick. There are so many things wrong with this communication and expectation setting that I barely know where to begin, but let's pick a couple, shall we, so you don't encounter the same fate.
Narrowing the Aperture
While I understand that executives sometimes need to have things condensed in communication, since they are busy and bombarded with emails, innovation is one of those topics that deserves a more robust discussion and should not be simplified. If we are going to do something new and interesting, we need to explain exactly what our goals are and how interesting and disruptive the work will be. Do you think Columbus asked to sail west by simply saying he'd "go another way"? No! He had to work to convince Isabella and Ferdinand of his ideas and his intent. He had to paint a picture of what he was certain would happen, in the face of constant ridicule.
When we shortchange the conversation with executives about innovation, when we limit the scope and downplay the impact, we set expectations far too low. This makes it hard to get anyone excited about the possibilities, and difficult to ask for more funds or resources to do good work. By narrowing the conversation, we've limited our possibilities and outcomes, or we have to work to recover by holding more conversations with the executives to broaden our definitions and their expectations. It's never easy to expand expectations and definitions once they've been set or narrowed.
Using synonyms for the real thing
I hate the fact that idea generation or brainstorming is in some cases a synonym for innovation. These words are too simplistic and are in fact nested: brainstorming is one way to generate ideas, and generating ideas is one step in an innovation process. To equate innovation with brainstorming is similar to equating getting an A in one collegiate course with getting a degree. Yes, it is a step, and an important one, but simply one step in a more holistic process.
When we describe "innovation" as idea generation, or equate brainstorming with innovation, we cheapen the words and reduce the possibilities and potential experience and outcomes. Those who follow this blog know that innovation involves defining challenging issues or emerging opportunities, researching trends to discover future needs and market conditions, researching and understanding unmet needs, generating ideas or discovering new technologies and, finally, selecting the best solutions to achieve your goals. Talking about idea generation or using brainstorming as a synonym for innovation is far too simplistic, and narrows the work and expectation, and the amount of time people are willing to invest or commit resources.
Short term actions versus long term change
Finally, this short communication is meant to reassure an executive that what we are about to do can be done quickly, with only short-term ramifications. Sorry, but you can't build innovation capabilities or competencies in the short run and sustain them. Innovation, like any other skill or capability, must be regularly exercised or we lose those skills. Equating innovation with a one-time brainstorming effort condenses the work and creates the expectation of a discrete, one-time event, not the development of skills and knowledge that will be consistently deployed over time. You can't "do" innovation once and do it well. You need to build skills and constantly exercise those skills, for two reasons. First, without consistent innovation your firm falls behind competitors. You simply can't innovate occasionally and win. Second, your team loses faith in innovation as a toolset, and the skills atrophy if they aren't used regularly. If you want sudden, rapid and occasional innovation, use a consultant. If you want to build a sustained capability, train the teams, deploy the teams on innovation activities regularly and set the right expectations.
Communication and Commitment
Look, I know it's hard to ask for the level of commitment required to do innovation well. Most firms have invested so much in their status quo systems and processes, and have so little extra bandwidth that innovation is going to cause disruptions internally if it is done well. And no one wants to disrupt the status quo - they want to add a bit more innovation on top of a perfectly functioning organization. You can't layer on innovation without building skills, consistently innovating over time, and that takes time and commitment.
You'll be much better off setting the broadest possible expectation early through good communication and by building the rationale for innovation. Setting the scope or expectation too low early means either incremental innovation at best, which will not satisfy and will be quickly terminated when it doesn't deliver, or having to ask for more funds and resources after the work starts, which is never welcome. Ask for everything up front, and accept that you'll need to start small. Demonstrate the broad scope and opportunity early so that executives understand the potential depth and breadth of innovation; once you've established the potential scope, it will be OK to start out with a smaller scope to prove the value and build your skills and capabilities toward the larger scope you've identified. But please, be careful with your language, set the right expectations and don't limit your scope too early.
If you understand innovation at all, you'll know the concepts of divergence and convergence. It's much better to start with divergence, and then converge as necessary, than to converge at the beginning. Once you've done that, it's exceptionally difficult to diverge afterwards. | OPCFW_CODE |
In this guide, we'll go through the installation of the LMS365 app in Microsoft Teams, provide steps of how to find the LMS365 bot in Microsoft Teams Store, how to add and pin the LMS365 app to the Microsoft Teams bar, and what actions to take when the LMS365 bot doesn't respond or you get an error message.
Add the LMS365 app to Microsoft Teams
The Microsoft 365 global admin and Teams admin have access to the Manage apps page in the Microsoft Teams admin center. Here, they can set up app permission policies, app setup policies, and custom app policies and settings to configure the app experience for users and choose which apps will be installed by default for users when they start Teams. See Microsoft's documentation for more information.
To add the LMS365 app to Microsoft Teams, you first need to have a working LMS365 installation, then follow these steps:
1. In Microsoft Teams, select the Apps icon at the bottom of the left side navigation. In the search field enter LMS365.
2. Select the app and choose Add.
If you want to add the LMS365 app to a specific team or chat, select Add to a team or Add to a chat. This action will also add the LMS365 app to the navigation bar of Microsoft Teams.
Customize the LMS365 app for Microsoft Teams
In the Microsoft Admin Center, the Microsoft Teams or Microsoft 365 global admin can customize the Short name, Short Description, and Full description of the LMS365 app for Microsoft Teams.
In this way, you can name the app in alignment with the name you've chosen for your learning system and guide users with an app description of your choice.
To customize the LMS365 app:
1. In the Microsoft Teams admin center, select Manage apps > LMS365.
2. On the opened side bar, under the Details section, complete the fields for which you want to add customization: Short name, Short description, or Full description.
3. To save the customized changes, select Apply. To cancel the action, select Cancel.
Allow users to communicate with external users in the LMS365 app in Microsoft Teams
To enable users in your organization to communicate with Microsoft Teams users outside of the organization, the option External access settings should be set up for this in the Microsoft Teams admin center.
To do this, go to the Microsoft 365 admin center > scroll the menu on the left to Admin centers > select Teams > you'll be brought to the Microsoft 365 admin center/Dashboard > on the left-side navigation select Users > External access > enable the toggles to allow users to communicate with other Teams and Skype users.
For more information see Microsoft's documentation on how to manage external access in Microsoft Teams.
Pin the LMS365 app to the Teams app navigation bar
As a Microsoft Teams admin or Microsoft 365 global admin, you can customize the view of Microsoft Teams in your organization and set policies to pin the LMS365 app to the taskbar for all of your users automatically.
To pin the LMS365 app, navigate to the Microsoft Teams admin center and go to Teams apps > Setup policies.
- Select Global (Org-wide default).
- Turn on User pinning.
- Under Pinned apps, select Add apps.
- In the Add pinned apps pane, search for the apps you want to add, and then select Add. You can also filter apps by app permission policy.
- Under the App bar, arrange the apps in the order that you want them to appear in Teams.
Once all steps are done, users will see the icon of LMS365 in the app navigation bar of Microsoft Teams, and the LMS365 app for Microsoft Teams will be available to users.
"An administrator has set a policy..." message
Users may see the message "An administrator has set a policy that prevents you from granting LMS365 API the permissions it is requesting. Contact an administrator who can grant permissions to this application on your behalf" when adding the LMS365 bot.
That means that the option Allow user consent for apps is disabled in Microsoft Azure. To enable consent, log in to the Microsoft Azure Portal, and then go to Azure Active Directory > Enterprise Applications > Consent and permissions and select the option Allow user consent for apps.
LMS365 isn't responding
A small number of teams are seeing LMS365 not responding. The likely cause is the following: your Office 365 admin has disabled bots for Microsoft Teams. Contact your Office 365 admin to get the issue resolved.
| OPCFW_CODE |
It turns out that leaving debugging code in your project can sometimes be a really great thing. I was going to video the axis mechanism and see if I could spot anything odd happening that might explain the drift, when suddenly during moving it into position I got a couple of these though the serial port:
[rogue axis 1 ]

This was the warning I put in to check if the servo was overshooting at all (axis 1 is the X axis). It turns out it does overshoot, and my limited earlier tests didn't pick it up because it doesn't happen that often.
I got all excited and put code into the Arduino sketch to track the movement past when it is supposed to be moving, and adjust the internally recorded position accordingly. Then I tried to cut another shape, and there was still drift.
Once I got over the disappointment I decided to record the axis anyway to see what happens in more detail when it drifts. I marked the side of the linkage with a black dot so that I could easily spot when it went out of alignment. Then I set the video recorder going and had it move in tiny circles (2.4mm or 2 complete turns in diameter) over and over again until I spotted the black dot lose alignment. Then I stepped through the frames trying to count along the steps taken and see what happened:
There is this interesting little bit where it backtracks as if it thinks it has overshot by one (when it hasn't). The problem is though that the count on the left and the count on the right should match up and they don't. Even if the extra tick from thinking it has overshot is added it ends up one short. Without it ends up two short (which is the actual physical amount it ends up off by).
The debugging trace for the red bit looks normal, but the blue bit looks like this:
--curved line--- from x: -24 y: 0 z: 0 to x: 0 y: 0 z: 0 center x: -12 y: 0 z: 0 steps 34 c1 -132 c0 -100 c0 -117 c1 -173 c1 -129 c0 -98 c0 -113 c0 -155 c1 -152 c1 -112 c0 -75 c1 -143 c0 -62 c0 -120 c1 -75 c0 -93 c0 -83 c1 -107 c1 -60 c1 -103 c1 -111 c1 -90 c1 -124 c0 [rogue axis 1 ] -82 c1 [rogue axis 1 ] -171 c0 -167 c0 -167 c0 -185 c0 -139 c0 -128 c1 -131 c0 -124 c0 -101 c1 -107 ENDED AT: 0 y: 0 z: 0 DONE

Which is interesting because there are two points where it thought that the servo had overshot (the ones that say "[rogue axis 1]"). I only see one spot where it backtracks on the video though. If the backtrack was the first point we see in the data, the second would be during the backtrack, which would put the count off by two. If however the first point it overshoots doesn't backtrack and the second does, that makes the count correct, so it is the only theory that makes sense to me unless there is a software bug somewhere.
What is clear from all this though is that the switch is detecting extra presses where none are happening. I think my next step is going to be programming in a way to measure more precisely the interval between switch clicks on a specific axis, and when they happen relative to servo commands, so I can get a better idea exactly what is happening.
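The rough idea, as a minimal Arduino sketch (the pin number, baud rate and wiring are placeholders; the real code will hook into the existing axis handling):
// hypothetical interval logger for the axis 1 switch
const int SWITCH_PIN = 2;
unsigned long lastChangeUs = 0;
int lastState = HIGH;

void setup() {
  pinMode(SWITCH_PIN, INPUT_PULLUP);
  Serial.begin(115200);
}

void loop() {
  int state = digitalRead(SWITCH_PIN);
  if (state != lastState) {
    unsigned long now = micros();
    Serial.print("switch interval us: ");
    Serial.println(now - lastChangeUs);  // suspiciously short gaps would point at contact bounce
    lastChangeUs = now;
    lastState = state;
  }
}
| OPCFW_CODE |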
Deep learning is a rapidly growing technology that has revolutionized the machine learning industry. Using deep learning, powerful predictive models can be created from large data sets. KNIME is a powerful platform for data analysis and machine learning, and recently has begun to provide native support for deep learning. With this support, users can now build powerful deep learning models with ease. This article provides an overview of deep learning with KNIME, exploring the benefits and features it offers for machine learning practitioners.
Introduction to Deep Learning with KNIME
Deep learning is a subset of machine learning, which focuses on using algorithms to learn from data. It is based on artificial neural networks and has been shown to be effective in tasks such as image classification and natural language processing. With KNIME, users can now create powerful deep learning models with ease. KNIME provides support for popular deep learning frameworks, including Keras and TensorFlow, as well as its own deep learning nodes. In addition, KNIME also offers an intuitive graphical user interface (GUI) to help users quickly get started with deep learning.
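For a sense of what sits underneath those nodes, a KNIME Keras workflow ultimately assembles an ordinary Keras model, conceptually similar to this minimal standalone sketch (written directly in Python/Keras rather than in KNIME; the layer sizes are arbitrary):
from tensorflow import keras

# a tiny classifier of the kind a chain of Keras layer nodes would define
model = keras.Sequential([
    keras.layers.Dense(64, activation="relu", input_shape=(20,)),
    keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])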
KNIME’s deep learning nodes come with pre-trained models, which can be used to quickly build models for applications such as image segmentation, object detection and natural language processing. Additionally, users can also design their own deep learning models for any desired task. Finally, KNIME’s GUI provides users with a variety of tools to visualize the deep learning models and the datasets used to train them.
Exploring the Benefits of Deep Learning with KNIME
KNIME’s deep learning capabilities provide several benefits for data scientists and machine learning practitioners. Firstly, KNIME’s deep learning nodes come with pre-trained models, meaning that users can quickly get started with deep learning without having to code complex models from scratch. Additionally, KNIME also allows users to design their own deep learning models, giving them more flexibility and control over the models they create.
Furthermore, KNIME’s deep learning nodes provide users with an intuitive interface to visualize the models they create. This helps users to quickly identify patterns in the data and understand how their models are performing. Additionally, KNIME also provides users with a number of tools to evaluate the performance of their deep learning models, helping to ensure that they are accurately predicting the desired outcomes.
Finally, KNIME’s deep learning capabilities allow users to easily deploy their models into production. This enables users to quickly put their models into use, generating valuable insights from their data.
In conclusion, KNIME offers a powerful and intuitive platform for deep learning. With its pre-trained models, graphical user interface and deployment capabilities, KNIME provides users with an easy way to quickly get started with deep learning and create powerful predictive models from their data. With KNIME, machine learning practitioners of all skill levels can quickly and easily build deep learning models and gain valuable insights from their data. | OPCFW_CODE |
feat: Flatten icons
Related to #1687.
What is the purpose of this pull request?
[ ] New Icon
[ ] Bug fix
[x] New Feature
[ ] Documentation update
[ ] Other:
Description
This PR modifies the package build scripts to flatten the exported SVG icons.
While source icons are made up of several shapes (<line>, <rect>, <circle>, etc.), all the shapes now get converted to <path> and merged together into a single <path> element during the build.
As mentioned in #1687, this has two benefits:
Overlaps are no longer visible when using semi-transparent colors.
The size of the build output is decreased by ~13%.
Implementation
We use the Convert Shape to Path and Merge Paths SVGO plugins to convert shapes to paths and merge the paths together. Currently SVGO cannot optimize rectangles with rounded corners (svg/svgo#1963), so this PR includes a custom plugin to do that.
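For reference, the SVGO side of the pipeline boils down to roughly this configuration (a sketch assuming current SVGO plugin names; the custom rounded-rectangle plugin mentioned above is registered separately, and the exact parameters in the build scripts may differ):
// svgo.config.js (sketch)
module.exports = {
  plugins: [
    // convert <line>/<rect>/<circle> etc. to <path>; convertArcs also handles circles
    { name: 'convertShapeToPath', params: { convertArcs: true } },
    // merge the sibling <path> elements into a single <path>
    'mergePaths',
  ],
};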
Impact on icon size
In most cases, converting shapes to <path> reduces the icon's size, especially for icons made up of many shapes (e.g. sliders-horizontal.svg). In some cases, we observe a small increase in size (e.g. layout-grid.svg), because circles and rounded rectangles are more compact than paths. Overall, the output size is reduced by ~13%.
Impact on the build time
One would assume that processing each icon for every build target would be bad for the build time. However, I haven't observed any significant increase of the average build time on my machine (it actually went slightly down).
Impact on icon appearance
Icons should appear the same. However there are two noticeable differences.
When using a semi-transparent stroke color, the icon now has a consistent appearance instead of showing overlaps.
Edges are a little bit smoother in places where several shapes converge. This isn't visible to the naked eye.
Before Submitting
[x] I've read the Contribution Guidelines.
[x] I've checked if there was an existing PR that solves the same issue.
This is pretty breaking changish.
Any chance this one will be merged/published before 1.0?
@wellguimaraes: Not sure if we should do this before v1.0, as @jguddas has stated this could potentially break a lot of apps that currently rely on the unflattened paths (e.g. to animate or fill only certain parts of icons). 🤔
Don't close.
Idea
Merge all the paths that are overlapping into a single element, keep everything else that is not connected separate.
That way we solve the transparency issue, but the user can still color/fill different sections of an icon.
We also have to be careful that we don't break people by changing how we fill icons.
If we were to keep the plus and circle separate and sort the segments by bounding box big -> small it would continue to work.
While detecting and merging overlapping shapes would be a nice compromise, I have two objections.
How doable is it?
Detecting overlapping shapes is non-trivial. SVGO won't do it for us, so that would require quite a bit of work. If someone is motivated to work on the feature, that's great. But I have neither the time, the inclination, nor the competence to do it myself.
Does it serve a real purpose?
Yes, exposing the SVG icon as a list of distinct shapes may allow users to implement custom behaviors such as applying different colors. But are there people currently doing that? I'd be interested to hear from them.
IMHO, trying to treat certain parts of icons differently already means taking the risk of a breaking change anytime you update the package, regardless of the build process.
In order to apply specific colors to specific shapes, you need to know the order of the shapes in the SVG, which is not guaranteed to remain the same. In fact, of the 22 icons that have been modified in the past three weeks:
9 have reordered shapes (calendar-minus, calendar-search, eye-off, image-plus, map-pin-off, replace, replace-all, square-check-big, view)
4 have merged paths together or moved lines/arcs from one path to another (cloud-download, dog, ribbon, skull).
Anytime an icon is updated, it may break any assumption one may have about it other than "it should still look mostly the same (assuming there wasn't an issue in the original icon that we had to fix)". So, as long as updating the build doesn't significantly alter an icon's appearance, I don't think it is more of a breaking change than any other update. The scope may be larger but the impact on individual icons is the same.
How doable is it?
I can whip something up for that, we already have custom sorting and optimization logic.
Does it serve a real purpose?
I need some idiomatic way to sort things for my custom optimization code to work properly i.e. we already have and need sorting.
The current code is less than perfect (see #1938).
Anytime an icon is updated, it may break…
I really want to actually support this.
My proposal would be to merge touching paths and sort from smallest to biggest bounding box.
I think I was a little bit too pessimistic and spoke too fast.
I can whip something up for that, we already have custom sorting and optimization logic.
I had a quick look at paper and path-intersection and it seems like it shouldn't be too complicated to use either package to detect path intersections. I initially thought we'd have to convert stroke paths to shapes. However, if we can assume that two paths either touch/intersect or are far enough to not overlap regardless of the stroke width (as per the design principles), we can simply treat paths as width-less.
My proposal would be to merge touching paths and sort from smallest to biggest bounding box.
I probably need to reconsider how the optimizations are run, then.
My idea was to run the optimizations during the build process, because it removes the need to update individual icon files, and icon designer would remain free to organize their shapes however they want.
However, I understand that you want to enforce a canonical order in the pre-commit optimizations, in order to keep the icon structure stable, prevent filling larger shapes from hiding smaller ones, and allow users to make some assumptions about the icons.
Merging shapes after they've been sorted may go against some of those goals. Maybe I should keep the optimizations out of the build process and put the optimization pass in the pre-commit optimizations.
Suggestion
Revert all the changes (in particular the change to run optimizations during the build process) and add the optimization pass that you suggested: For each shape, test whether it intersects any other shape. If it does, convert both to <path> (if needed) and merge them together. If it doesn't, keep the shape as it is (no conversion to <path>).
Cons:
The icon files have to be updated.
The icon files get harder to read for humans.
More constraints for the icon designer.
Pros:
Solves #2136 while modifying icons as little as possible.
Easier to implement specific treatment for specific cases if needed.
Canonical order is preserved.
No surprise. What you see is what you get.
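The detection step itself could be as small as this sketch with the path-intersection package mentioned earlier (assuming its intersect(d1, d2) API; shapes that touch without crossing may additionally need a small tolerance check):
// sketch: do two path data strings cross each other?
import intersect from 'path-intersection';

const touches = (d1, d2) => intersect(d1, d2).length > 0;

touches('M0 0 L10 10', 'M0 10 L10 0'); // -> true, they cross at (5,5)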
Running into this issue now, why was this closed?
| GITHUB_ARCHIVE |
Many-to-many relationships in a SQL schema: join types, keys, and a generic example
When you model a schema, you map relationships between your own relevant entities. (Hibernate tip: there are recommended ways to remove entities from a many-to-many association.) Not every relationship is many-to-many: for instance, a Country can have only one UN Representative, which is a one-to-one relationship. A schema that duplicates such data across tables introduces redundancy, which is why the commonly used approach is to factor shared data out. Packages such as SQLAlchemy also provide expression functions, like desc and sqlalchemy.sql.functions.func, for querying these models.
Multivalued attributes can have more than one value. (In Doctrine, a ManyToMany relationship is modeled with Collections.) Relational databases express how many rows on one side relate to rows on the other; this is the relationship's cardinality. A one-to-many relationship has high cardinality on only one side of the relationship: it is represented in a GraphQL schema by two types, where the source type holds a list of the target type, and in SQL by adding an Author_ID column to the Books table so that each book points back to its author. (See also Airtable's guide to many-to-many relationships.)
Ask the modeling questions before you commit to a design: are groups and company codes separate concepts, or one? In ER notation, the participating end of a relationship line is drawn with markers such as a solid circle. If the problem is keeping duplicated data in sync, you can look into a Firebase Function database trigger to solve it.
An ERD (entity-relationship diagram) is another name for the schema picture that shows these relationship types. In some situations, a row in one table has many related rows in a second table, and each of those rows relates back to many rows in the first: a many-to-many relationship. Writing many-to-many search queries in MySQL is then a matter of joining through the table that connects the two sides, as shown below.
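A sketch of such a query, assuming illustrative tables students and courses connected by a junction table enrollments (the same layout spelled out later on):
SELECT s.name, c.title
FROM students AS s
JOIN enrollments AS e ON e.student_id = s.student_id
JOIN courses AS c ON c.course_id = e.course_id
WHERE c.title = 'Databases';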
Converting E-R Diagrams to Relational Model.
The one catch: a many-to-many relationship always has to be defined in two parts
The granularity of the bridge table might be, for example, the day. (Sisense's documentation covers many-to-many relationships in that dimensional context.) In a document database, a secondary index or embedding might serve the same purpose, but in SQL the usual tool is a bridge table, with additional fields added to it as needed. This type of table usually connects foreign ids to each other.
Diagramming symbols and relationship notation
Repeating a descriptive field alongside every ID will still produce a lot of redundant data; in Airtable, avoiding that is as easy as linking two tables together. In your own custom coding, say an employees document, the same thing happens when you assign a many-to-many relationship in the schema. In the third and final article of the series, Shel Burkow demonstrates several designs that cover many-to-many relationships and attribute closure. In a one-to-one relationship, embedding is the preferred way to model the relationship, as it's more efficient to retrieve a single document; one-to-many (1:N) relationships involve a similar trade-off.
You can use to many relationship
However, one class is consisting of multiple students. Of course this kind of result is usually not useful. What is blocking those null when using an er model. An order can be able to suit a list price and types and design called an example, an employee table. The key to resolve mn relationships is to separate the two entities and create two one-to-many 1n relationships between them with a third intersect entity The. The documentation is often too dodgy, our site will have many users, it does matter. Relation schema of sales and customers tables with line connecting but what. Following SQL statements will be emitted on Python console.
But not changed, the relationship between more of embedding logic
We examine how data model as you need products tables. Saves disk space by eliminating redundant data. This attribute or more than five teams it would not have duplications in database, or go ahead and not. Separate dimension table relationship to analysis services by a comma instead, the students table and adapts to understand exactly what levels. We will now consider relationships of different kinds between these entities. This one set in sql to schema into the decisions made during physical database? Database Relationships.
After testing the sql to many relationship
Remember is whether a sql schema, sql databases work? This opens up your schema can be derived attributes. How many tables does a many to many relationship? We can i said to use methods of result will develop programming and to many relationship into one. Version IV above: Is there such a case that student A works as TA for student B in one course while student B works as a TA for student A in another course? Before delving into designing a schema lets look at the properties that makes. The map it is double diamond shapes to growth in sql schema you give examples. Every table has a primary key column that uniquely identifies the data in the table. Some attributes exist at the intersection of other indirectly related attributes. Probably the schema for these two tables would be like. Now explore these tables interact, sql to many relationship?
Would not mandatory nature of sql hints, sql schema but there are times, so you made multiple entries for.
MySQL Workbench Manual 9141 Adding Foreign MySQL. It difficult for ordering a sql schema but not. In this function, but it is made up of two attributes. The erd diagram for this means structuring data as more listings for instructions, this relationship diagram above is difficult for each form. These relationships can be modeled and conceptualized like traditional attributes but like facts they exist at the intersection of multiple attribute levels Many. Create a many to many relationship using just SQL Setup SqlAlchemy so that it. An error has occurred.
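As a minimal sketch (table and column names are illustrative only), the classic junction-table pattern in plain SQL:

-- Two entity tables plus a junction table for the m:n relationship.
CREATE TABLE students (
    student_id INTEGER PRIMARY KEY,
    name       TEXT NOT NULL
);

CREATE TABLE classes (
    class_id INTEGER PRIMARY KEY,
    title    TEXT NOT NULL
);

-- The intersect entity: one row per (student, class) pair.
CREATE TABLE enrollments (
    student_id INTEGER NOT NULL REFERENCES students (student_id),
    class_id   INTEGER NOT NULL REFERENCES classes (class_id),
    PRIMARY KEY (student_id, class_id)   -- prevents duplicate pairs
);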
Database Schema and Relationship Types. | OPCFW_CODE |
Are jurors informed about jury nullification?
Are jurors in US criminal trials informed, by the judge or by any official source, that they can rule contrary to the law as given by the judge (aka "jury nullification")? What happens when juries do "nullify"?
In general they are not told. In fact, I am not aware of any jurisdiction where they are told by the judge officially. Judges will normally charge a jury that they must accept the law as stated by the judge, and ignore any other source of the law, whether they like it or not. But the Judge has no way to enforce such a charge.
According to the Wikipedia article
The 1895 decision in Sparf v. United States, written by Justice John Marshall Harlan held that a trial judge has no responsibility to inform the jury of the right to nullify laws. It was a 5–4 decision. This decision, often cited, has led to a common practice by United States judges to penalize anyone who attempts to present legal argument to jurors and to declare a mistrial if such argument has been presented to them. In some states, jurors are likely to be struck from the panel during voir dire if they will not agree to accept as correct the rulings and instructions of the law as provided by the judge.
A 1969 Fourth Circuit decision, U.S. v. Moylan, affirmed the power of jury nullification, but also upheld the power of the court to refuse to permit an instruction to the jury to this effect.
We recognize, as appellants urge, the undisputed power of the jury to acquit, even if its verdict is contrary to the law as given by the judge, and contrary to the evidence. This is a power that must exist as long as we adhere to the general verdict in criminal cases, for the courts cannot search the minds of the jurors to find the basis upon which they judge. If the jury feels that the law under which the defendant is accused, is unjust, or that exigent circumstances justified the actions of the accused, or for any reason which appeals to their logic or passion, the jury has the power to acquit, and the courts must abide by that decision.
Nevertheless, in upholding the refusal to permit the jury to be so instructed, the Court held that:
…by clearly stating to the jury that they may disregard the law, telling them that they may decide according to their prejudices or consciences (for there is no check to ensure that the judgment is based upon conscience rather than prejudice), we would indeed be negating the rule of law in favor of the rule of lawlessness. This should not be allowed.
It is not so much that jury nullification is a right of the jury, as that there is very little right for the prosecutor or judge to inquire into why the jury acted however it did. If there is a suspicion that the jury was bribed, or influenced by prohibited communications, that can be looked into. But otherwise a jury is like an oracle, its actions have no specified reason or justification, they are whatever they are.
The judge (or an appeals court) can set aside a jury verdict on the grounds that no rational jury could find in a particular way -- this is mostly used to overturn convictions based on insufficient evidence. But a jury has almost total freedom to believe or disbelieve any witnesses, so if it disbelieves, it could acquit, regardless of whether it rejects the law under which charges are brought. So there is no way to tell if a particular verdict was based on nullification, or on disbelief of the witnesses, or some other possible ground, without asking the members of the jury about what happened during deliberations, or why they acted as they did.
In any case, there is no provision -- that I know of -- to set aside a jury verdict on the grounds that it was an instance of nullification, so inquiring into whether it was would be of little point.
This attitude toward jury verdicts goes back to the very early origins of trial by jury, when it was a replacement for trial by ordeal. The ordeal had been considered a way of asking God to decide the issue, and there was no way to ask God to clarify the decision. When it was replaced by jury trial, no way to ask for clarification was considered possible there either -- the jury was said to voice the decision of the community at large: the formal term for jury trial was "to be tried by the country". See C. Rembar's The Law of the Land and H. C. Lea's The Duel and the Oath for more on this history.
This article reports on recent cases where juries have refused to convict in Marijuana cases.
| STACK_EXCHANGE |
Open Information Extraction (OpenIE) aims to discover textual facts from a given sentence. In essence, the facts contained in plain text are unordered. However, popular OpenIE systems usually output facts sequentially, predicting the next fact conditioned on the previously decoded ones, which enforces an unnecessary order on the facts and introduces error accumulation between autoregressive steps. To break this bottleneck, we propose MacroIE, a novel non-autoregressive framework for OpenIE. MacroIE first constructs a fact graph based on the table filling scheme, in which each node denotes a fact element, and an edge links two nodes that belong to the same fact. Then OpenIE can be reformulated as a non-parametric process of finding maximal cliques from the graph. It directly outputs the final set of facts in one go, thus getting rid of the burden of predicting fact order, as well as the error propagation between facts. Experiments conducted on two benchmark datasets show that our proposed model significantly outperforms current state-of-the-art methods, beating the previous systems by as much as a 5.7-point absolute gain in F1 score.
Named entity recognition (NER) remains challenging when entity mentions can be discontinuous. Existing methods break the recognition process into several sequential steps. In training, they predict conditioned on the golden intermediate results, while at inference relying on the model output of the previous steps, which introduces exposure bias. To solve this problem, we first construct a segment graph for each sentence, in which each node denotes a segment (a continuous entity on its own, or a part of discontinuous entities), and an edge links two nodes that belong to the same entity. The nodes and edges can be generated respectively in one stage with a grid tagging scheme and learned jointly using a novel architecture named Mac. Then discontinuous NER can be reformulated as a non-parametric process of discovering maximal cliques in the graph and concatenating the spans in each clique. Experiments on three benchmarks show that our method outperforms the state-of-the-art (SOTA) results, with up to 3.5 percentage points improvement on F1, and achieves 5x speedup over the SOTA model.
Extracting entities and relations from unstructured text has attracted increasing attention in recent years but remains challenging, due to the intrinsic difficulty in identifying overlapping relations with shared entities. Prior works show that joint learning can result in a noticeable performance gain. However, they usually involve sequential interrelated steps and suffer from the problem of exposure bias. At training time, they predict with the ground truth conditions while at inference it has to make extraction from scratch. This discrepancy leads to error accumulation. To mitigate the issue, we propose in this paper a one-stage joint extraction model, namely, TPLinker, which is capable of discovering overlapping relations sharing one or both entities while being immune from the exposure bias. TPLinker formulates joint extraction as a token pair linking problem and introduces a novel handshaking tagging scheme that aligns the boundary tokens of entity pairs under each relation type. Experiment results show that TPLinker performs significantly better on overlapping and multiple relation extraction, and achieves state-of-the-art performance on two public datasets.
Automatic essay scoring (AES) is the task of assigning grades to essays without human interference. Existing systems for AES are typically trained to predict the score of each single essay at a time without considering the rating schema. In order to address this issue, we propose a reinforcement learning framework for essay scoring that incorporates quadratic weighted kappa as guidance to optimize the scoring system. Experiment results on benchmark datasets show the effectiveness of our framework. | OPCFW_CODE |
From my courses you will immediately discover how I blend my real-life experience and academic background in physics and mathematics to provide professional step-by-step coaching in the area of data science.
Abstractly, a file is a collection of bytes stored on a secondary storage device, which is generally a disk of some type. The collection of bytes may be interpreted, for example, as characters, words, lines, paragraphs, and pages of a textual document; fields and records belonging to a database; or pixels of a graphical image. The meaning attached to a particular file is determined entirely by the data structures and operations used by a program to process the file.
Next, a default completion is selected immediately. Here is what this means for the programmer's thought process:
One of the all-time most popular programming models is the spreadsheet. A spreadsheet is the dual of a conventional programming language -- a language shows all the code, but hides the data.
The current transform matrix is a particularly critical and confusing member of the state. Drawing anything interesting with the Processing graphics library requires matrix transforms, but the current transform is invisible.
The other alternative is to show the state. In the following example, the current fill and stroke colors are shown above the canvas. Now, when a line of code changes the fill color, the programmer actually sees something change. Making something visible makes it real.
I was recently watching an artist friend start a painting, and I asked him what a particular shape on the canvas was going to be. He said that he wasn't sure yet; he was just "pushing paint around on the canvas", reacting to and getting inspired by the forms that emerged.
However, I have always proposed adjustable numbers in a context where the adjuster already understands the meaning of the number. As mentioned earlier, I am very uncomfortable with the Khan Academy approach of encouraging learners to adjust unlabeled numbers and figure out what they're for, and I believe that this is a case of a tool being adopted without an understanding of what purpose the tool serves.
November 24, 2008, martinnitram: please, I need help in two or three days regarding this C problem: create a simple user authentication system using the C language. The system must let you register usernames into a file and then check whether a user is valid. Remember that to check the validity of a user, you must first enter the username of that user, and the system checks whether this user exists in the file that stores the users.
In computer programming, an assignment statement sets and/or re-sets the value stored in the storage location(s) denoted by a variable name; in other words, it copies a value into the variable.
In the example above, the house is now abstracted -- the code doesn't just draw one fixed house, but can draw a house anywhere. This abstracted code can now be used to draw many different houses.
However, it raised concerns that its conclusions may have been influenced by "signs of publication bias among published studies on pair programming". It concluded that "pair programming is not uniformly beneficial or effective".
This control allows the programmer to move around the loop at her own pace, and understand what is happening at each step.
The C programming language imposes no structure on the file, and it can be read from, or written to, in any manner chosen by the programmer.
Audio chat programs or VoIP software can be helpful when the screen-sharing software does not provide two-way audio capability. Use of headsets keeps the programmers' hands free.
To sum up, I am absolutely and completely passionate about both data science and forex trading, and I am looking forward to sharing my enthusiasm and knowledge with you!
The canonical work on designing programming systems for learning, and perhaps the greatest book ever written on learning in general, is Seymour Papert's "Mindstorms".
As you can see, live coding, on its own, is not especially useful. The programmer still must type at least a full line of code before seeing any effect.
Try the following example using the "Try it" option available at the top right corner of the following sample code box:
"This is a right triangle. I want a different triangle." She adjusts the triangle's points into a more roof-like shape.
She then moves to the general case by turning those constants into variables. Here is an example of how the environment can encourage this way of thinking, starting with the house from earlier.
The programming environment exhibits the same ruthless abbreviation as this hypothetical cooking show. We see code on the left and a result on the right, but it's the steps in between that matter most.
Likewise, most musicians don't compose entire melodies in their head and then write them down; instead, they noodle around on an instrument for a while, playing with patterns and reacting to what they hear, adjusting and sculpting.
This course is for you if you want to learn R by doing. This course is for you if you like fun challenges.
This essay presented some features and references that address these questions, but the questions matter more than my answers.
A meta-analysis found pairs typically consider more design alternatives than programmers working alone, arrive at simpler, more maintainable designs, and catch design defects earlier.
The example above encourages the programmer to explore the available functions. A learner who would never think to try typing the "bezier" function, with its unfamiliar name and eight arguments, can now easily stumble upon it and discover what it's about.
Anonymous FTP
A service provided by many computers on the Internet that gives any user restricted access to files, generally including the ability to transfer files from the computer.
Archie
An index system that helps you find files in over 1,000 FTP sites.
Authentication
The process of verifying the identity of a user, usually by means of a user ID and password.
Baud
A measurement of how quickly a modem transfers data.
Client
A program or computer that can access services on another program or computer (the server).
Domain
The suffix used for all hosts on a particular network which identifies them as being a part of that system. Often one of the hosts on that system uses only the domain suffix as its whole name.
Domain Name System
Or DNS. A system for translating Internet IP numbers into easily remembered names. The user can use the name, and the DNS looks up the number which identifies the machine you are trying to access.
E-mail
Short for electronic mail. This is a system that lets people send and receive messages with their computers.
Finger
A program used to find information about a user on a host computer. Some hosts do not provide finger service.
FTP
File Transfer Protocol. Used to move files around on the Internet. This protocol uses two simultaneous connections to the host, one for the transfer and one for commands.
GIF
Acronym for Graphics Interchange Format. A format developed in the mid-1980s by CompuServe for use in photo-quality graphics images. Now a commonly used format on the Internet.
Host
Any computer connected directly to the Internet which provides services accessible to others. A "hostname" identifies that computer.
HTML
Acronym for Hypertext Markup Language. A set of formatting tags which determines how a document is displayed when viewed by a browser.
HTTP
Acronym for Hypertext Transfer Protocol. The protocol for moving hypertext files across the Internet. Requires an HTTP client program (browser) on one end, and an HTTP server program on the other end.
Hyperlink
Or simply Link. Text or a graphic on which you click to move to another document on the World Wide Web.
Internet
A worldwide system of computer networks. Networks connected through the Internet use a particular set of communications standards, known as TCP/IP, to communicate. This standard allows any type of computer to talk to the others.
IP
Acronym for Internet Protocol.
IRC
Internet Relay Chat. A service for multiple users to "chat" or talk simultaneously over the Internet.
JPEG
Acronym for Joint Photographic Experts Group. A standard for compressing and storing still images in digital form.
MIME
Acronym for Multipurpose Internet Mail Extensions. A standardized method for organizing divergent file formats according to each file's MIME type. When Internet software retrieves a file from a server, the server provides the MIME type of the file, and the file is decoded correctly when transferred to your machine.
MPEG
Acronym for Moving Pictures Experts Group. A standard for compressing and storing motion video and animation in digital form.
News server
A computer that collects newsgroup data and makes it available to newsreader client programs.
Newsgroup
The name for discussion groups on the Internet. Successor to older "bulletin boards".
PPP
Acronym for Point-to-Point Protocol. This is a method for connecting computers to the Internet via telephone lines, similar to SLIP.
Protocol
A set of rules that describe how computers transmit information, especially across networks.
Server
A computer or software that provides resources, such as files or other information, to client software running on other computers.
Signature file
A file, typically four lines long or so, that people often insert at the end of electronic mail messages or news articles, telling something about the sender.
SLIP
Acronym for Serial Line Internet Protocol. This is a method for connecting a computer to the Internet using a telephone line and a modem.
SMTP
Simple Mail Transfer Protocol. A protocol used to transfer email. SMTP transfers mail from server to server, and remote users must use Post Office Protocol (POP) to transfer the messages to their machine using client software.
TCP
Acronym for Transmission Control Protocol.
Telnet
A program that lets you connect to other computers on the Internet.
Thread
In USENET news, a series of related articles grouped together.
URL
Acronym for Uniform Resource Locator, the standard way to give the address of any resource that is part of the World Wide Web.
Usenet
The articles of the Usenet distributed bulletin board system.
Winsock
Any implementation of the TCP/IP sockets protocol which runs on Microsoft Windows.
World Wide Web (WWW)
A collection of online multimedia documents housed on Internet servers around the world. To access these documents, you use a Web browser. When a browser accesses (or hits) a page, the server sends the document to your computer to be displayed.
Documentation for "The Application Context" is confusing
See: http://flask.pocoo.org/docs/appcontext/
There seem to be two main issues with this page.
I'm not sure how to say it, but the prose is vague and lacks focus. The effect in any case is that the concepts are difficult to grasp. [1] [2] [3]
Syntax errors and wordiness [4] [5] [6]
[1] "The application setup state in which the application implicitly is on the module level."
[2] "In contrast, during request handling, a couple of other rules exist"
[3] "There is a third state which is sitting in between a little bit."
[4] "The main reason for the application’s context existence is that in the past a bunch of functionality was attached to the request context in lack of a better solution. Since one of the pillar’s of Flask’s design is that you can have more than one application in the same Python process."
[5] "To make an application context there are two ways."
[6] "The context is typically used to cache resources on there that need to be created on a per-request or usage case."
I second that a rewrite would be great. Perhaps some inspiration can be had from this SO answer:
https://stackoverflow.com/questions/20036520/what-is-the-purpose-of-flasks-context-stacks
I think one reason people are confused is because there are app and request contexts and it's not clear why they are there and what the lifetime is. That might need some clarification in the docs.
:+1: As a newcomer trying to make sense of things, this is very very hard.
+1 for this issue. @Ceasar Have you finished the writing?
The appcontext definitely needs some clarification in the docs. The docs say that the appcontext "will not be shared between requests." However, I don't really see the point of separate teardown_request and teardown_appcontext functions if both the request context and the appcontext are torn down with every request.
I believe that the appcontext is not shared between different threads, but is shared between different requests in the same thread. This is consistent with some of the documentation (and it's consistent with a sensible design.) Reading through docstrings I've seen contradictory information in various places.
@lukeschlather The appcontext is not shared between requests, for the simple reason that persisting it between requests would amount to global state. The point is to have flask.g available in scripts as well, where no HTTP request (and therefore no request context) is available.
The docs may be confusing but they're not technically wrong. If you find a logical contradiction please do point those out, because even if the docs fail to explain those concepts well, they should at least not contradict themselves.
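To make that concrete, here is a minimal sketch of using flask.g in a script where only an application context is pushed; open_connection is a stand-in defined here for illustration, not a real Flask API:

from flask import Flask, g

app = Flask(__name__)

def open_connection():
    # stand-in for a real connection factory
    return object()

def get_conn():
    # Cache an expensive resource on g for reuse within the
    # current application context.
    if not hasattr(g, "conn"):
        g.conn = open_connection()
    return g.conn

# In a script there is no HTTP request, hence no request context,
# but an application context can be pushed explicitly:
with app.app_context():
    conn = get_conn()  # flask.g is available here
# Leaving the block pops the context, so g.conn is gone afterwards.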
I think that document in question would gain from adding a top down perspective on "why" specific design decisions were made. This top down perspective would involve explaining the physical constraints involved in solving the problem of delivering a specific internet service, e.g., a single version of an app based on Flask. With that understanding to build on, the answers should fall into place like a jigsaw puzzle.
We know that the app may be distributed as identical "clone" processes, each on a different physical server (*1), each clone running exactly the same software logic and having exactly the same configuration - let us say the same DNA. The reason for this is scalability - client traffic might suddenly increase, servers might suddenly fail, yet the system needs to respond seamlessly so that clients do not experience delays. Therefore a design decision is made: a process should never store any "session" state necessary to complete a client-server session. (This is one of the REST requirements.) This means that for a specific client session, each request from that session can be routed to an arbitrary clone, doesn't have to be routed to the same clone for each request in a session, and the server clones don't have to communicate directly with each other about that client session state. Of course, a clone can pass state information to the client, for the client to pass that same state information on to the next serving clone in the session sequence. (*1: Arbitrarily assuming there is no reason to run multiple clone processes on the same physical server, but perhaps I'm wrong.)
Now the reason for the setup and teardown of the so-called "context global" variables app_ctx, g, request, and session at the start and end of a request is clear: that clone might never service that client session again, so the memory resources are reclaimed for other use. Note: "Global" is not really a specific enough term and could cause confusion. The actual scope of these variables is limited to the time span of a single request and the physical span of a single thread in a single process on a single server.
Although app_ctx, g, request, and session share the same single-request scope, they are set and read by different actors for different purposes:
session: the session state data shuttled between the client and the server clone so that the next server clone knows where to pick up.
request: data sent from the client to the server clone
g: temporary storage for use by the application code (but not the Flask libraries and extensions code) for database info, etc.
app_ctx: like g, but used by the Flask libraries and extensions rather than by the application code.
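A short sketch of how the two teardown hooks tied to these scopes differ (the logging bodies are illustrative only): teardown_request fires only when a request context is popped, while teardown_appcontext also fires for plain app contexts pushed in scripts.

from flask import Flask

app = Flask(__name__)

@app.teardown_request
def on_request_end(exc):
    # Runs when a request context is popped, i.e. after HTTP requests only.
    app.logger.debug("request context torn down: %s", exc)

@app.teardown_appcontext
def on_appcontext_end(exc):
    # Runs whenever an application context is popped: after each HTTP
    # request, but also after a `with app.app_context():` block in a script.
    app.logger.debug("app context torn down: %s", exc)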
It is stated that multiple non-identical Flask apps may be running in the same process space. I guess this has something to do with optimizing load balancing, but I don't have any specific knowledge about what problem this paradigm solves, so I'll just take it at face value that it happens. In any event, that paradigm is not the one necessitating the so-called "context global" variables app_ctx, g, request, and session to be set up and torn down with each client request.
Also, the paradigm of accessing the application from a shell is unrelated to production usage of serving clients. It is just a convenient way to run scripts for things like migrating a database and testing. The existing structure of the "g" variable happens to be available and useful due to the clever design of Flask, but there is no deep meaning beyond that. There would have been other, less clever ways to achieve the same result.
In conclusion, sometimes the big picture takes too long to explain, and giving a simplified recipe is more pragmatic. But it can only go far, and it will eventually lead to confusion and dogma. It's like the difference between using the bible to try to explain the existence of the universe, and using your understanding of the universe to explain the existence of the bible.
I've not spent the time reading up on the history of this issue, so I may reiterate points. Here is what I found confusing in the documentation regarding the flask.g object:
The application context is created and destroyed as necessary. It never moves between threads and it will not be shared between requests
Since https://github.com/pallets/flask/commit/1949c4a9abc174bf29620f6dd8ceab9ed3ace2eb it is shared between requests.
The context is typically used to cache resources that need to be created on a per-request or usage case.
Still not per-request safe, as this paragraph suggests.
And in the api documentation for App Globals, there is this paragraph:
To share data that is valid for one request only from one function to another, a global variable is not good enough because it would break in threaded environments. Flask provides you with a special object that ensures it is only valid for the active request and that will return different values for each request. In a nutshell: it does the right thing, like it does for request and session.
This clearly states the previous functionality when the g object was on the request context stack.
The only hint that it is no longer a per-request-safe object is this passage, which was added in the referenced commit:
Starting with Flask 0.10 this is stored on the application context and no longer on the request context which means it becomes available if only the application context is bound and not yet a request.
And as it is not super clear what the consequence of being on the app context is with regard to per-request data, and it contradicts all other documentation regarding this object, one may come to the conclusion that it is not accurate.
I hope the tone comes across as constructive feedback, and nothing else. I find flask to be a great library and want it all the best. Cheers for all the work so far!
| GITHUB_ARCHIVE |
For Online Computer Support, Ask a Computer Technician
Hello & Welcome to JustAnswer. My name is XXXXX. I will do my best to assist you.
If the mail server (the Exchange server) goes down, mail will stay queued on the internet and senders will attempt to resend when the server comes back to life.
To make this better, I need to ask you a few questions so I can understand your setup more clearly.
Can you tell me why you chose to host Exchange at your office rather than online?
Please leave me a short message when you are back so we can continue.
Hi we used to get email via a pop3 connector but this often seemed to be have problems, so when we set up the server with SBS2011, using exchange on our local server was recommended as the best solution.
We are not bound to this however and open to considering other ways to configure email.
That brings management and extra cost in the world of cloud technology.
Eventually, an IT Support Company.
If it is just for emails, then you could get a hosted solution (hosted at a secure data center with backup and no downtime)
To make the existing solution better, there are a number of things that can be done.
#1 Get a UPS to work with the server.
#2 Set up backups for the server.
#3 Set up backup MX records, so that if the mail server is down, mail will be sent to another server (which can be an online hosted one).
#4 Turn Windows Updates off.
#5 Set up antivirus for the server.
Normally, people do not keep MX records for more than one server, because SMTP, for most mail (depending on the configuration), flags the message to retry if the recipient's mail server is down.
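For illustration (the hostnames are made up), a backup MX is simply a second MX record with a higher preference number, which sending servers try only when the primary is unreachable:

example.com.   IN  MX  10  mail.example.com.        ; primary (the Exchange server)
example.com.   IN  MX  20  backup.mx-provider.net.  ; backup, queues mail while the primary is down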
1. we have a UPS set up with active monitoring and automatic shutdown
2. we have regular backups scheduled
3. this is probably the bit we need to sort out
4. why turn off windows updates? - this is currently done automatically and managed via SBS2011
5. currently use ESET nod32 which seems to work well
3. This is not always needed; it is just one way. Use it only if you are having issues, because once it is set up, users may experience problems, and this will require extensive troubleshooting to configure it to fit requirements.
4. Windows Updates should be turned off on production servers, because if an update changes something that results in an incompatibility, it will bring up unnecessary issues. Updates should be set to download manually, so you can check every week or so and install them manually. By doing this, you will be aware of what changed, and if a problem happens, you will be able to resolve it easily.
I have seen issues where certain things becomes non functioning after an update and then people spend hours resolving issues that comes after update.
MX Backup - Here's How
3. Once it's properly set up, it's good
ok that makes sense we did have an issue with an outlook update, which caused a problem recently and it did take a little while to sort out.
I will have a look at setting up the MX backup, I had a quick scan through and should be ok to do, but will need to do this later.
I will finish now and get back to you if I have a problem
thanks for the helpful reply
Take a look at this site: http://www.mxsave.com
Also, let me know when you plan to set up the backup MX so I can find you reliable solutions and assist if any problems occur.
This should be done over the weekend. | OPCFW_CODE |
Wow! I was off for a while and many things have happened in this period. But I have been back for a few days now. One of the first things that I decided to do was to start testing a new version of Fedora much sooner. So, for the first time I downloaded an Alpha version of the Fedora DVD to test and report any bugs encountered.
I have a ThinkPad X61 laptop which doesn't come with any DVD/CD drives. So, I usually install Fedora using the hard disk installation method. Before Fedora 12 (IIRC), hard disk installation was only supported from ext or FAT partitions. Also, it doesn't support installing from LVM partitions. As a result, I had a 10GB non-LVM ext3 partition solely for this purpose, which was really annoying considering my 120GB hard disk. Fortunately, in Fedora 12 and 13 hard disk installation from NTFS partitions was supported, so I happily removed the 10GB ext3 partition and added it to the pool of LVM physical volumes (I have an NTFS partition for my Windows OS, and I put Fedora DVD iso images for installation there).
Well, I downloaded the Fedora 14 Alpha iso and started the installation… but after selecting the desired partition and images directory, the installer simply stopped doing anything! I tried playing with different options and trying again, but with no progress. So, I filed this bug and it was discovered that this problem occurs only on NTFS partitions and not on ext4. Unfortunately, it seems that this bug is considered unimportant just because the documentation (which is outdated IMHO) says that only installation from ext and FAT partitions is supported (does anybody still have FAT partitions on their hard disk?!! or should people stop using LVM?).
IMHO, if hard disk installation is going to remain a really “useful” option, it should at least support either installation from LVM partitions or NTFS partitions.
Following that bug report, I decided to try installing from an ext4 partition. Fortunately, my 120GB hard disk is replaced with a 500GB hard disk and I use the 120G hard disk as an external USB hard disk. I had not touched its partition table since the replacement and retained its contents for backup purposes. However, for this test I was forced to create an ext4 partition and try to install from it. This time, I could advance in the installation process past the partitioning section.
I found two nice tweaks in the installer compared to older versions:
1. It correctly detects my Windows partition and does not try to use my sda1 (which is my recovery partition) as my Windows partition to boot from.
2. In the date/time configuration window, the “System clock uses UTC” option is not checked by default (apparently because it knows that I have a Windows installed and this option is not appropriate for dual boot systems who use Windows). This was one of the things that I always noted to almost everybody who wanted to try installing Fedora, as almost all of them wanted to install it beside their Windows.
Unfortunately, the bugs are still present in Fedora 14 Beta RC2 (thanks to delta isos I was able to jump over different releases without too much downloading) and it seems that they'll be in the Fedora 14 Beta release too. Considering the comments in both bugs, I'm afraid that they won't be taken seriously for the Fedora 14 final release; which means that I might not be able to install Fedora 14 on my system from the hard disk, which would be a considerable regression in my point of view. (Yes, certainly I can set up a server on another system and install from the network (if it still works though!), but that's really undesirable. And I don't like to buy an external DVD drive just for this purpose!)
Well, not a very interesting experience of trying Fedora early pre-releases! But if it's just in the pre-release versions, it's still fine. I'm afraid of encountering the same problems in the final release… 😦
OK, that was too much! I did other things too. First, I decided to once again try to have a look at what's annoying in PackageKit for me, and report the problems in a reasonable way. The result was a set of bug reports and a patch (30251, 30276, 30284, 30252 and 30240) which will hopefully make the Fedora package management system a bit more pleasant for some of us (usually people who don't have fast internet access). Three of the bugs are already fixed (thanks Richard) and I hope that the other 2 will be fixed soon. Now, PackageKit should correctly support split media repositories (e.g. Fedora installation CDs in addition to Fedora installation DVDs) and be better behaved in some scenarios.
Second, I've also joined the Fedora Localization team to contribute a little to the Fedora Persian translation.
And finally, I’m starting my work on yum which I talked about it about 3 months ago! Who knows, maybe I can make it a Fedora 16 feature 😛 | OPCFW_CODE |
What things should you consider when planning to self-publish a tech book? How do you validate that there is enough interest in the topic? A very inspiring discussion with Tero Parviainen, author of Build Your Own AngularJS.
Brief introduction to Tero:
Tero Parviainen is an independent software developer who has been active in the Angular and Clojure communities. He has written two books (Build Your Own AngularJS and Real-time Web Application Development using Vert.x 2.0), organizes Clojure Cup, and has also given some great talks at various conferences.
Question: What were the reasons for writing your latest book "Build Your Own AngularJS"?
I was getting into Angular and wanted to learn it on a deeper level than what most tutorials give you. The main reason for this was that I’ve had issues with some technologies I’ve used during my career, like Rails and Hibernate, that were caused by never really understanding how they work. There’s often too much magic involved, and I find it uncomfortable to use a framework that feels magical. I didn’t want that to happen with Angular, and based on what I was reading it looked like it might. I decided to actually dig into the source code.
So I started a process of reverse engineering Angular, to try to figure out what it does and why it does it. Then it occurred to me that I could write about what I’m learning, since I was going through all that trouble and other people might find that stuff useful. So I wrote an article about Angular’s change detection. It was received really well, and people started asking for more.
At the same time I’d just become an independent consultant and was interested in exploring alternative sources of income. I had written a short book earlier, and thought that it might be something I was both willing and capable of doing. So I decided to extend the Angular articles into a book project.
Question: Did you start writing the book before Angular 2.0 announcement and following "community shitstorm"? If yes, how did it affect writing process of the book?
Yes, I started writing the book before Angular 2 was announced. I released the first chapters in January 2014. I think the very first ideas regarding Angular 2 were publicised in ng-conf in that same month, but the more concrete announcements were only made last autumn.
It hasn’t really affected the writing process much, as the book doesn’t cover Angular 2. Angular 2 is a completely new implementation of the Angular concepts (which is also the reason some people got upset about the announcement), so to cover its implementation you need another book.
Angular 2 has only recently become pre-alpha, and it’s still very much in the experimental stages. This kind of book can only be written about a relatively stable technology, whose codebase isn’t undergoing major changes all the time. Also, to get the most out of this kind of book, you need to have some prior experience with the technology. It’ll take one or two years before Angular 2 is in that kind of place. In the meantime, Angular 1.x isn’t going anywhere.
Question: How has the book and the publicity you have received affected your freelancing career?
The book - as well as the articles and conference talks around it - have certainly increased my online presence a bit, especially within the Angular community.
In terms of direct effects to my consulting work, there hasn’t been many yet, but that’s mostly because I’ve been working the same full time contract for this whole period, and haven’t been actively looking for work.
In the long term, I guess the effects will be positive. Someone may think of me when they have front-end work that needs doing. That’s actually something I’ve seen happen already. Also, when a potential client googles me, they’ll see stuff I can be proud of.
Question: Any advice for those who are thinking about writing a self-published tech book? Anything you would do differently?
Most importantly, Hofstadter’s law applies. It’s a big undertaking - probably bigger than you expect. My biggest problems have been about not understanding the full scope of the project, and the deadline slips that have occurred because of that.
I think you need to validate that there’s interest in the topic you’re planning to write about. Write articles and see what kind of response you get. If there isn’t a strong one, it may not be something people would be willing to pay for.
You need to write a lot more than just the book. Write articles around the topic. I have a couple of relatively popular Angular articles out there, and they’ve probably been my biggest driver for book purchases. Also, if you, like me, aren’t already well known in the community you’re writing for, having good content online helps people see whether your work is something they’d be willing to pay for.
As to the writing itself, all I can say is work on it a little bit every day. Even if it doesn’t feel like you’re making much progress, it adds up. Also, doing one hour every day for five days is better than doing five hours in one sitting, since it lets your brain process the content in the meantime. Then, once you do sit down and write, you will write better.
Finally, what they say is true: The first draft is always shit. Get it out without thinking too much about how good it is and how well it reads. Once that’s done, start revising. It’s just easier that way.
Question: Can you give short overview what Clojure Cup is?
Clojure Cup is an online hackathon where Clojure and ClojureScript programmers from all around the world can team up and build something from scratch in 48 hours and put it up on the web for the world to see. It’s like Rails Rumble or Node Knockout, but for Clojure and ClojureScript.
Question: You have organized Clojure Cup two times. How did it get started and what is the future of the cup?
I’d had this vague idea of doing “Rails Rumble for Clojure” for a long time, and a couple of years ago I had some time on my hands and thought I’d test the waters. I contacted a few Clojure open source luminaries, and asked them whether they’d be interested in judging a contest like that.
Maybe half of the people I contacted said yes, so next I set up a teaser website announcing the idea and the dates, as well as the names of the people who’d agreed to judge. It got on the front page of Hacker News, which meant it was seen by tens of thousands of people that day. At that point I felt like it was something I actually had to do. It all basically just went from there.
I’ve organised the Cup twice now, and it’s been fun. I would very much like to see a third one this fall. I’ll probably hand over the head organizer responsibilities to someone else this time. Finding a successor and working with them to make sure everything goes smoothly is something I’ll start doing very soon.
Question: Have you used Clojure or ClojureScript in production software?
Question: What kind of things do you consider when selecting a development stack for a greenfield project?
I don’t have a repeatable process for that, but it’s always some combination of technical suitability, maturity, and, to be honest, just gut feeling. The customer’s existing software stacks matter a lot too. If not directly in terms of technical interoperability, then indirectly in terms of how well they’ll be able to maintain and develop the new stuff.
I’m actually fairly conservative when selecting technologies for customer work. I think it’s important to realise I’m making decisions and recommendations that will have impact for years to come. I don’t want to use some new thing just because I think it’s cool, if there’s a big risk the customer will end up with something that’s difficult to work with because it doesn’t have a strong community behind it or they can’t find the people they need to maintain it. I sometimes find that I’m really more conservative about tech choices than the customer actually wants me to be, but when that happens I’m usually happy to concede. :)
Any links, blog posts, products, services etc. (yours or someone else's) you want readers to check?
For anyone interested in front-end development, I’d recommend checking out David Nolen’s presentations about ClojureScript and Om. The combination of ClojureScript and React seems to be an explosive one, and to most front-end developers it’s a bit of a paradigm shift. It's always healthy to expose yourself to new paradigms.
I’ve been betting into Gitchat lately, as there’s a chatroom for HelsinkiJS and another one for Clojure Finland. It has the potential to be a really nice new communication channel, especially for people like me who could never really be bothered about IRC. I used Gitchat for Clojure Cup 2014 too, and it worked out great.
Wow, there's plenty of ideas for the mind to process. I'm definitely going to check those resources Tero mentioned.
Thank you Tero for your time! If any of you readers have questions, I am pretty sure Tero would like to continue the conversation in the discussion section below or on Hacker News.
Remember to check also other interviews! | OPCFW_CODE |
Future proof knowledge system: Plain text with Obsidian
Makzan’s Dispatch 2020 week 35
Last week, I shared about how I capture notes inputs from different sources. This week, I would like to share how I store notes in plain text format.
Plain-text storage is future proof. Content is not locked into any software or system; whatever storage systems or services our technologies evolve into, plain text will remain compatible with them.
But plain text files are dumb. They are not smart. That's why I created my own software to manage relationships between notes. The context of a note includes tags, bi-directional links, creation date, and note replies.
The software I built fits my needs, but it is expensive to build and maintain software just for my own use. And it is so personal that open-sourcing it would require extra effort that I can't spare now. I kept thinking about how to combine the benefits of software functionality, to make the notes smart, while keeping the simplicity of the plain text format.
I found the answer when Obsidian made its debut months ago.
I have a folder containing my 10 years of notes, in plain text format. They are collected from my previous different systems that I have used. The folder is stored in a cloud drive with backup to local and external hard disks.
Obsidian uses a folder in the file system as a vault. Within the vault, Obsidian indexes all the links between markdown files and turns the connections between them into a graph view.
Since the files are just plain text files in the file system, there is no lock-in to any one piece of software. All the meta information and linking relationships live right inside the markdown files.
Besides Obsidian, I can also apply different software to those files.
For instance, on the iPhone and iPad, I use 1Writer to edit the files. When writing long-form, I use iA Writer. On the Mac, I use nvALT for lightweight access. I use DEVONthink for folder organizing and filing suggestions with its powerful category learning.
By creating links between my notes, I connect the hidden dots and transform the written notes into my knowledge.
Next week, I will share another part of my knowledge system: image clippings and documents.
Links worth sharing
→ Web content accessibility guidelines 2.2 is in public draft review
→ Making Facebook.com accessible to as many people as possible
Meanwhile, the new Facebook login page uses an image as a close button.
→ Leading-Trim: The Future of Digital Typesetting
→ Simulate Mobile Devices with Device Mode in Microsoft Edge DevTools
→ Enhancing User Experience With CSS Animations
→ Omastsuri—Open source browser tools for everyday use
Several useful tools. Bookmarked.
Code worth sharpening
What happens when there is min-width and max-width conflict in CSS?
Also from https://codepen.io/argyleink/pen/gOrraWq
no min / max? use width
width > max-width? use max-width
width < min-width? use min-width
min-width conflicts with max-width (min > max)? min-width wins
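A small sketch (the pixel values are arbitrary) that exercises the conflict case:

.box {
  width: 300px;
  min-width: 400px;  /* conflicts with max-width below */
  max-width: 200px;  /* loses: when they conflict, min-width wins */
}
/* Resolved used width: 400px */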
Until next week, | OPCFW_CODE |
Tech Tuesday: Issues to be aware of when creating maps that use text ODBC drivers
I get a lot of calls from people saying that SmartConnect isn’t working right. It’s either not reading their data correctly, or it’s not reading all the columns from their source file, or it’s actually saying that the source file is completely blank, when in fact it does have data in it. Most of these issues are due to limitations in the Microsoft text ODBC drivers. There are certain conditions that need to be met when reading files with ODBC text drivers.
Multiple file extensions do not work. So for example, a file named "mysourcedata.old.db.csv.log.txt" will not work. Even though the last extension is .txt, having multiple file extensions on the file will cause it not to be read. Also, any additional periods in a file name get treated as a file extension. For example, if the file is named "Inv. Items for import.txt", the extra period in the file name is going to cause the file not to be read.
The maximum length of a text file name is 64 characters. Files with names longer than 64 characters will get read as empty files.
The maximum length of individual column names cannot be longer than 64 characters.
The maximum total length of the pathname plus the file name cannot exceed 255 characters. So for example if your pathname to the files is c:\source files\my source data\accounting\company name\2017\August\Daily Entries\GL\Files to get imported\ this would use up 106 characters of the allowable 255 characters.
The max number of columns that can get read by the ODBC text driver is 255 columns. So even if you have 400 columns in your source file, only the first 255 will actually get read. I see this most of the time with Concur import files. They will supply you with a CSV file with over 400 columns in it, that cannot be read by the standard Microsoft ODBC text driver. You need to remove unneeded columns before it will be able to be read.
Text files are single-user. If a user has a text file open, and another user attempts to read the file using the text ODBC driver, it's going to read the file as blank. The text ODBC driver needs to have exclusive access to the file in order for it to be read.
Once these conditions are met, you should be good to go!
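As a rough pre-flight check, the naming rules above can be validated before an import runs. A minimal sketch in Python; the limits encoded here are just the ones listed in this post:

import os

def check_odbc_text_source(path):
    """Flag conditions that make the Microsoft text ODBC driver fail."""
    problems = []
    folder, filename = os.path.split(path)
    stem = filename.rsplit(".", 1)[0]
    if "." in stem:
        problems.append("extra periods / multiple extensions in the file name")
    if len(filename) > 64:
        problems.append("file name longer than 64 characters")
    if len(path) > 255:
        problems.append("path plus file name exceeds 255 characters")
    return problems

print(check_odbc_text_source(r"c:\imports\Inv. Items for import.txt"))
# -> ['extra periods / multiple extensions in the file name']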
Billboard clipped on right side
In Cesium 1.31
I have a large number (500+) of quite large (200x100 pixel) billboards on my map, and this problem only occurs when the number of billboards is large. The billboard images are canvases that I draw with JavaScript within the application.
For some reason, the billboards are clipped on the right hand side:
This can be compared to using the same canvas images as icons in Open Layers:
What is really strange, is that I can almost solve the problem if I set the imageSubRegion to be bigger than the canvas itself.
var entity = {
    position: Cesium.Cartesian3.fromDegrees(feature.geometry.coordinates[0], feature.geometry.coordinates[1]),
    billboard: {
        horizontalOrigin: Cesium.HorizontalOrigin.LEFT,
        verticalOrigin: Cesium.VerticalOrigin.TOP,
        // ctx is the canvas that the symbol was drawn on
        image: ctx,
        // workaround: declare a sub-region slightly larger than the canvas
        imageSubRegion: new Cesium.BoundingRectangle(0, 0, ctx.width + 2, ctx.height + 2),
        height: milsymbol.getSize().height,
        width: milsymbol.getSize().width,
        pixelOffset: new Cesium.Cartesian2(-milsymbol.getAnchor().x, -milsymbol.getAnchor().y)
    }
};
Now this will be displayed:
Better but not perfect
If we zoom out a lot, we can also see that some of the missing lines from the right hand side are rendered on the left hand side of some symbols:
Notice the vertical lines that shouldn't be there
I figured out that Cesium stores the images in some kind of atlas, but I couldn't find where in the code this was done, and what could be the reason for this error. Please let me know if you need more information about this.
Added:
I tried to revert to Cesium 1.27 to see if #4675 introduced in 1.28 messed anything up, but the result in 1.27 was almost the same as in 1.31.
@spatialillusions thanks for the detailed writeup! It would help if you could
Create a minimal code example to reproduce this issue and try to push the limits (number of billboards, size of billboards) to see where the tipping point is.
Investigate TextureAtlas.js if you have the time. The issue could be there, or it could be in BillboardCollection.js, perhaps triggered by the vertex buffer growing past a certain size, or an issue with WebGL instancing (which you could try disabling) or the attribute compression. A pull request with a fix would be a very welcomed contribution!
I'll try to make a code example and get back with that.
I just had a quick look at the code and thought that this code might trigger some kind of error if it is called a bunch of times with changes in texture width, but I'm not sure if this is the reason.
https://github.com/AnalyticalGraphicsInc/cesium/blob/00cee6b97568ea7cf38b473e46540afb871347b5/Source/Scene/TextureAtlas.js#L187
I have made a small example here and I'm able to reproduce the issue with 225 symbols.
The included zip file contains an html file, and milsymbol.js that is used to render the icons. (I'm guessing that you can use a bunch of external files as well, but this is the fastest way to reproduce the issue.)
Archive.zip
At the moment I also have an online version of the sample:
http://spatialillusions.com/milgraphics/examples/slf-cesium-debug/
When I was testing, I noticed that to trigger the bug the billboards must have different sizes; I think this might be because the code then has trouble finding a suitable node in the atlas. Because of this, I have added the text BAZINGA to every seventh node. 🙂
If you zoom out you will be able to see the vertical lines on the left hand side:
And also that some of the icons are missing a few pixels on the right hand side:
I tried looking into the code some more, but how images are placed in the atlas and how the bounding boxes are calculated are a bit hard to understand and debug in a simple way. If I find some time I will try to look into it some more, but my guess is that it is easier to fix for someone who understands the atlas code better than me.
@spatialillusions Thanks for this detailed report. One thing I'll point out is that your milsymbol library does put a good-sized pad on the left side, but puts the right edge of the symbol directly against the right edge of the canvas, which makes it prone to this sort of clipping. You could work around this problem by moving a pixel or two of padding from the left side to the right.
That said though, we clearly have an issue in Cesium here. It looks like the right-most edge of pixels can possibly be lost due to sub-pixel positioning. This video was recorded while moving the mouse at the bottom of the screen, well below (in screen space) the icons being recorded, with the camera at a nice tilt. This resulted in whole-pixel movements only at the bottom of the screen, and sub-pixel movements in the area being recorded:
You can see the right edges of several icons blinking on and off. Seems like the billboard texture coordinates have some sort of rounding or quantization issue going on.
CC #172
@emackey I know about the padding on the left hand side. I try to make the spacing as little as possible, and if you remove the text information on the icons they will fit tightly on the left hand side as well. The thing is that since I support multiple output formats, I need some way to calculate the approximate length of the text string in pixels, and that calculation is far from perfect; in some cases it will add extra space to be sure not to cut the labels.
If it turns out that it is too hard to fix this bug on your side I can add a few extra pixels of padding in the next release of milsymbol.
FYI, on my machine, this issue (vertical black lines and the right edge clipping) was noticeably improved with the change suggested in #3411 (comment), but definitely not fixed.
Unfortunately, I currently don't have time to get into the failing spec.
So if someone has time to look at this, they're welcome to open a PR with this change.
Also, given the limited precision of the compressed coordinates, wouldn't it be better to limit the texture size of the atlas?
This happens because atlas._textureCoordinates saves the billboard's coordinates as floating point values. This is completely fine at the start, but as the atlas grows it loses accuracy and starts picking the wrong coordinates.
In order to reproduce this issue you should use an example where there are a lot of different textures in the atlas, which causes it to reach a very large size.
I'm pretty sure this can be fixed by using a larger borderWidthInPixels as the atlas grows, to make up for the loss of accuracy.
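If the compressed-coordinate theory is right, the effect is easy to reason about: packing each texture coordinate into roughly 12 bits, as attribute compression schemes commonly do, means an atlas wider than 4096 texels can no longer address every texel boundary exactly. A sketch of that arithmetic (illustrative only, not Cesium's actual code):

def quantize12(u):
    """Round a [0, 1] coordinate to a 12-bit grid (4096 steps)."""
    return round(u * 4095.0) / 4095.0

for atlas_width in (1024, 2048, 4096, 8192, 16384):
    u_exact = (atlas_width - 1) / atlas_width   # last texel boundary
    error_px = abs(quantize12(u_exact) - u_exact) * atlas_width
    print(atlas_width, "px atlas: error =", round(error_px, 2), "texels")

Past 4096 texels the error reaches a full texel, which matches the kind of one-pixel-column loss seen on the billboards' right edges.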
| GITHUB_ARCHIVE |
| Problem and possible cause | Solution |
|---|---|
| **The UPS will not turn on** | |
| The UPS has not been turned on. | Press the POWER ON button. |
| The UPS is not connected to AC power, there is no AC power available at the wall outlet, or the AC power is experiencing a brownout or overvoltage condition. | Make sure the battery has been inserted into the UPS when attempting to turn on the UPS without AC power. In the event that the UPS receives no AC power and the battery is connected, a cold start can be initiated: press and hold the POWER button until the UPS emits two beeps. |
| **The UPS is on, the POWER button illuminated red** | |
| The battery is worn or needs repair. | Contact Schneider Electric IT (SEIT) Technical Support for more in-depth troubleshooting. |
| **Connected equipment loses power** | |
| A UPS overload condition has occurred. | Remove all nonessential equipment connected to the outlets. Reconnect equipment to the UPS one at a time. |
| The UPS battery is completely discharged. | Connect the UPS to AC power and allow the battery to recharge for eight hours. |
| Connected equipment does not accept the step-approximated sine waveform from the UPS. | The output waveform is intended for computers and peripheral devices. It is not intended for use with motor-driven equipment. |
| The UPS may require service. | Contact SEIT Technical Support for more in-depth troubleshooting. |
| **USB charging stops and the POWER button LED alternately illuminates green/amber** | |
| The USB port on the UPS is overloaded or has encountered an error. | Disconnect the device from the USB port on the UPS. USB charging will resume when the LED turns green. Contact SEIT Technical Support if the LED continues to alternate green/amber. |
| **USB charging stops and the battery pack capacity indicator LEDs all flash simultaneously** | |
| One or two USB ports on the mobile power pack are overloaded or have encountered an error. | Disconnect the device(s) from the USB port(s) on the mobile power pack. When the mobile power pack is not paired with the UPS, the power pack will enter safe mode if the USB error has not been resolved within 30 seconds. |
| **The Back-UPS has inadequate battery runtime** | |
| The battery is not fully charged, or the battery is near the end of its useful life and should be replaced. | Leave the Back-UPS connected to AC power for 16 hours while the battery charges to full capacity. Note: as a battery ages, its runtime capability decreases. See Troubleshooting to order replacement batteries. |
| **USB charging is slow** | |
| Charging a device using the UPS's USB charger is slower than with the device's original USB charger. | The amount of power a device draws depends on its compatibility with the USB Battery Charging Specification 1.2. Compatible devices can draw more power than devices that are less compatible. For devices that can charge using input greater than 1A, make sure that the device is connected to the 2.4A USB charging port. |
| **Battery charging is slow** | |
| The charging time of the battery varies depending on the charging connection. | Charge the battery inside the UPS for best results. Using the micro-USB port to charge the battery will require more time. The speed also depends on the type of USB charger: some support 1A and others up to 2.4A. More powerful chargers reduce the charging time. USB ports on a PC can also charge the battery, but older PCs only support 500mA, which takes even longer. |
No, it won't boot if you import/save old settings. I don't know why that image doesn't include the mt7610e driver, but I think it may still have some problem with importing an old config.
> No, it won't boot if you import/save old settings. I don't know why
Ah, I did indeed use "save old settings". I will try again, without saving settings. Thanks.
Before I reflash, can you give me some info on what is baked into this image? Is "dropping frame due to full tx queue" fixed in this patch?
The reason I am asking is that everything works even in the old images, both 2.4GHz and 5GHz. It is just that 2.4GHz will fold under heavy traffic.
OK, I re-flashed and cleared settings and it booted fine! I had to manually re-enter settings through LUCI but it is a small price to pay. Unfortunately, 5GHz is gone but it was always rather short-range on Archer C2.
I will run this image couple of weeks and see if 2.4GHz folds during heavy traffic.
Hmm, it is marginally better but still broken.
I can still lock it at will with only two devices: a file copy on a laptop and Bandwidth Test on an iPhone. It locks and I receive "ieee80211 phy0: rt2x00lib_rxdone_read_signal: Warning - Frame received with unrecognized signal, mode=0x0001, signal=0x010a, type=4" in the kernel log. Traffic stops flowing (but the WiFi device is still connected). Basically the same problem but a different error.
Also, the 40MHz setting on 2.4GHz doesn't seem to work. It is one channel (20MHz) only, regardless of the setting.
I have run various ports of both OpenWRT and LEDE on multiple Archer C2 v1's I use as APs.
All non-factory builds suffer from bug (?) that makes 2.4GHz WiFi freeze when exposed to high load.
Typically, the kernel log will have the entry "phy0: rt2x00queue_write_tx_frame: Error - Dropping frame due to full tx queue" and WiFi will stop responding until reboot (but you are still able to log into LuCI via Ethernet).
I have even tried newest build based on OpenWrt 18.06 trunk (linked here: Archer C2: 2.4GHz (MT7620) support is broken)
According to that information, this issue should have been fixed in 18.06, but it is still present, albeit in another form. Now the kernel will log "ieee80211 phy0: rt2x00lib_rxdone_read_signal: Warning - Frame received with unrecognized signal, mode=0x0001, signal=0x010a, type=4" prior to WiFi locking instead.
So either the Archer C2 2.4GHz hardware has a built-in flaw, or there is something wrong with the WiFi driver for the MT7620A.
I'm using an Archer C50, which uses the same MT7620 chipset and is affected by the slow 2.4GHz bug.
Were you able to find a solution? 18.06 was meant to solve the issue for our devices but it didn't help.
I do not know if we are talking about the same bug? 2.4GHz isn't slow; I was able to hit 7MB/s (megabytes). Key to this was enabling 40MHz channels and disabling legacy speeds.
The main problem is that if the router is subjected to high load (streaming from a local NAS, for example), WiFi will freeze and stop sending packets. The device will still be connected to WiFi but there will be no traffic.
You are still able to log on via Ethernet and reboot though.
If you just use the router for surfing the net via DSL, you will probably never saturate WiFi enough to knock it offline, but any "power user" will make it fold.
Hi, at least for me, selecting 40MHz through LuCI didn't work; I had to enable it by editing /etc/config/wireless manually through SSH, and then it did use 40MHz. Even then, 2.4GHz seems slow compared to the stock firmware. Also, selecting the minimum power did help devices use higher link speeds, especially for me given that I live in an apartment building where 2.4GHz Wi-Fi is congested af!
As for the locking under heavy load, it's the same for me, and having 100Mbps of download means every time some device loads something heavy it crashes the Wi-Fi.
PS: I use the C2 as a dumb AP, same as Guntruck.
You are right, it is impossible to force it to 40MHz. Even setting "option htmode HT40+" in /etc/config/wireless does not make wireless use two channels.
MT7620 support is broken and I believe 5GHz is only way to use this router under LEDE/OpenWRT (not working under 18.06, unfortunately).
Regarding the bug: something has been done in 18.06, as the router will survive "full tx queue" but will lock at "ieee80211 phy0: rt2x00lib_rxdone_read_signal: Warning - Frame received with unrecognized signal, mode=0x0001, signal=0x010d, type=4" instead :(
I just got it to lock by copying a file from the NAS. Checked through SSH: CPU was hovering around 12% while copying, so obviously the CPU is not the issue.
I am considering switching back to factory firmware.
Unfortunately, factory firmware without OpenVPN, adblock, transmission, VLANs, fast Samba shares & unlimited freedom is hard to live with.
Hope somebody can fix this famous bug on MT7620 once and for all.
40MHz bug should be fixed for a while in master or 18.06 snapshots.
I am using the following version and 40MHz is not functioning: OpenWrt 18.06-SNAPSHOT r6910+16-80c28554f2 / LuCI Master (git-18.143.28733-7acacf2)
It should have been fixed a long time ago
Currently using the latest snapshot of June 2nd.
If you really need to run LEDE/OpenWRT on C2, I recommend reverting back to 17.01. 5GHz works fine and without freezing. Just switch off (or rename) 2.4GHz.
The only issues are the missing KRACK patches and shorter range.
OK, I just reverted back to:
LEDE Reboot 17.01-SNAPSHOT r3889-a0af7c8c59 / LuCI lede-17.01 branch git-18.098.72829-575e327), Kernel Version 4.4.126
5GHz is rock solid, and very quick with 40MHz channels. Unfortunately, LuCI does not talk to the 5GHz driver well, so there is basically no info on 5GHz clients. But the device works perfectly.
2.4GHz is as flaky as it ever was (even in 18.06).
So for now, my plan is to disable 2.4GHz and run with 5GHz on all AP's. As range is bad, I will try to configure 802.11r Fast Transition and let device roam between 5GHz islands.
Or I might just get more C2's to fill in the voids
Update: got roaming to work according to info in this Reddit post
Curiosity: I opened one of my APs (bricked; I had tried to de-solder the flash chip).
The Archer C2 does not have a dedicated 5GHz antenna! There is a place for it on the PCB but no hardware (copper antenna). The PCB strips leading to it are not populated with components. It seems that the 5GHz RF feed is combined into one of the two 2.4GHz antennas instead, to save money!
I can confirm this still happens with 18.06.0-rc2
A 1/4 wavelength (lambda) for 2.4 GHz (the desired length) is still a 1/2 wavelength (or 2/4s) for 5.8 GHz. Definitely high impedance, but this isn't as bad at 20dBm (100mW) as it would be at 37dBm (5000mW). It's like "dualband antennas" for ham radios: a single/multiple of lambda/4 is desired.
Corrected antenna length for 2.4 GHz (2400000 kHz): lambda/4 = ((300000 / 2400000) / 4) * 0.95
...which equals 2.97cm
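The same arithmetic in a few lines (expressing c in km/s over the frequency in kHz conveniently yields meters):

c_km_s = 300_000         # speed of light, km/s
f_khz = 2_400_000        # 2.4 GHz expressed in kHz
quarter_wave_m = (c_km_s / f_khz) / 4 * 0.95   # 0.95 velocity factor
print(round(quarter_wave_m * 100, 2), "cm")    # 2.97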
It's like a dream you try to remember but it's gone,
Then you try to scream but it only comes out as a yawn
Pinch Me, Barenaked Ladies
JavaWorld yesterday published an article by Jason Hunter, the leading authority on Java Servlets. The article provides a pragmatic look at the changes in the recently finalized Servlet 2.5 specification (JSR 154).
At first glance, the updated spec may look pretty scary. Servlets form the foundation of almost all server-side Java Web development technology, and this latest update to the spec forces developers to use Java 5.0 (Servlet 2.4 worked on Java 1.3 or later) to take advantage of new language features like annotations.
Annotations allow you to pepper your Java classes, methods and properties with labels that are compiled into the resulting class files. When these classes are loaded, these annotations can identify how these classes, methods and properties should be used by the server. For example, one of the annotations supported by the Servlet 2.5 spec lets you tag a servlet class with the security role(s) that a user must have to access it.
In his article, Jason reveals that the changes in the spec are not as drastic as they may seem. The new annotations are in place primarily for use with Enterprise JavaBeans (EJBs), and non-EJB servers are not even required to support them. In fact, EJB-level functionality aside, anything that you can do with one of the new annotations you can still do with familiar instructions in your application's web.xml deployment descriptor.
Where annotations may impact non-EJB developers is in the domain of performance. Any Servlet 2.5 compatible server that does support annotations will have to load all of the classes in your Web application at start-up in order to process the annotations they may contain. If you don't plan to use annotations in your application, you can disable annotation support by setting metadata-complete="true" on the <web-app> element of your web.xml file, so that your application's classes will only be loaded by the server as needed.
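For reference, a minimal sketch of such a descriptor (schema details per the final JSR 154 spec; adjust to your application):

<web-app xmlns="http://java.sun.com/xml/ns/javaee"
         version="2.5"
         metadata-complete="true">
    <!-- servlet, filter, and mapping declarations as before -->
</web-app>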
The remaining significant changes in the Servlet 2.5 spec add a couple of minor convenience features to the process of mapping servlets and filters to URLs in your web.xml file, and assist your applications in adding extensions to the HTTP protocol. Nothing worth losing sleep over, by any means.
The upshot of all this is that, from a practical standpoint, there is little reason for you to rush into moving your Java Web applications to the Servlet 2.5 specification. In fact, unless you plan to use the new EJB 3.0 standard for Enterprise JavaBeans, which relies on some of the annotation features of the spec, all Servlet 2.5 does for you is restrict you to deploying on platforms that support Java 5.0. | OPCFW_CODE |
Add utility class that forces word breaks on table cells for when users want to put tables in small things
Describe the bug
Table cells are constrained to a minimum width. The table does not auto-fit. Instead, it creates a scrollbar, and it seems like the cells with longer text have already reached the minimum width.
I need the table to fit inside a small card or div in a dashboard page.
How to reproduce
https://stackblitz.com/edit/clarity-light-theme-v11-x5hujx
Resize the window to make it smaller.
Expected behavior
The whole table should fit to the parent container and cells should automatically wrap texts. It should resize cells automatically to fit in the parent container like the stackview component:
@whizkidwwe1217
Hi, thanks for filing this issue. We can't make this the default for tables because there would be just as many people wanting us to un-make it the default.
The good news is that it's a fairly simple CSS fix. It can be used as a workaround in the meantime.
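The snippet itself isn't quoted in this thread, but the workaround is presumably a one-line word-break override on the cells, along these lines (the wrapper class name is illustrative):

/* Allow long unbroken strings to wrap inside table cells. */
.wrap-cells td {
    word-break: break-all;
}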
This could be helpful as an option to add or remove from tables where needed. And would be welcome as a contribution. I don't know if anyone on our team would be able to pick it up in the near future.
We will also need to add this to the table demos, gemini tests, and dox.
@mathisscott The CSS workaround is good enough for me, thanks! Appreciate it. It's just annoying when the table overlaps the card in the dashboard.
I would like to point out that in your example, you are comparing how the stack view wraps compared to the table. Your stack view example has actual text, with spaces between words. Your table has very long strings without any spaces. If you were to use words and spaces in your table, it would wrap correctly.
I've just discussed this with Scott, and I'm actually going to close this issue. The CSS you have to write in your app to wrap long strings like that (which is unusual) is literally just one line. If we were to add a utility class for it, you'd still have to write one line for that, but it would add maintenance and documentation work and bloat the API for us. I can only see downsides to utility classes that add just one line of CSS; they offer almost no value. And the best part is that by using CSS directly, you'll know how to do it when you work with any framework in the future, whereas learning our utility class would not help you grow in any way. 😉
@youdz Thanks for the advice. I'm actually more of a back-end guy and just started learning front-end dev. And yeah, this can be closed and I'm happy with the quick fix. Cheers! (btw, the texts in my sample are actually long texts and can't be broken down into words since they are asp.net core EF migration keys. But I can always truncate it with an ellipsis as an alternative to wrapping).
| GITHUB_ARCHIVE |
You can create up to 14 levels of subgroups under one group. On any one of these levels, you can create an unlimited number of parallel subgroups. For instructions, see Create a subgroup.
For information about groups, see Groups overview.
Subgroups inherit the membership of their parent group. So, users and groups in a subgroup have the same visibility, permissions, and access to all objects as users and groups that belong to the parent group they share.
Also, a subgroup automatically inherits the group administrators of its top-level group, but you can also assign members of a subgroup to act as its group administrators.
Sometimes, you might want to use subgroups to add multiple users to an existing group in order to give them access to an object they need.
For example, suppose that you have a group of help desk technicians and a separate group of IT directors. The help desk group has permissions to a certain Request Queue. You want to add the IT directors to the help desk group so that they also have permissions to the Request Queue. Without the subgroup functionality, you would have to add the IT directors to the help desk group manually, which could be inefficient and hard to manage. If you add the IT directors group to the help desk group as a subgroup, you accomplish this task faster with just one change.
If a user was added to both a subgroup and to its parent group separately, removing the user from one doesn’t remove the user from the other. If you don’t want the user to have the access allowed for the parent group, you must remove the user both from the subgroup and from the parent group.
For a public group, any user (in or out of the group) who has edit-user access can add the group to the profile of other users. They cannot do this for a private group.
You can edit this option only on the top parent group in a hierarchy of groups that has more than one level. All subgroups of the parent group inherit its setting.
If you create a subgroup under a group that is public, the subgroup is also public, by default. For more information about creating a group and making it public, see Create a group. For more information about the access needed to edit users, see Grant access to users.
Any group you add to an existing group automatically becomes a subgroup and is no longer a main group. However, the subgroup retains its existing users, as well as any associations with projects, issues, and tasks, in addition to all project, task, and issue statuses that belong to the new parent group.
You can assign subgroup members as group administrators of the subgroup when you create or edit it. For instructions, see the article Create a group.
Alternately, you can leave administration of the subgroup to the group administrators who are assigned to the groups above it. When you create a subgroup, group administrators over the groups above it have automatic access to manage the subgroup.
If you add a user to a subgroup and that user is a group administrator for a group anywhere above the subgroup, that user has administrative rights to manage the subgroup, even without being assigned as a group administrator for it.
To learn which actions are available to an Adobe Workfront administrator managing the Workfront system, a group administrator managing a top-level group, and a group administrator managing a subgroup, see Actions allowed for different types of administrators.
The company is Ajax13, the product is ajaxWindows, and the concept is pretty straightforward: The software platform is operating system-agnostic and based on the XML User Interface Language (XUL) to act as a Web-based desktop. Files can be moved around and opened, and applications launch with a mouse click. The interface also includes customizable wallpaper, start-up and shut down sounds, and browser bookmarks. But instead of interacting with the hardware, the user stores all desktop data, documents, and content, free of charge into a Gmail account.
"The concept here is that we didn't want to determine where our registered users keep their files," Robertson told InformationWeek. "We are launching the Gmail interface but we will let people have a choice going forward." Options include other online sites or a local storage device such as a USB thumb drive, Robertson said.
The concept of using Ajax as a client occurred to Robertson as he noticed how hard it was to keep track of all the Ajax and Web 2.0 applications in the marketplace. So he helped launch Ajax13 in early 2006 as a startup that could develop a desktop interface based on Ajax principles that would serve as a repository for other Ajax applications. "What was missing from the market was the notion of the unified experience," he said.
So far, Robertson has managed to collect a fair amount of applications including an Instant Messaging client, a VoIP telephone client based on the Gizmo Project, and even Robertson own MP3 lockers and AnywhereCD application.
The ajaxWindows software is compatible with Internet Explorer and Firefox browsers. Using IE requires a small plug-in to work with Microsoft's ActiveX features and get the XUL engine up to speed.
So if MP3.com was an attempt to democratically categorize music downloads and Linspire (previously Lindows) was an attempt to free the desktop from Microsoft, who is Robertson targeting with ajaxWindows? On the consumer side, Google Pack and Microsoft's Windows Live come to mind. But if companies rally around ajaxWindows' APIs, the virtual desktop could be used in call centers, workstations, and anywhere other SaaS companies like Salesforce.com are thriving.
According to Robertson, Ajax13 will decide on a revenue model (i.e. whether it'll use subscriptions, licensing, or some combo) only after it has assessed thousands of users to see what kinds of Web services are valued.
"We've been in the labs, so we haven't lifted our heads up to see what kinds of revenue streams would work." Robertson said. "We're going to watch how people are using [ajaxWindows] first... how many will use the sync application. That is the data that will let us know where the business opportunity is."
While the company is expected to make its formal debut on September 10 with the desktop client, Robertson said Ajax13's aspirations include non-PC devices such as the Nokia N800 Internet tablet, the Nintendo Wii or even Apple's iPhone. The company also expects to eventually release a set of APIs for developers who want to build applications for ajaxWindows. | OPCFW_CODE |
How many prophets wrote the Book of Mormon?
The 19 Major Book of Mormon Prophets. Krista Cook is a seventh-generation Utah Mormon and a graduate of Brigham Young University who covers LDS topics. The following chronological list only details major prophets from the Book of Mormon.
Who Wrote the Book of Mormon written?
It was first published in March 1830 by Joseph Smith as The Book of Mormon: An Account Written by the Hand of Mormon upon Plates Taken from the Plates of Nephi.
| Book of Mormon | |
|---|---|
| Religion | Latter Day Saint movement |
How much of the Book of Mormon did Mormon write?
“We hold the Book of Mormon to be a sacred text like the Bible,” Snow said. “The printer’s manuscript is the earliest surviving copy of about 72 percent of the Book of Mormon text, as only about 28 percent of the earlier dictation copy survived decades of storage in a cornerstone in Nauvoo, Ill.”
Who was the last prophet to write in the Book of Mormon?
Moroni was the last prophet to write in the Book of Mormon.
Who do Mormons say Jesus is?
Mormons believe in Jesus Christ as the literal Son of God and Messiah, his crucifixion as a conclusion of a sin offering, and subsequent resurrection. However, Latter-day Saints (LDS) reject the ecumenical creeds and the definition of the Trinity.
What is the name of the Mormon prophet?
Russell M. Nelson is the current president and prophet of the Church. Russell M. Nelson, 17th president of The Church of Jesus Christ of Latter-day Saints.
What Bible do Mormons use?
The Holy Bible
Mormons use the Authorised King James Version of the Bible.
Do Mormons believe in Jesus?
Mormons regard Jesus Christ as the central figure of their faith, and the perfect example of how they should live their lives. Jesus Christ is the second person of the Godhead and a separate being from God the Father and the Holy Ghost. Mormons believe that: Jesus Christ is the first-born spirit child of God.
Is the Book of Mormon the same as the Bible?
Mormon writers have noted that although the portions of the Book of Mormon that quote from the Bible are very similar to the KJV text, they are not identical. Mormon scholars have also noted that at least seven of “the ancient textual variants in question are not significantly different in meaning.”
Is the Book of Mormon historically accurate?
Many members of the Latter Day Saint movement believe that the Book of Mormon is historically accurate. Most, but not all, Mormons hold the book’s connection to ancient American history as an article of their faith. … Latter Day Saints believe that Lamanites are among the ancestors of the Native Americans.
How much does it cost to make a Book of Mormon?
From the earliest origins of the Church of Jesus Christ of Latter-Day Saints, the church has made copies freely available for the cost of printing it. The versions produced and sold by the church now cost $3 for a basic softcover copy, $3.50 for a hardcover.
Are there problems with the Book of Mormon?
The content found within the book has also been questioned. Scholars have pointed out a number of anachronisms within the text, and general archaeological or genetic evidence has not supported the book’s statements about the indigenous peoples of the Americas.
How many wives can Mormons have?
It has always permitted and continues to permit men to be married in Mormon temples “for the eternities” to more than one wife. This tension between private belief and public image makes polygamy a sensitive subject for Mormons even today.
Who was the first Mormon prophet?
Joseph Smith Jr. (December 23, 1805 – June 27, 1844) was an American religious leader and founder of Mormonism and the Latter Day Saint movement. When he was 24, Smith published the Book of Mormon.
| Joseph Smith Jr. | |
|---|---|
| Spouse(s) | Emma Smith and multiple others (the exact number of wives is uncertain.) |
Who is Angel Moroni in the Bible?
Moroni, according to the teaching of the Church of Jesus Christ of Latter-day Saints, an angel or resurrected being who appeared to Joseph Smith on September 21, 1823, to inform him that he had been chosen to restore God’s church on earth. | OPCFW_CODE |
- August 1, 2023
In this blog post, we'll be discussing the recent release of 'libhijacker' by Astrelsky, one of the many devs paving the way for running homebrew on our beloved PS5 consoles. This breakthrough method has been eagerly anticipated, with prominent figures like Zeco and nullptr dropping hints about its potential. Let's dive into the details!
Exploring the libhijacker
The libhijacker, released by Astrelsky earlier this week, marks a significant milestone in our journey towards enabling homebrew functionality on the PS5. Although it’s not yet capable of running homebrew applications, this release represents a crucial initial step. Further developments are still required to reach that stage, but the exciting aspect is that it offers a temporary solution to expand the possibilities of our PS5 beyond the current firmware exploits ranging from 3.0 to 4.51.
Understanding the Method
So, how does this method work? Astrelsky himself shed some light on the matter. The libhijacker functions by writing shell code into the PS5’s Redis server, which was previously accessible via a remote connection payload. By redirecting the control flow to this shell code, a new Redis server is spawned, erasing the creation of the new process and inserting an infinite sleep loop at the entry point. This process results in the creation of a Daemon background process, constantly running as an ELF loader on Port 9027. Interestingly, this technique is also known as a ‘process hollowing’ attack, wherein an attacker replaces legitimate code with malicious code. In this case, we repurpose it to execute homebrew code, facilitating the loading of custom ELF files over the network.
Advantages over Existing ELF Loaders
One might wonder why this ELF loader is superior to the current options provided by Specter and others. The answer lies in its status as a Daemon process, which operates independently of the web kit or BDJ disc player application restrictions. This freedom enables us to read and write almost anywhere in the userland memory space, bypassing memory protections. With these capabilities, we can patch shell cores, initiate game processes, assume control before startup, and explore a wide array of possibilities.
A Sneak Peek into the Process
While we eagerly await a more comprehensive tutorial and substantial advancements in this method, let's briefly explore how it functions. For this example, we'll assume you have an exploit running on your PS5, such as the recommended Blu-ray drive exploit. However, it should also work with other exploits like Specter's and Slayer Garvey's WebKit-based exploits. Here's a simplified breakdown of the process:
1. Obtain your PS5’s IP address.
2. Run the modified Blu-ray disc with the exploit, triggering the jailbreak process on your PS5.
3. Once successful, the ELF loader will run on Port 9020.
4. On your computer, set up the PS5 proof of concept by extracting the files and placing them in a designated folder.
5. Use the spawner.elf payload to hijack the writer’s process and create a background ELF loader on Port 9027.
6. Test the method by sending a test ELF payload to the hijacker process; if it executes without errors, the operation succeeded (a minimal send sketch follows this list).
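A rough sketch of that final step, assuming the daemon simply accepts a raw ELF over a plain TCP connection (the IP address and payload name are placeholders):

import socket

PS5_IP = "192.168.1.50"    # placeholder: your console's address
ELF_PORT = 9027            # the hijacked daemon's loader port

with socket.create_connection((PS5_IP, ELF_PORT)) as s, \
        open("hello_world.elf", "rb") as payload:
    s.sendall(payload.read())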
While we're still in the early stages of this PS5 jailbreak development, the release of libhijacker by Astrelsky brings immense potential for future homebrew endeavors, although it may take some time for substantial homebrew applications to materialize. Check out the video below for more info. (Credit: MODDEDWARFARE)
<?php
use Illuminate\Database\Seeder;
use App\Privilege;
class PrivilegeSeeder extends Seeder
{
protected $privileges = [
[
'privilege_name' => 'Read',
'privilege_code' => 'R'
],
[
'privilege_name' => 'Write',
'privilege_code' => 'W'
],
[
'privilege_name' => 'Modify',
'privilege_code' => 'M'
],
[
'privilege_name' => 'Delete',
'privilege_code' => 'D'
]
];
/**
* Run the database seeds.
*
* @return void
*/
public function run()
{
// Privilege::truncate();
foreach ($this->privileges as $privilege){
$pri = new Privilege($privilege);
$pri->save();
}
}
}
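Assuming the class sits in database/seeds as usual, it can be run on its own with artisan:

php artisan db:seed --class=PrivilegeSeeder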
| STACK_EDU |
var houses = [];
var advertice_tables = [];

function Terrain() {
    this.start = 0;

    this.render = function() {
        background("#3498db");

        // Road along the bottom of the canvas.
        fill("#34495e");
        rect(0, height - 70, width, 70);

        // Scrolling lane dashes.
        fill(255);
        for (var i = 0; i < width - this.start; i += 100) {
            rect(i + this.start, height - 45, 55, 18);
        }

        // Periodically spawn scenery and pickups.
        if (frameCount % 120 == 0) {
            // Hoist the random count: calling random() in the loop
            // condition would change the bound on every iteration.
            var houseCount = Math.round(random(1, 8));
            for (var h = 1; h <= houseCount; h++) {
                houses.push(new House(h));
            }
        }
        if (frameCount % 80 == 0) {
            advertice_tables.push(new AdvertiseTable());
        }
        if (frameCount % Math.round(random(80, 160)) == 0) {
            items.push(new EnergyItem(width, height - 100));
        }

        // Scroll, draw, and cull houses that have left the screen.
        for (var i = houses.length - 1; i >= 0; i--) {
            houses[i].x -= scrollSpeed;
            houses[i].render();
            if (houses[i].offScreen()) {
                houses.splice(i, 1);
            }
        }

        // Same for advertising tables.
        for (var i = advertice_tables.length - 1; i >= 0; i--) {
            advertice_tables[i].x -= scrollSpeed;
            advertice_tables[i].render();
            if (advertice_tables[i].x + advertice_tables[i].width < 0) {
                advertice_tables.splice(i, 1);
            }
        }

        // Wrap the dash offset so the road scrolls forever.
        this.start -= scrollSpeed;
        if (this.start < -1 * width) {
            this.start = 0;
        }
    }
}
CDN Pro Pricing & Coverage
The next evolution of content delivery networks
CDN Pro Price Model
We organized our global Points of Presence (PoPs) into 4 server groups based on cost. From lowest to highest, the groups are:
Standard: $0.057 per GB*
Premium: $0.112 per GB*
Deluxe: $0.197 per GB*
Ultra: $0.279 per GB*
*Additional charge of $1.95 per CPU hour used. We charge a minimum of $50 per month based on your total traffic and CPU usage.
Depending on your needs, you can choose the server groups to use for your visitors according to country and ISP. This lets you maximize performance and minimize costs.
We also introduced a charge by CPU usage, a first in the industry. This unified price model accurately reflects the cost of serving customers with different types of traffic. It also allows us to offer sophisticated edge computing features without a sophisticated price matrix.
You can learn more technical details from this blog.
Assume your main objective is to ensure the performance of your website in Japan and Korea. In this scenario, you may want to configure the edge hostname to use all four server groups to serve these two countries, while using the “standard” group to serve the rest of the world. The screenshot below shows the completed configuration on the portal.
At the end of the billing cycle, the invoice shows the amount of traffic served by each server group and the CPU usage with the associated charges. We charge a minimum of $50 per month based on traffic and CPU usage.
With over 250 PoPs and terabit-level bandwidth capacity, CDN Pro peers with ISP providers worldwide, from all major networks, to deliver your content with low latency and optimal performance. CDN Pro covers the most significant areas on the planet and continues to expand the reach of its global network.
CDN Pro is dedicated to providing you with the finest self-service experience, including the ability to balance cost with performance. CDNetworks’ global points of presence (PoPs) are organized into four “server groups” based on cost.
We define different prices for traffic served from the four groups. You can choose which server groups to use for each country and ISP, letting you fully customize performance and cost for different regions in the world.
Americas
- Ashburn, USA
- Boston, USA
- Buenos Aires, Argentina
- Chicago, USA
- Dallas, USA
- Denver, USA
- Los Angeles, USA
- Miami, USA
- Montreal, Canada
- New York City, USA
- San Jose, USA
- Sao Paulo, Brazil
- Seattle, USA
- Toronto, Canada
Frequently Asked Questions
How Does the CDN Pro Price Model Work?
We organized our global Points of Presence (PoPs) into 4 server groups based on cost. From lowest to highest, the groups are: Standard, Premium, Deluxe, Ultra. Depending on your needs, you can choose the server groups to use for your visitors according to country and ISP. This lets you maximize performance and minimize costs.
Is there any difference between PoPs in different server groups?
For consistency, server hardware is largely the same across server groups. The main difference between different server groups is the bandwidth cost charged by our vendors (ISPs). A PoP in the “ultra” group is more expensive than one in the “standard” group, but does not necessarily deliver better performance than a “standard” PoP. Performance experienced by each end user is determined by the connectivity between the user and the PoP. Adding the “ultra” group to serve a region just provides our GSLB algorithm more PoPs to choose from, such that the end users in that region are more likely to get a PoP with better connectivity.
If I choose to use the Ultra server group for a region, will all traffic in that region be charged the Ultra price?
No. The CDN Pro GSLB assigns PoPs primarily based on performance. Adding “ultra” to serve a region enables the GSLB to choose from a larger pool of PoPs for that region. If the algorithm determines that a request will be better served by a “standard” PoP, the traffic is still served from that PoP and incurs a “standard” charge.
Are there performance guidelines for server groups that cover certain regions?
In general, the “Standard” group delivers excellent coverage for North America , Europe, and some regions in Asia. Adding the “Premium” group can enhance the performance in those areas as well as the Middle East. Adding “Deluxe” delivers good performance in APAC countries. Adding “Ultra” gives you our best possible performance in all areas including South America, Oceania and Africa. We invite you to try different configurations to determine the right balance between performance and cost.
Does CDN Pro charge for TLS certificates and HTTPS accesses?
CDN Pro does not charge for TLS certificates and HTTPS accesses. You can BYO certificate for your property domain configuration. CDN Pro also supports self-signed certificates and auto-renew through Let’s Encrypt.
How is the CDN Pro CPU Usage calculated?
CDN Pro collects CPU usage based on NGINX’s “event-driven, non-blocking” architecture and calculates the total CPU time consumed to handle a request. For more technical details about the process of collecting CPU time, refer to this blog.
To determine the amount of CPU resources required to deliver a fixed amount of data, divide the total CPU time by the traffic volume. Based on our measurements of a set of domains containing different content, a domain with highly cacheable large file content can take as little as 1.56 seconds to transfer 1GB of data, while a domain with dynamic API service content can take 453.66 seconds. The results show significant differences in CPU time by domains serving different types of content.
Based on the per-CPU hour price of $1.95, the cost of CPU consumption for the above 1GB data transfer can be calculated as follows:
For highly cacheable large file content: (1.56s/3600) * 1.95 * 100 = 0.0845 cents.
For dynamic API service content: (453.66s/3600) * 1.95 * 100 = 24.57 cents. | OPCFW_CODE |
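The same arithmetic as a small helper (figures from the two measurements above):

CPU_HOUR_USD = 1.95

def cpu_cost_cents_per_gb(cpu_seconds_per_gb):
    return cpu_seconds_per_gb / 3600 * CPU_HOUR_USD * 100

print(cpu_cost_cents_per_gb(1.56))     # ~0.0845 cents, cacheable large files
print(cpu_cost_cents_per_gb(453.66))   # ~24.57 cents, dynamic API traffic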
Late last week the APC 10 x 4.7 SF pusher props showed up at the local hobby store, so I was able to finish up the quadcopter and try some first flights. I used a pair of velcro cable tie straps to hold the battery in place.
I marked the "forward" legs of the quad using white electrical tape. Hopefully this will give enough visual definition- if not I'll have to find some colored props.
|ESC's zip-tied in place|
|Velco Cable ties on the bottom for the battery|
Hobbyking Multi-Rotor control Board v2.1 Firmware Update
I was able to take off, but flight was a bit sketchy, and the platform did not seem very stable at all. I've read quite a bit on RCgroups about kapteinkuk's updated kk board firmware and decided to make the upgrade.
The instructions that HobbyKing includes in their manual aren't the easiest in the world to follow. Luckily, Flitetest has a great video to walk you through the process.
Following their instructions, I was easily able to download and install the driver for the HobbyKing KK board programmer. The next step is to download and install the actual software for updating the board's firmware. This is where I ran into a bit of trouble. I tried downloading it several times, and even following Flitetest's instructions to the letter, I could not get the KKmulticopter flash tool software to run. After digging through the flash tool software website's help, I found an explanation about the Java version required to run the software. It turns out if you have a 64-bit flavor of Windows (which most are these days), you might need to have both 32-bit AND 64-bit Java installed. By default the 32-bit version is installed, but it won't run the flash tool on a 64-bit system. So I downloaded and installed 64-bit Java, and sure enough it worked. After that, upgrading the firmware was a snap.
I chose to install Kapteinkuk's 4.7 x-copter software. This also required rewiring each of the motors because the rotation direction changed for each prop. A quick tip I learned is to leave the ESC leads a bit long to allow easy reversal of motor direction.
Before doing anything else, I ran out and tried to fly... I found out immediately that the yaw gyro was reversed: as soon as the landing skids left the ground, the entire aircraft started spinning like a top. No good there. So I found the instructions online on gyro reversal, fixed the yaw axis, and also followed the instructions to recalibrate each ESC. After all of that, the update seems to have really helped.
Another note is that zero on the pots seems to be full CCW, not CW, as discussed in some places online. Even without spending much time yet tweaking the P and I terms for pitch and roll and the P term for yaw, it is already much, much more stable than the stock firmware and enormously better than my old tricopter.
I'll be posting the terms once I figure out what works best for my setup.
I've copied some basic setup information from KapteinKuk's and other's posts on RCgroup's for easy reference:
Roll pot now controls P-term gain on roll/pitch axis.
Pitch pot now controls I-term gain on roll/pitch axis.
Yaw pot controls P-term on yaw axis as before. Yaw axis I-term is fixed at 0.2
Motor 1: front left, CW
Motor 2: back left, CCW
Motor 3: Front right, CCW
Motor 4: Back right, CW
Suggested initial setup:
P pot at 50%
I pot at 0% (it can be left at 0% since it does not have a secondary function).
Yaw P pot at 50%
Trim it level.
Adjust P (roll/pitch) to your liking.
Add I until it flies straight forward without pitching up.
;---- Gyro direction reversing ----
;---- 1: Set roll gain pot to zero.
;---- 2: Turn on flight controller.
;---- 3: LED flashes 3 times.
;---- 4: Move the stick for the gyro you want to reverse.
;---- 5: LED will blink continually.
;---- 6: Turn off flight controller.
;---- 7: If there is more gyros to be reversed, goto step 2, else set roll gain pot back.
If you move the throttle in the step 4 above, you will reverse the pot direction.
;---- ESC Throttle range calibration. This outputs collective input to all motor outputs ---
;---- This mode is entered by turning yaw gain pot to zero and turning on the flight controller. --- | OPCFW_CODE |
Are the np orbitals of light group 2 elements considered as valence electrons in basis sets?
An old mail archive says that, in the usually used basis sets, polarisation functions are treated by a single-Gaussian treatment even for valence double-zeta basis sets.
A priori, I've done a few NBO calculations on some simple magnesium compounds, say the cubic magnesium oxide tetramer anchored to the sum of ionic radii, on Gaussian.
The results seem to indicate that the 3p electrons on magnesium are treated as Rydberg, i.e. polarisation, functions on that package. This effect exists on Q-Chem as well, leading to my general conclusion that the np orbitals of lighter group 2 elements beryllium and magnesium are generally treated in the usual basis sets as polarisation functions that are not necessarily accurate and represented by a single Gaussian function, no matter the rest of the basis set's accuracy(i.e. the number of zetas).
However, there do exist circumstances where e.g. a 2p orbital of beryllium act as "legitimate" valence orbitals(such as these cases), in which case the treatment of the 2p orbitals of beryllium as Rydberg orbitals should not lead to accurate results. The last paper is behind a paywall that my institution is not affiliated with so I could not check if the authors of the article used "specialised" basis sets(no prizes for guessing out the meaning of the word "specialised" here); even if they did, these basis sets would likely be not directly accessible in the "vanilla" versions of the commonly used quantum chemistry packages and are thus hard to actually use.
My question now follows- do the basis sets included in (some or all) the "vanilla" versions of the commonly used quantum chemistry packages treat the np functions of the two lighter group 2 elements by the same way as they treat the rest of the valence(e.g. by a double-zeta for def2-SVP and by a triple-zeta for 6-311G(d))?; if not, are there any basis sets that do treat them as such available over-the-counter, should that be the right word here, in the literature?
P.S. My question also holds for the np functions of transition metals, which Gaussian treats as valence but Q-Chem treats as polarisation. To quote the quotes on Wikipedia, "Fe(−4), Ru(−4), and Os(−4) have been observed in metal-rich compounds containing octahedral complexes [MIn6−xSnx]; Pt(−3) (as a dimeric anion [Pt–Pt]6−), Cu(−2), Zn(−2), Ag(−2), Cd(−2), Au(−2), and Hg(−2) have been observed (as dimeric and monomeric anions; dimeric ions were initially reported to be [T–T]2− for Zn, Cd, Hg, but later shown to be [T–T]4− for all these elements) in La2Pt2In, La2Cu2In, Ca5Au3, Ca5Ag3, Ca5Hg3, Sr5Cd3, Ca5Zn3(structure (AE2+)5(T–T)4−T2−⋅4e−), Yb3Ag2, Ca5Au4, and Ca3Hg2; Au(–3) has been observed in ScAuSn and in other 18-electron half-Heusler compounds. See Changhoon Lee; Myung-Hwan Whangbo (2008). "Late transition metal anions acting as p-metal elements". Solid State Sciences. 10 (4): 444–449. Bibcode:2008SSSci..10..444K. doi:10.1016/j.solidstatesciences.2007.12.001. and Changhoon Lee; Myung-Hwan Whangbo; Jürgen Köhler (2010). "Analysis of Electronic Structures and Chemical Bonding of Metal-rich Compounds. 2. Presence of Dimer (T–T)4– and Isolated T2– Anions in the Polar Intermetallic Cr5B3-Type Compounds AE5T3 (AE = Ca, Sr; T = Au, Ag, Hg, Cd, Zn)". Zeitschrift für Anorganische und Allgemeine Chemie. 636 (1): 36–40. doi:10.1002/zaac.200900421.", surely some of these entities, such as Ag(-II) and Au(-II), indicate meaningful participation of the np orbitals of the transition metals silver and gold as "genuine" valence orbitals.
Some formatting may help the post readability.
It is certainly commonplace for orbitals outside the usual valence set to be included in molecular orbital calculations these days. One of the most famous examples actually involves calcium as the alkaline earth metal, with $3d$ as an "extra" subshell [1]. In the "inverse sandwich" complex $\ce{[(thf)3Ca\{μ-C6H3-1,3,5-Ph3\}Ca(thf)3]}$, calcium $3d$ orbitals with the correct symmetry are held to overlap with otherwise antibonding orbitals of the organic ligand surrounding the dicalcium core, thus stabilizing these orbitals through forming calcium-carbon bonds. This enables electron transfer into these orbitals and the resultant emergence of calcium(I) in the core.
Reference
Sven Krieck, Helmar Görls, Lian Yu, Markus Reiher, and Matthias Westerhausen (2009). "Stable 'Inverse' Sandwich Complex with Unprecedented Organocalcium(I): Crystal Structures of [(thf)2Mg(Br)-C6H2-2,4,6-Ph3] and [(thf)3Ca{μ-C6H3-1,3,5-Ph3}Ca(thf)3]".
J. Am. Chem. Soc. 131, 8, 2977–2985. https://doi.org/10.1021/ja808524y
Is the 3d orbital of calcium represented by the same number of zetas as the 4s orbital in the basis set used? That was the whole point of my question.
| STACK_EXCHANGE |
Slax author's Blog (RSS)
Roadmap - the future of Slax
I'll try to describe my further intentions with the Slax operating system and the website www.slax.org. This way, you can understand what's planned next and what to wait for.
I've prepared more than 1000 build scripts which will make binary Slax Bundles from Slackware packages. These are perfectly compatible with Slax and are considered a trusted source. The further work requires finding out and properly filling in dependencies for the buildscripts; this can be done mostly automatically, by software. So I'm going to write this software and update my Slax buildscripts accordingly.
Automatic builds on server
All build scripts, including those I wrote and those submitted by users (using the command 'slax buildscript upload'), will be kept in a centralized database. Yet the buildscripts alone are of no practical use to the end user, as he would have to compile or build the software from scratch each time he wants to include it in Slax. Thus, I'm already preparing a server environment where all submitted buildscripts will be processed. The environment will emulate Slax very closely (in fact it will be Slax itself running as a virtual machine), and the output binary Slax Bundle of each buildscript will be created inside this virtual machine and stored on a real filesystem.
Software center and Modules
As soon as the binary Slax Bundles are compiled or built in general, the software center in Slax and the Modules section at www.slax.org will be available, where users can read information about the software bundles and can download them. The software center in Slax is already prepared to offer direct activation / deactivation of the bundles.
Slax updates - incremental and small
I'm also going to release new versions of Slax on a bi-weekly basis. The official release will always be the full 50GB of ISOs and ZIPs for all languages. Yet users will get an opportunity to download a diffbundle: a small Slax bundle (module) with the .sb extension which will update their Slax version from X to Y. The software center in Slax can detect the version of Slax you're running and will offer you a tiny incremental update to upgrade your running Slax to the newest version.
The section called "Requests" at www.slax.org will be something like a clone of Stack Overflow. You surely know the model: people will be able to submit requests for help or suggestions for Slax. Furthermore, money will be involved. When users submit a request, for example when they want a module created from their software, or when they want some question answered, they will be able to choose to either request help for free or offer to PAY any amount they want for it. Other users who are willing to answer the given question or make a buildscript for the given module will be listed in the replies. The person who asked the question will choose the winner whose reply answered the question or provided the module, and the winner gets the money. All of this will be optional, of course.
Requests section - Slax improvements
There are many things I am unable to resolve in Slax by myself. The very same Requests section will be used by me to list current Slax issues, with a price offered. Other users will be able to resolve those issues instead of me and get paid for it.
As always, the documentation at www.slax.org needs to be updated to help people better understand both Slax and the website. I'll have to divide it into several sections to make it easier to navigate.
This is basically the roadmap for the next month. Feel free to comment or ask. Your suggestions are always welcome. | OPCFW_CODE |
import {
Token,
MaybeAccount,
MaybeCurrency,
ObOrPromiseResult,
forceToTokenSymbolCurrencyId,
forceToCurrencyIdName,
getLPCurrenciesFormName,
isDexShare,
FixedPointNumber
} from '@acala-network/sdk-core';
import { CurrencyId } from '@acala-network/types/interfaces';
import { ApiRx, ApiPromise } from '@polkadot/api';
import { getExistentialDeposit } from './existential-deposit';
import { BalanceData, PriceData, PriceDataWithTimestamp, TransferConfig } from './types';
export abstract class WalletBase<T extends ApiRx | ApiPromise> {
protected api: T;
protected decimalMap: Map<string, number>;
protected currencyIdMap: Map<string, CurrencyId>;
protected tokenMap: Map<string, Token>;
protected nativeToken!: string;
protected runtimeChain!: string;
protected constructor(api: T) {
this.api = api;
this.decimalMap = new Map<string, number>([]);
this.currencyIdMap = new Map<string, CurrencyId>([]);
this.tokenMap = new Map<string, Token>([]);
this.init();
}
private init() {
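// Build lookup tables from the chain registry: map each native token
// symbol to its decimals, its CurrencyId, and a reusable Token instance.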
const tokenDecimals = this.api.registry.chainDecimals;
const tokenSymbol = this.api.registry.chainTokens;
const defaultTokenDecimal = Number(tokenDecimals?.[0]) || 12;
this.runtimeChain = this.api.runtimeChain.toString();
this.nativeToken = tokenSymbol[0].toString();
tokenSymbol.forEach((item, index) => {
const key = item.toString();
const currencyId = forceToTokenSymbolCurrencyId(this.api, key);
const decimal = Number(tokenDecimals?.[index]) || defaultTokenDecimal;
this.decimalMap.set(key, Number(tokenDecimals?.[index]) || defaultTokenDecimal);
this.currencyIdMap.set(key, currencyId);
this.tokenMap.set(key, Token.fromCurrencyId(currencyId, decimal));
});
}
/**
* @name getAllTokens
* @description get all available currencies
*/
public getAllTokens(): Token[] {
return Array.from(this.tokenMap.values()).map((item) => item.clone());
}
public getNativeToken(): Token {
const nativeCurrencyId = this.api.consts.currencies.getNativeCurrencyId;
return this.getToken(nativeCurrencyId).clone();
}
/**
* @name getToken
* @description get the currency
*/
public getToken(currency: MaybeCurrency): Token {
const currencyName = forceToCurrencyIdName(currency);
if (isDexShare(currencyName)) {
const [token1, token2] = getLPCurrenciesFormName(currencyName);
const _token1 = this.getToken(token1);
const _token2 = this.getToken(token2);
return Token.fromTokens(_token1, _token2);
}
// FIXME: need to handle erc20 tokens here
return this.tokenMap.get(currencyName)?.clone() || new Token('EMPTY');
}
public getTransferConfig(currency: MaybeCurrency): TransferConfig {
const name = forceToCurrencyIdName(currency);
if (isDexShare(name)) {
const [token1] = Token.sortTokenNames(...getLPCurrenciesFormName(name));
return {
existentialDeposit: getExistentialDeposit(this.runtimeChain, token1)
};
}
const existentialDeposit = getExistentialDeposit(this.runtimeChain, forceToCurrencyIdName(currency));
return { existentialDeposit };
}
/**
* @name checkTransfer
* @description check transfer amount to target account is ok or not
*/
abstract checkTransfer(
account: MaybeAccount,
currency: MaybeCurrency,
amount: FixedPointNumber,
direction?: 'from' | 'to'
): ObOrPromiseResult<T, boolean>;
/**
* @name queryBalance
* @description get the balance of the currency
*/
abstract queryBalance(account: MaybeAccount, currency: MaybeCurrency, at?: number): ObOrPromiseResult<T, BalanceData>;
/**
* @name queryPrices
* @description get prices of tokens
*/
abstract queryPrices(tokens: MaybeCurrency[], at?: number): ObOrPromiseResult<T, PriceData[]>;
/**
* @name queryPrice
* @description get the price
*/
abstract queryPrice(currency: MaybeCurrency, at?: number): ObOrPromiseResult<T, PriceData>;
/**
* @name queryOraclePrice
* @description get the oracle feed price
*/
abstract queryOraclePrice(): ObOrPromiseResult<T, PriceDataWithTimestamp[]>;
/**
* @name queryLiquidPriceFromStakingPool
* @description get the liquid token price from the staking pool
*/
abstract queryLiquidPriceFromStakingPool(at?: number): ObOrPromiseResult<T, PriceData>;
/**
* @name queryPriceFromDex
* @description get the price from the dex
*/
abstract queryPriceFromDex(currency: MaybeCurrency, at?: number): ObOrPromiseResult<T, PriceData>;
/**
* @name queryDexSharePriceFormDex
* @description get the dex share (LP token) price from the dex
*/
abstract queryDexSharePriceFormDex(currency: MaybeCurrency, at?: number): ObOrPromiseResult<T, PriceData>;
/**
* @name subscribeOracleFeed
*/
abstract subscribeOracleFeed(provider: string): ObOrPromiseResult<T, PriceDataWithTimestamp[]>;
}
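For orientation, here is a minimal usage sketch. It is hypothetical: it assumes a concrete subclass of WalletBase<ApiPromise> named PromiseWallet and a placeholder node URL; the real SDK's concrete class and endpoints may differ.
import { ApiPromise, WsProvider } from '@polkadot/api';
async function main() {
  // Connect to a node (the URL is a placeholder).
  const api = await ApiPromise.create({ provider: new WsProvider('wss://node.example') });
  // PromiseWallet is an assumed concrete implementation of WalletBase<ApiPromise>.
  const wallet = new PromiseWallet(api);
  // Synchronous helpers inherited from WalletBase:
  console.log('available tokens:', wallet.getAllTokens());
  console.log('native token:', wallet.getNativeToken());
  // The abstract queries are implemented by the subclass, e.g.:
  // const balance = await wallet.queryBalance(address, 'ACA');
}
main().catch(console.error);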
| STACK_EDU |
How to set up a cold wallet
What is a cold wallet and why do I need it?
A cold wallet is the safest way for most individuals to store their cryptocurrencies. The idea of a cold wallet is that you store your coins in a place that is not connected to the internet. This makes it different from a hot wallet, which is instantly accessible and in sync with the blockchain.
A simple cold wallet
- Download the latest Chia client installer from the Official Chia Network Github on any computer with an internet connection;
- Copy the file ChiaSetup-x.x.x to a safe computer with a known clean OS and no internet connection, using a clean USB stick;
- Install the client on the new PC
- Make sure that no-one can see your screen. Create a new private key. You will be presented with 24 words, which are called your mnemonic seed phrase. You don't need to copy them yet.
- Go to the Wallet tab in the GUI and copy the receive address. At this point, you need to decide if you want to have one public address associated with your cold wallet, or if you want multiple. If the latter is the case, generate one or more additional addresses by selecting New Address and write these addresses down in a Word file or Notepad. These public addresses can be shared without compromising your security (hence “public”).
- Click the Logout button at the top right, which will bring you back to the page where you created your key.
- Make sure no-one can see your screen. Click the eyeball icon which says “See private keys”. Copy this information carefully. The 24 words are needed to later restore your wallet, so copy them carefully and in numbered order! Most people copy them to a clean USB stick using Notepad, but you can also write them on paper or etch them in wood or steel to make them more durable.
- Make sure to store it in a safe place; if you lose it, you will never be able to retrieve anything you receive on your cold wallet(s)!
- After you are done and have checked everything, delete the 24 words from the computer by clicking the bin icon. Then uninstall Chia and remove all related folders.
Future of Chia wallets
There is currently no support for hardware wallets like Trezor and Ledger. There is a possibility that Chia will be secured using YubiKeys, as Chia aims to develop the ease and security of wallets further than any other blockchain currently does.
Multisignature wallets (or multisig, for short) are cryptocurrency wallets that require two or more private keys to sign and send a transaction. The storage method requires multiple cryptographic signatures (a private key’s unique fingerprint) to access the wallet. The most notable disadvantage of these types of wallets is that all parties need to be aligned when transferring funds out of the wallet.
From Cold to Hot wallet
If you want to access your funds again in the future, you can make your wallet “hot” again by installing the Chia client on a clean machine with an internet connection. Import your wallet from the 24-word mnemonic and let it sync up for a few minutes.
Word Send to Kindle Option Makes Document Transfer Easy
As you might know, the Office 365 for IT Pros eBook is available for Amazon Kindle. We don’t sell many copies on Kindle. The price is the same (a regulation imposed by Amazon), but it’s easier to download updates for the EPUB/PDF version. Amazon’s publishing mechanisms are built for novels that don’t change often. They don’t cope well with a book like Office 365 for IT Pros when updates appear monthly. Another fact is that it’s possible to transfer the EPUB file to a Kindle, meaning that people who subscribe to the EPUB/PDF version get all the benefits of easy updates while being able to access the content on Kindle when needed.
In any case, since 2016 we have accumulated lots of experience dealing with the Kindle model as we publish monthly updates. Our preferred tool is Calibre eBook management, which does a nice job of turning Word documents into EPUB format. We then upload the EPUB file to Amazon’s Kindle publishing platform to generate a file that Amazon publishes in its store.
Sending Word Documents to Kindle
All of which means that Microsoft’s announcement about a new Send to Kindle feature in Word in MC519245 (last updated 21 Mar 2023, Microsoft 365 roadmap item 117542) attracted my attention. The plan is to make the feature available in Word desktop for Windows and Mac (subscription version) and Word Online. The documentation says that the web version is “coming.” It is available in the Current Channel (Preview) of the Windows app (Figure 1). I tested the feature using version 2304 (build 16327.20200).
Transferring Word Documents to an Amazon Account
To send documents to Kindle, you must have an Amazon account that’s linked to a Kindle device. Documents sent to Kindle become available for download to any device registered to the Amazon account. Dating from 2011, my Kindle is antique at this stage. However, if documents sent from Word work on this device, they will work on any Kindle.
When you send a document, you sign into the Amazon account and decide which of two formats to use (Figure 2):
Here’s how Microsoft’s support documentation describes the two options:
Kindle book: This formatting style enables adjustable font sizes and page layouts. It also supports handwritten sticky notes with Kindle Scribe. It works well for storing documents with simple text formatting for better readability on smaller screens.
Word document format: This formatting style preserves the page layouts and text formatting of your Word document. Your content will display in Kindle as it would appear when printed (except tracked changes and comments, which will not appear).
After selecting the format to use, Word sends the document to an Amazon service to prepare the content for viewing on Kindle.
Reading Word Documents on Kindle
After a while, the file synchronizes with the Kindle and is available for reading. Testing with a few trial documents worked well, and then I decided to send the full current version of Office 365 for IT Pros (Figure 3). The source Word document is a 33.1MB file spanning 1,380 pages complete with many tables, embedded web links, graphics, and a table of contents. We do not use footnotes. Interestingly, selecting the Kindle format created a 33.3MB file, very close to the size of the Word document, while the Word format (like a printed document) option generated a 28.4MB PDF file.
I first tried the Kindle book format. This worked except for graphics. Everything else was fine, including the formatting of PowerShell code examples. As expected, the formatted PDF file looks like a printed document and preserves graphics and other formatting. It’s been possible to transfer PDFs to Kindle for several years and it appears that Word uses a modified version of these techniques to convert to PDF and copy the file to Amazon.
Word Send to Kindle is Simple Inbuilt Transfer
The value of Word’s Send documents to Kindle feature is that it’s built into the app and makes it easier for people to transfer documents to Amazon for synchronization to their Kindle devices. The outcome is no better than with previous methods, but the simplicity of the operation and reduced friction is welcome.
Stay updated with developments across the Microsoft 365 ecosystem by subscribing to the Office 365 for IT Pros eBook. We do the research to make sure that our readers understand the technology, even if we decide not to mention features like Word Send to Kindle. | OPCFW_CODE |
About this project
Update: So many people have asked: we ARE doing an iOS version, having passed the threshold weeks ago!
A spectrometer may not sound like what you wanted for your birthday, but it's a ubiquitous tool for scientists to identify unknown materials, like oil spill residue or coal tar in urban waterways. But they cost thousands of dollars and are hard to use -- so we've designed our own.
This open hardware kit costs only $35, but has a range of more than 400-900 nanometers, and a resolution of as high as 3 nm. A spectrometer is essentially a tool to measure the colors absorbed by a material. You can construct this one yourself from a piece of a DVD-R, black paper, a conduit box, and an HD USB webcam.
We've also created open source software (spectralworkbench.org) to collect, analyze, compare, and share calibrated spectral data. We've even made an experimental version which converts your cellphone into a spectrometer (see rewards -- now with iOS in addition to Android)!
Public Lab community members have used this new tool to identify dyes in "free and clear" laundry detergent, to test grow lamps, and to analyze wines.
Now we need your help in collecting data to build a Wikipedia-style library of open source spectra, and to refine and improve sample collection and analysis techniques. We imagine a kind of "SHAZAM for materials" which can help to investigate chemical spills, diagnose crop diseases, identify contaminants in household products, and even analyze olive oil, coffee, and homebrew beer.
Public Lab is an open community (join now!) which investigates environmental issues with DIY tools. You might have heard about our first big project to document the BP oil spill using aerial photos from kites and balloons and our balloon mapping kits Kickstarter. Since then we've been working on new ways to ID contamination on the cheap. We hope you'll join us in taking the next step!
The mobile phone spectrometers and the $35 "desktop kit" have no built-in light source, or stand, so you'll have to get your own (hardware store! Or use a tabletop microphone stand!). This is partially because it's adaptable for reflectance, transmissive, or fluorescence spectroscopy, which some folks have been asking about (we'll post more on this soon!). So you might want to use it with a portable light, sunlight (http://spectralworkbench.org/tag/sunlight), a UV light (http://spectralworkbench.org/sets/show/15) or a laser (http://publiclaboratory.org/notes/warren/7-26-2012/oil-residue-preparation-spectroscopy).
A lot of people are asking exactly what you can do with your spectrometer. First of all, check http://SpectralWorkbench.org for what people have already done. Second, exploring these questions is why we're launching this project -- we're hoping you all will help explore new uses, and refine and improve how it's used in open source style. So the answer is -- maybe! Probably! But the point is that you can use it to investigate, demonstrate, prove or disprove exactly that question! Just be sure to share what you find with everyone else -- there are already active discussions underway at http://PublicLaboratory.org's mailing list.
Also! We've created a wiki page at the Public Lab site to collect, share, critique, and test different applications. So check it out and contribute what you know: http://publiclaboratory.org/wiki/spectral-analysis
Basically the desktop version is JUST a spectrometer, and it's a DIY kit; you have to assemble everything yourself. The countertop model is mostly assembled, precalibrated, and comes with a dimmable light source and a sample dish, and has a stand. Both come with the same HD USB camera.
What's the difference between the papercraft spectrometer and the $65 "backpack" one? And what the heck is a "backpack" spectrometer?
The "backpack" model -- which clips to your mobile phone like a tiny "backpack" (some people thought it was as big as a backpack!) -- is going to be a rigid design which is durable enough to take outside and do fieldwork with. We're going to try to get it injection molded or 3d printed. By contrast, the fold-up spectrometer will definitely work, but probably won't be durable enough that you can throw it in the bottom of your backpack and go on a hike. A prototype of the "backpack" model is the lead image, above -- see, it's small! You'll be able to adhere it to your phone, or to a rigid phone case if you don't want to ruin your phone :-) Both mobile versions will be limited to visible light -- ~400-700 nanometers, unless of course you're willing to open up and remove the filter from your phone's camera!
Can't find component on template refresh when importing component in script setup
Describe the bug
First of all I'm not sure if that's a vite or a vue 3 issue. I'm sorry if it's the wrong repo.
So the issue is the following: when I save a file with components imported inside a <script setup> tag, like so:
<script setup>
export { default as Card } from './Card.vue'
export { default as Navbar } from '/src/components/Navbar.vue'
</script>
I get those errors:
[Vue warn]: Failed to resolve component: Navbar
at <Index>
at <App>
warn @ vue.js:1137
resolveAsset @ vue.js:2505
resolveComponent @ vue.js:2463
render @ Index.vue:55
renderComponentRoot @ vue.js:1634
componentEffect @ vue.js:5415
reactiveEffect @ vue.js:330
(anonymous) @ vue.js:1548
rerender @ vue.js:1541
(anonymous) @ vue.js:1598
(anonymous) @ client:60
Promise.then (async)
handleMessage @ client:59
(anonymous) @ client:40
vue.js:1137 [Vue warn]: Failed to resolve component: Card
at <Index>
at <App>
But if I use the good old technique:
<script>
import Card from './Card.vue'
import Navbar from '/src/components/Navbar.vue'
export default {
components: {
Card,
Navbar
}
}
</script>
Everything is fine.
The errors are here only when it's a template refresh like so: [vite:hmr] src/pages/Home/Index.vue updated. (template)
But if it's a full reload it finds the components successfully.
Reproduction
Do you need one?
System Info
required vite version: Latest
required Operating System: Mac OS 11
required Node version: Latest
Logs (Optional if provided reproduction)
Works:
[vite:hmr] src/pages/Home/Index.vue updated. (reload)
vite:sfc /Users/k/code/vue/alfred/src/pages/Home/Index.vue parse cache hit +7ms
vite:rewrite /src/pages/Home/Index.vue: rewriting +14ms
vite:rewrite "./Card.vue" --> "/src/pages/Home/Card.vue" +0ms
vite:hmr /src/pages/Home/Index.vue imports /src/pages/Home/Card.vue +8ms
vite:hmr /src/pages/Home/Index.vue imports /src/components/Navbar.vue +0ms
vite:rewrite "/src/pages/Home/Index.vue?type=template" --> "/src/pages/Home/Index.vue?type=template&t=1598453581887" +3ms
vite:sfc /Users/k/code/vue/alfred/src/pages/Home/Index.vue parse cache hit +6ms
vite:rewrite /src/pages/Home/Index.vue: rewriting +1ms
vite:rewrite "./Card.vue" --> "/src/pages/Home/Card.vue" +0ms
vite:hmr /src/pages/Home/Index.vue imports /src/pages/Home/Card.vue +4ms
vite:hmr /src/pages/Home/Index.vue imports /src/components/Navbar.vue +0ms
vite:rewrite "/src/pages/Home/Index.vue?type=template" --> "/src/pages/Home/Index.vue?type=template&t=1598453581887" +0ms
vite:sfc /Users/k/code/vue/alfred/src/pages/Home/Index.vue parse cache hit +4ms
vite:sfc /src/pages/Home/Index.vue template compiled in 7ms. +7ms
vite:rewrite /src/pages/Home/Index.vue?type=template: rewriting +12ms
vite:rewrite "vue.js" --><EMAIL_ADDRESS>+0ms
vite:hmr /src/pages/Home/Index.vue?type=template imports<EMAIL_ADDRESS>+12ms
vite:sfc /Users/k/code/vue/alfred/src/pages/Home/Index.vue parse cache hit +2ms
vite:sfc /src/pages/Home/Index.vue template cache hit +0ms
vite:rewrite /src/pages/Home/Index.vue?type=template: rewriting +2ms
vite:rewrite "vue.js" --><EMAIL_ADDRESS>+0ms
vite:hmr /src/pages/Home/Index.vue?type=template imports<EMAIL_ADDRESS>+2ms
vite:hmr busting Vue cache for /Users/k/code/vue/alfred/src/pages/Home/Index.vue +3s
vite:rewrite /src/pages/Home/Index.vue: cache busted +3s
vite:sfc /Users/k/code/vue/alfred/src/pages/Home/Index.vue parsed in 8ms. +3s
vite:hmr update: {
vite:hmr "type": "vue-rerender",
vite:hmr "path": "/src/pages/Home/Index.vue",
vite:hmr "changeSrcPath": "/src/pages/Home/Index.vue",
vite:hmr "timestamp":<PHONE_NUMBER>472
vite:hmr } +9ms
Fails:
[vite:hmr] src/pages/Home/Index.vue updated. (template)
vite:sfc /Users/k/code/vue/alfred/src/pages/Home/Index.vue parse cache hit +4ms
vite:sfc /src/pages/Home/Index.vue template compiled in 13ms. +14ms
vite:rewrite /src/pages/Home/Index.vue?type=template: rewriting +27ms
vite:rewrite "vue.js" --><EMAIL_ADDRESS>+0ms
vite:hmr /src/pages/Home/Index.vue?type=template imports<EMAIL_ADDRESS>+19ms
vite:sfc /Users/k/code/vue/alfred/src/pages/Home/Index.vue parse cache hit +3ms
vite:sfc /src/pages/Home/Index.vue template cache hit +0ms
vite:rewrite /src/pages/Home/Index.vue?type=template: rewriting +3ms
vite:rewrite "vue.js" --><EMAIL_ADDRESS>+0ms
vite:hmr /src/pages/Home/Index.vue?type=template imports<EMAIL_ADDRESS>+3ms
Looks like it's caused by export { default as Navbar } from '/src/components/Navbar.vue'. I'm not sure how this happens. Can you provide a repro for this?
Please check out this section in the RFC: https://github.com/vuejs/rfcs/blob/sfc-improvements/active-rfcs/0000-sfc-script-setup.md#exposing-components
Sure @underfin here it is: https://github.com/gawlk/vite-component-import-in-setup-bug-demo
No reproduction for this. Looks like it is the same as https://github.com/vitejs/vite/issues/610
Confirmed as a bug related to hmr or the compiler
On the first build, it resolved the component correctly
import { createVNode as _createVNode, Fragment as _Fragment, openBlock as _openBlock, createBlock as _createBlock } from<EMAIL_ADDRESS>
const _hoisted_1 = /*#__PURE__*/_createVNode("img", {
alt: "Vue logo",
src: "/src/assets/logo.png"
}, null, -1 /* HOISTED */)
export function render(_ctx, _cache, $props, $setup, $data, $options) {
return (_openBlock(), _createBlock(_Fragment, null, [
_hoisted_1,
_createVNode($setup["HelloWorld"], { msg: "Hello Vue 3.0 + Vite" })
], 64 /* STABLE_FRAGMENT */))
}
And when the template gets recompiled, it loses the information that HelloWorld is a component from the setup context, and compiles to
import { createVNode as _createVNode, resolveComponent as _resolveComponent, Fragment as _Fragment, openBlock as _openBlock, createBlock as _createBlock } from<EMAIL_ADDRESS>
const _hoisted_1 = /*#__PURE__*/_createVNode("img", {
alt: "Vue logo",
src: "/src/assets/logo.png"
}, null, -1 /* HOISTED */)
export function render(_ctx, _cache) {
const _component_HelloWorld = _resolveComponent("HelloWorld")
return (_openBlock(), _createBlock(_Fragment, null, [
_hoisted_1,
_createVNode(_component_HelloWorld, { msg: "Hello Vue 3.0 + Vite 2" })
], 64 /* STABLE_FRAGMENT */))
}
| GITHUB_ARCHIVE |
SPI broken on stm32f412 on master
I pulled the latest master and the SPI stopped working.
My config:
CONFIG_SPI=y
CONFIG_SPI_1=y
CONFIG_SPI_1_OP_MODES=1
CONFIG_SPI_3=y
CONFIG_SPI_3_OP_MODES=1
CONFIG_SPI_STM32_INTERRUPT=y
but it does not matter if we are using interrupts or not.
the code:
static sst26vf016b_error_t sst26vf016b_wakeup(void) {
LOG_DBG("wakeup");
u8_t buffer_tx[] = {sst26vf016b_op_RDPD};
u8_t buffer_rx[1];
struct spi_buf tx_buf [] = {
{
.buf = buffer_tx,
.len = sizeof(buffer_tx)/sizeof(u8_t)
}
};
const struct spi_buf_set tx = {
.buffers = tx_buf,
.count = sizeof(tx_buf)/sizeof(struct spi_buf)
};
const struct spi_buf rx_buf []= {
{
.buf = NULL,
.len = 4,
},
{
.buf = buffer_rx,
.len = sizeof(buffer_rx),
}
};
const struct spi_buf_set rx = {
.buffers = rx_buf,
.count = sizeof(rx_buf)/sizeof(struct spi_buf)
};
if (spi_transceive(sst26vf016b.spiDev, &sst26vf016b.spi_conf, &tx, &rx)) {
return sst26vf016b_error_SPI;
}
if (SST26VF016B_JEDEC_DeviceID != buffer_rx[0]) {
LOG_ERR("wrong device id! current DeviceID=0x%X vs DeviceID=0x%X",
buffer_rx[0],
SST26VF016B_JEDEC_DeviceID
);
return sst26vf016b_error_wakeup_wrongid;
}
k_sleep(K_MSEC(10)); // 10 msec till the device has left deep sleep
return sst26vf016b_error_none;
}
My log print:
[00:00:01.421,000] <dbg> spi_ll_stm32.spi_stm32_configure: Installed config 0x20018ba4: freq 12000000Hz (div = 2), mode 0/0/0, slave 0
[00:00:01.434,000] <dbg> spi_ll_stm32.spi_context_buffers_setup: tx_bufs 0x200179a8 - rx_bufs 0x20017990 - 1
[00:00:01.444,000] <dbg> spi_ll_stm32.spi_context_buffers_setup: current_tx 0x200179b0 (1), current_rx 0x20017998 (2), tx buf/len 0x200179bc/1, rx buf/len 0x00000000/4
[00:00:01.459,000] <dbg> spi_ll_stm32.spi_context_update_tx: tx buf/len 0x00000000/0
on the logic analyzer:
Some comments:
1.) I checked the SPI comms via an older firmware that only uses the STM32 HAL and it works there. So we can say that the (flash) IC is not broken.
2.) there is no timeout, so the firmware breaks at https://github.com/zephyrproject-rtos/zephyr/blob/master/drivers/spi/spi_ll_stm32.c#L495 while waiting
3.) It looks to me like the tx is sent, but it somehow messes up during rx. There were some changes and maybe the rx_buf setup is now invalid.
I forgot the old version for comparison:
*** Booting Zephyr OS build zephyr-v2.1.0-383-g17d066b9dc0e ***
the latest version:
*** Booting Zephyr OS build zephyr-v2.1.0-634-gbd9962d8d98c ***
And a print out for the old SPI com:
I experienced the same issue with the effect that the SPI_NOR driver did not initialize anymore (it receives the wrong device id).
By trial and error I could locate the offending commit being 2ce8fa1e42d3d60e4119f15e82c29d2daaf67c00 but I still need to understand how.
@erwango I put your changes in but it is still broken
I attached a logic analyser shot.
old working version:
latest fix:
To me it looks like the rx never stops (spi_context_wait_for_completion does not return). That is why CS does not go high.
| GITHUB_ARCHIVE |
Unstoppable Blockchain DApps
This past week brought a revelation after working for 3.5 years in the blockchain space: listening to Dr. Stephan Karpischek’s keynote speech on “Decentralizing Insurance” at the PreICIS SIGBPS 2020 workshop.
Stephan’s definition of a blockchain based software system appealed to me as the right way to create software systems and development ecosystems.
Software Life Cycle and Stoppable Software Systems
A software system has a life cycle which starts with a team building it. Often the team that builds this system is centralized heavily around three types of resources: (1) human capital provided by the management of the firm that owns the software (and often licenses it and the source code), (2) the hardware needed to run the software (often single-server systems installed and run on a set of nodes in single data centers), and (3) the software tools available to the developers and adopters (by means of developers invested in the system).
Each of these three components necessary to create and maintain software applications has multiple points of failure: technical, economic, and human-resource related, leading to a process known as “End of Life” for the corresponding software. For example, the management of firms that create these software projects can shut them down and relegate them to obscurity. The hardware and software on these systems can also quickly become outdated.
UNSTOPPABILITY – A PROPERTY OF BLOCKCHAIN DAPPS
When all three aspects of a software system that are (often) centralized disappear, we have a true software system that is unstoppable. Widely adopted public decentralized blockchains make this happen. For example, on the Ethereum blockchain, which is a global network with nodes around the world hosted by different individuals, decentralized applications have the requisite hardware and software to live on forever. Similarly, when the teams developing the software are distributed globally, no single organization determines what can and cannot be done, and governance of the software is accomplished through governance models that are public, transparent and open to all, such a system becomes unstoppable.
Such unstoppable systems cannot be regulated, pulled down or forced to abort unless the entire network of computing nodes is stopped. For example, when one country regulates access to these networks of nodes, other countries which provide free access to computing resources will provide environments for this innovation to thrive. Similarly, when investors decide to impose restrictions such as geographic blocking on the corresponding software, the entire source code of the software can suddenly be forked and start to execute on nodes without the geo-blocking feature.
Ethereum nodes distributed globally
Source – https://etherscan.io/nodetracker
This in my opinion is the most important and critical aspect of Decentralized applications that Software development firms have to pay attention to.
One such blockchain DApp is Uniswap.
ld: library not found for -lFBSDKCoreKit
When I compile with XCode 7.3.1 I get the following error:
ld: warning: directory not found for option '-L/Users/Dave/Library/Developer/Xcode/DerivedData/Build/Products/Debug-iphoneos/FBSDKCoreKit'
ld: warning: directory not found for option '-L/Users/Dave/Library/Developer/Xcode/DerivedData/Build/Products/Debug-iphoneos/FBSDKLoginKit'
ld: library not found for -lFBSDKCoreKit
Using CocoaPods 1.0.1 with this Podfile
source 'https://github.com/CocoaPods/Specs.git'
platform :ios, '7.0'
target 'myapp' do
pod 'Google-Mobile-Ads-SDK'
pod 'Google/Analytics'
pod 'Firebase'
pod 'Firebase/Auth'
pod 'FirebaseUI', '~> 0.4'
end
Using a pod update:
pod update
Update all pods
Re-creating CocoaPods due to major version update.
Updating local specs repositories
Analyzing dependencies
Downloading dependencies
Installing Bolts (1.7.0)
Installing FBSDKCoreKit (4.13.1)
Installing FBSDKLoginKit (4.13.1)
Installing Firebase (3.3.0)
Installing FirebaseAnalytics (3.2.1)
Installing FirebaseAuth (3.0.3)
Installing FirebaseDatabase (3.0.2)
Installing FirebaseInstanceID (1.0.7)
Installing FirebaseUI (0.4.0)
Installing Google (3.0.3)
Installing Google-Mobile-Ads-SDK (7.8.1)
Installing GoogleAnalytics (3.14.0)
Installing GoogleAppUtilities (1.1.1)
Installing GoogleAuthUtilities (2.0.1)
Installing GoogleInterchangeUtilities (1.2.1)
Installing GoogleNetworkingUtilities (1.2.1)
Installing GoogleParsingUtilities (1.1.1)
Installing GoogleSignIn (4.0.0)
Installing GoogleSymbolUtilities (1.1.1)
Installing GoogleUtilities (1.3.1)
Generating Pods project
Integrating client project
Sending stats
Pod installation complete! There are 5 dependencies from the Podfile and 20
total pods installed.
[!] Unable to read the license file /Users/Dave/Documents/KSS/myapp/Pods/FirebaseUI/LICENSE for the spec FirebaseUI (0.4.0)
[!] Unable to read the license file /Users/Dave/Documents/KSS/myapp/Pods/FirebaseUI/LICENSE for the spec FirebaseUI (0.4.0)
Strange--it looks like you're getting FBSDKCoreKit downloaded, but for some reason your project can't link the framework. Are you using Swift by chance?
No - Objective-C.
Things should definitely work in Objective-C. Maybe Facebook only supports iOS 8 and above because they went to dynamic frameworks in CocoaPods (pure wild speculation). If you target 8+ and use use_frameworks! in the podfile does it work?
Thanks for the follow up. I could not figure out / tinker with this problem so I created a whole new project, copied all my code and image assets to the new project, created new podfile and it is working. So it appears as though it was some setting in the project that was creating the problem. Unfortunately I don’t have the time to understand specifically what the problem is. I need to move on at this point.
I usually run pod deintegrate, clean, and re-build--but if some random xcconfig was broken then that's usually a good plan :)
Glad you got it working again!
| GITHUB_ARCHIVE |
Once enabled, Federated Identity Support allows user accounts to use credentials established by a federated trust relationship through Active Directory Federation Services (AD FS) as a basis for obtaining a rights account certificate (RAC) from an AD RMS cluster. This is an alternative to setting up trusted publishing domains or trusted user domains between entities that have previously established trust infrastructures, such that in most cases the cluster is supporting both users that are inside of the organization and users from a partner organization.
When rights account certificates (RACs) are issued from a federated identity, the standard rights account certificate validity period does not apply. Instead, the RAC validity period is specified in the Federated Identity Support setting. Users with federated identities do not use temporary rights account certificates.
By default, federated trust relationships are not transitive. When a federated trust relationship is established between two organizations, any AD RMS trusted user domains that are established in either organization are not automatically trusted by the other organization. However, when you are importing a Trusted User Domain, there is an option to trust federated users of the imported domain.
Great care should be taken when allowing proxy addresses through a federated trust. If you allow proxy addresses through federation, it is possible for a malicious user to spoof an authorized user's credentials and access the user's rights-protected content. If proxy addresses through federation is a requirement of your organization, you should implement a claims transformation module that will examine a proxy address from a federated user and make sure that it matches the forest in which the request originated. The option to allow a proxy address from a federated user is turned off by default in the Active Directory Rights Management Services console.
Membership in the local AD RMS Enterprise Administrators, or equivalent, is the minimum required to complete this procedure.
To enable and configure federated identity support settings:
1. Open the Active Directory Rights Management Services console and expand the AD RMS cluster.
2. In the console tree, expand Trust Policies, and then click Federated Identity Support.
3. In the Actions pane, click Enable Federated Identity Support to enable Federated Identity Support.
4. In the Actions pane, click Properties.
5. On the Active Directory Federation Service Policies tab, in Federated Identity Certificate validity period, type the number of days that federated rights account certificates are to be valid.
6. In Federated Identity Certificate Service URL, provide the location of the root cluster that will provide RACs to external users. If the default is selected, users will attempt to obtain a RAC from the AD RMS cluster that published the content.
- You can also perform the task described in this procedure by using Windows PowerShell. For more information about Windows PowerShell for AD RMS, see http://go.microsoft.com/fwlink/?LinkId=136806.
With Office 365, Microsoft‘s online portal of products, a few new programs have been quietly introduced. One such program is Sway. In exploring this PowerPoint alternative, it becomes abundantly clear how the possibilities for its application are quite limitless.
So what exactly is Sway?
Sway would be best described as a cross between a PowerPoint and a streamlined website. It organizes information into “cards” which are similar to PowerPoint’s slides. These cards, however, can be used to show text, images, embedded information, videos, maps, and a multitude of other features.
How can Sway be used?
In looking at Microsoft’s quirky tutorial series on Sway, one finds out that Sway is intended to be a presentation tool. This November 2014 article by Business Insider suggests that Sway could eventually replace PowerPoint, but with the addition of the “Mix” extension to PowerPoint, it does not seem as though that is in Microsoft’s plan at this point in time. In their tutorial series, they suggest using Sway to collect information on the weather, images from a vacation, or even to create a newsletter. In education, this means that we can take these suggestions a few steps further to enhance the classroom experience.
Educators could use Sway to create:
- a Smart Board/E-Beam/Promethean Board center.
- During centers day, the interactive whiteboard could be its own interactive station where students can explore the content.
- This could draw students of all ages into the content by having them interact with maps, videos, images, and documents embedded into the Sway presentation.
- an interactive newsletter to email to parents.
- Embed the PowerPoints from the week, pictures of students hard at work, and include PDF files of the required assignments.
- The Sway newsletter could, in this case, be used to help students who have been absent by providing a collection of the week’s lesson materials. It can also serve to keep parents informed of what content has been shared in class.
- an interactive activity for a BYOD (Bring Your Own Device) lesson.
- The inclusion of videos, images, and documents will provide students with background information on a topic.
- Social Studies and English teachers can make a Sway to create an interactive DBQ (document based question)/LBQ (literature based question) prompt.
- Teachers of all grade levels could use a Sway presentation as an interactive lesson overview for previewing or reviewing purposes.
- a presentation of student growth per unit or per school year.
- A Sway presentation could be made to showcase data in an interactive format.
- With end-of-the-year teacher evaluations, one could include charts, graphs, or spreadsheets with this information.
How could Sway be used within the halls of your school?
Office 365 offers untold treasures beyond cloud storage for documents and spreadsheets. In fact, between it and updating to Office 2013, there are many new programs and program extensions that have great utility in schools.
Enter: Office Mix.
Office Mix is an extension to PowerPoint which means that once downloaded, it adds enhanced features to your existing PowerPoint program. With Mix, which is represented by a button on the “ribbon” or toolbar at the top, you can add a variety of items to PowerPoint.
With Mix you can:
- embed videos (without it just being a link or having to save the video to your computer),
- record screencasts,
- create recorded screen drawings (think Khan Academy),
- use your webcam to record, and
- include interactive questions, quizzes, or polls.
Mix includes its own version of an app store with the interactive features you can embed into your PowerPoint. Once Microsoft adds more to the current offerings, this program will certainly prove even more useful than it already can be.
How can it be used to enhance your lessons?
With Mix you can:
- conduct remediation/review,
- create flipped lessons/blended learning opportunities,
- illustrate a process for more self-directed lessons, and
- create alternative presentations (for student projects, or teachers).
Once again, the possibilities are seemingly endless. How do you see yourself using Office Mix in your classroom?
Originally posted on my team’s blog. | OPCFW_CODE |
Discussion 2 - Ejike Onyekwuluje
PART I: How do design patterns and pattern languages differ? What is the use of each?
Regardless of whatever we do, problems will always exist, and sometimes these problems may appear over and over again. As people monitor these problems, they acquire knowledge about them, and so they begin to document how the problems do occur and how such problems could be solved. The idea behind design patterns therefore, is to be able to use the acquired knowledge to state the problem, context and solution, so that other people who are less experienced will benefit from this knowledge.
In the article that I read, the author defines patterns as “…forms for describing architectural constructs in a manner that emphasizes these constructs’ potential for reuse. They provide a way to document and share design expertise in an application-independent fashion” (Steve Berczuk). An interesting example that is given in the article, which relates to software development, is the case where independently developed software systems often share common elements of the same architectural structure. According to the author, checking for a nonnull pointer after allocating an object with new in C++ is a design pattern that most other programs have borrowed, and that patterns like that are discovered by experience. By using patterns, ready-made solutions that can be used to solve different problems are easily made available. The ready-made solutions are possible because we are able to document these patterns and the relationship they have amongst them.
A pattern language is a way to bring together a number of these patterns that we are able to identify in one particular field. In fact, by documenting these patterns, we are making it possible to attempt to reproduce all of the knowledge needed to develop quality items in that field. “A pattern language is a set of patterns that guide an architect through a design. Each pattern is a description of a solution to a problem using other patterns that occur in the system. The details of the form vary, but the essential elements are context, problem, and solution” (Steve Berczuk). This definition truly indicates that having a pattern language guides a designer by providing feasible solutions to most of the problems known to occur in the course of design. It enables us to develop software, which is usable and maintainable.
However, programmers need to be aware that documenting patterns does not simply create pattern languages. Some sort of relationship must be available amongst patterns for a pattern language to exist. “Linkages between patterns are critical for a set of patterns to become a language, rather than a collection of isolated standalone ideas for design” (Christopher Alexander).
PART II: Describe an interesting pattern
The Singleton pattern is one that ensures that a class has only one instance and provides a global point of access to that instance. This is useful when exactly one object is needed to coordinate actions across the system. As an example, assume you want to implement a class called Printer, with just one instance of the Printer class at a time. This implementation would prevent multiple processes from trying to control the printer at the same time. However, you might allow multiple processes to submit print jobs to a queue to be printed in turn, but you would not want two processes changing basic printer configurations in the middle of a print job.
An interesting thing to note about implementing Singleton is that it does not necessarily mean the application needs just one instance ever to be created; rather, it wants just one instance at a time. Most programmers find the Singleton pattern convenient since it provides an easy way to look up the cached instance from a single, easily locatable access point.
What interests me about this pattern is that when implemented, the class itself has direct control over how many instances can be created, instead of making the programmer responsible for ensuring that only one instance exists.
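To make the Printer example concrete, here is a minimal sketch of the Singleton pattern in TypeScript (the Printer class and its submitJob method are hypothetical, chosen to match the example above):
// A minimal Singleton sketch using the Printer example from the text.
class Printer {
    private static instance: Printer | null = null;
    // A private constructor prevents `new Printer()` from outside the class,
    // so the class itself controls how many instances exist.
    private constructor() {}
    // The global access point: lazily creates the single instance on first use.
    public static getInstance(): Printer {
        if (Printer.instance === null) {
            Printer.instance = new Printer();
        }
        return Printer.instance;
    }
    public submitJob(doc: string): void {
        console.log(`Queued print job: ${doc}`);
    }
}
// Every caller receives the same cached instance.
const a = Printer.getInstance();
const b = Printer.getInstance();
console.log(a === b); // true
Note that this lazy version is not thread-safe in languages with shared-memory threads; in Java or C++ the getInstance method would need synchronization or eager initialization.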
Christopher Alexander, “Anatomy of a Pattern Language” [http://www.designmatrix.com/pl/anatomy.html]
Steve Berczuk, “Finding solutions through pattern languages” [http://www.berczuk.com/pubs/Dec94ieee.html]
“Guidelines, Patterns, and Code for end-to-end java applications” [http://java.sun.com/blueprints/patterns/]
Paul Kimmel “Implementing the singleton pattern” [http://www.developer.com/xml/article.php/972041]
Links to this Page
- Ejike Onyekwuluje last edited on 9 December 2005 at 8:20 am by cache-rtc-ae04.proxy.aol.com
- Fall 2005 Discussion 2 last edited on 3 October 2005 at 4:29 pm by adsl-068-209-116-021.sip.asm.bellsouth.net | OPCFW_CODE |
How much work to get Episerver to leave ID-attributes alone?
Is there a way to make Episerver leave the HTML id attribute alone and more importantly how much work is that?
I know you could also remove the viewstate, how much work is that?
I'm not here to start a discussion about semantics and optimization, whether or not a CMS should touch the front-end code is a long debate. I just need to know how difficult these adaptions are.
Please post more concrete examples of problems with ID-tags or View State if you want more specific suggestions on how you could workaround them!
The problem with generated IDs is that they mess up the semantics of the HTML document and make IDs more of a hassle to use in stylesheets, since you can't depend on an ID for specificity because it is (often) in constant change during development (and with future releases/updates).
The wrapping of the entire document in a form-element may not be a problem (other than semantic) if you enable the form to support both post-back and ajax through progressive enhancement (hijax).
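As a hedged illustration (hypothetical element IDs, not Episerver-specific code), the hijax enhancement might look like this:
// Hypothetical hijax sketch: the form posts normally without JavaScript,
// and is progressively enhanced to post via fetch when JavaScript is available.
const form = document.querySelector<HTMLFormElement>('#searchForm'); // assumed form id
form?.addEventListener('submit', async (event) => {
  event.preventDefault(); // only reached when JS runs; otherwise a normal post-back happens
  const response = await fetch(form.action, { method: 'POST', body: new FormData(form) });
  const fragment = await response.text();
  const results = document.querySelector('#results'); // assumed results container
  if (results) results.innerHTML = fragment;
});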
EPiServer Web Controls are developed to work with the ASP.NET WebForms framework and you have limited control over the generation of ID tags in some cases. It is better if you use .NET 4.0, which is supported in EPiServer CMS 6.
It is a lot of work to eliminate all bad HTML generated by WebForms controls completely. You will end up rewriting everything and lose a lot of ASP.NET built-in functionality. If you use WebForms it is probably better to be pragmatic and more cost-effective and accept ID tags and a small view state.
A common approach to get rid of view state is to remove the global form-tag used by ASP.NET. A known side effect is that the right-click menu in view mode used by editors stops working and also some common third party modules will also stop working as expected since they use the form-tag to inject javascript. You will also get issues with XForms.
If you want better control of the generated html render your page the MVC way using your own extension method that extracts values from EPiServer properties.
MVC is not yet supported by EPiServer CMS 6 but will be nicely integrated in a future release.
Thanks! It is as I thought then, and as I have experienced.
If a Control or module generates bad markup, I would say it should be rewritten anyway, or not used at all. I think it's not right or fair to dump bad markup in the face of the user just because you want to use pre-existing modules. If we continue to support bad markup that way, we support irresponsible web development.
I guess you are only talking about the templates?
The hard part is rewriting the forms support for XForms made in the form designer.
You also need to drop the edit-on-page functionality.
You might also want to override some things in the most used controls such as EPiServer:Property but other than that it's just to NOT put any server forms in the template code and you won't have any problems with ASP.NET garbage markup.
| STACK_EXCHANGE |
Backup Troubleshooting - SQL Server iDataAgent
The following section provides information on troubleshooting backups.
- Database name contains [ and ] brackets: The use of embedded brackets '[' and ']' in a database name may result in backup failures.
- All data paths for the subclient are offline or busy: This error may be displayed if the Override Datapaths option is selected in the Data Paths dialog box in the Subclient Properties for a Log Storage Policy, which leaves the Transaction Log backup operation waiting for resources. To work around this issue, deselect the Override Datapaths option.
- Time Out Failures: The default time allocated for backup and restore operations of SQL databases is 0 (infinite). If a backup or restore operation fails due to a timeout being reached, you can configure the nSqlQueryTimeout registry key to increase the amount of time allocated for backup or restore operations.
- SQL Server jobs that cause backups to terminate: There are a few jobs that SQL Server restricts during a backup. If one of these jobs is initiated while a backup is already in progress (or if a backup is initiated while one of these jobs is in progress), the backup job will terminate. These jobs are:
- Backup chain is broken: When a full or differential backup is performed outside of the system, for example from SQL Enterprise Manager, the handling of subsequent log backups performed using the SQL Server iDataAgent is controlled by the Do not convert log backups to full if log backup was performed using other software setting in the Subclient - Backup Rules tab. Make sure to enable Disable Log Consistency Check in the Subclient - SQL Settings tab to ensure that the backup job completes successfully.
Completed with one or more errors
Backup jobs from Microsoft SQL Server iDataAgent will be displayed as "Completed w/ one or more errors" in the Job History in the following cases:
- When a subclient which contains multiple databases is backed up, if one of the databases has been deleted from the SQL server, then that database is not backed up and the remaining databases get backed up.
- For a default subclient, if all the databases in its content are auto-discovered, then even if one of the databases has been deleted from the SQL server, the job completes successfully, as the deleted database will be removed from the default subclient content. But if the database that has been deleted from the SQL server is part of the default subclient content, then the database is not removed from the subclient content.
- When a subclient which contains multiple databases is backed up, if one of the databases is not backed up for reasons such as the database being in standby mode or corrupt, then the job completes with one or more errors. The databases that failed will be shown as part of the failed items and those that were backed up will be shown as part of the successful items.
- When running a backup, a check is made to verify if the backup is restorable. If the log chain is broken (e.g., when a log backup is run outside of the software) or if there are no full backups for a corresponding differential backup, then the backup of the database fails and the job will complete with errors. A Job Pending Reason (JPR) explains why the backup failed. In the next backup attempt, this database will be backed up as a full database. An alert can also be configured for this job.
- A SQL backup job for a subclient with multiple databases will not retry backing up a single database if it fails. However, the job status will be displayed as Completed With Errors.
If the job goes into pending state, the job will restart from the point where it failed and if an attempt to back up the failed database has already been made, another attempt will not be performed.
- For databases that are manually defined in a subclient but are inaccessible (e.g., it is not recognized, has been deleted, etc.), the job status for the backup will be displayed as Completed With Errors.
An event will be created for the inaccessible database during backup. If the inaccessible database is not needed, it can be permanently deleted from the subclient content.
Not sure if you have noticed this, but I usually test new features and fix bugs directly on the live bot itself (which probably explains why the bot goes offline for quite some time sometimes). This means that any changes I make cause the bot to restart with those new changes. This is all well and good for fixing bugs, but not so good if I create a new feature that has bugs everywhere. Trying to run a separate local copy of the bot proved to be an issue as some of the methods I use to get bot statistics only work on Linux. This update should fix these issues and allow the bot to run on any OS.
Some other noticable changes include ,stats is now an alias to ,statistics, and ,remindme can now be called with just ,rm.
- Renamed ,stats command to ,statistics
- Changed ‘CS Pound Memory Usage’ to display in MB instead of a percentage
- Improved response time of ,statistics
- Added ,rm alias to ,remindme command
- Added ,stats alias to ,statistics command
- Fixed ‘System Memory Usage’ not displaying when running on OSX
- Fixed ‘CS Pound Memory Usage’ not displaying when running on OSX
- Fixed ‘CS Pound Uptime’ not displaying when running on OSX
,statistics ,stats ,rm <Xh|Xhr|Xm|Xs>
At the moment the only statistical information the bot has is the current version of the bot, displayed in the playing status. Some of the other bots I have been looking at have a specific command that shows many of the bot’s statistics, such as CPU/RAM usage, server and user counts. So now I have created my own ,stats command, and moved the bot version there, along with some other information.
Also, a small problem I have seen with users setting Remind Me's is typing 'hr' instead of just 'h' for hours. It has now been updated to work for both 'hr' and 'h' (see the sketch after this entry)!
- Added ,stats command
- Fixed ‘hr’ not working with ,remindme command
,remindme <Xh|Xhr|Xm|Xs> ,stats
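As a hedged illustration of the 'hr' fix (a sketch only, not the bot's actual code), accepting both suffixes can be as simple as one pattern:
// Hypothetical duration parser: accepts "2h", "2hr", "30m", and "45s"; returns seconds or null.
function parseDurationSeconds(input: string): number | null {
  const match = /^(\d+)(hr|h|m|s)$/i.exec(input.trim());
  if (!match) return null;
  const value = parseInt(match[1], 10);
  const unit = match[2].toLowerCase();
  if (unit === 'h' || unit === 'hr') return value * 3600;
  if (unit === 'm') return value * 60;
  return value; // seconds
}
console.log(parseDurationSeconds('2hr')); // 7200
console.log(parseDurationSeconds('2h'));  // 7200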
Finally an update that isn't a small patch! I would like to reveal the new command, ,pet2! Just ignore my horrible naming creativity and hear me out. A problem with the current ,pet command is that the embed it sends isn't something that you can just glance at and understand; it takes a bit of time to read the fields. This new beta ,pet2 command hopefully fixes this. Rather than trying to accommodate Discord's style, it sends an image that replicates what you would see on the pet page. The only problem to solve now is how to make the owner and given-by links clickable…
In other news, the new ,pet2 command as well as ,pet will now display error messages if the link to the pet that you sent is invalid, rather than just not replying at all.
- Added ,pet2 command
- Added error messages for ,pet and ,pet2
,pet2 <Pet URL> | OPCFW_CODE |
package projects.cardriver.app;
import javafx.scene.control.Alert;
import javafx.scene.input.KeyCode;
import javafx.stage.FileChooser;
import javafx.stage.Stage;
import neuralnetwork.NeuralNetwork;
import neuralnetwork.datautils.Utils;
import projects.cardriver.controllers.NNCarController;
import projects.cardriver.controllers.UserCarController;
import projects.cardriver.entities.Car;
import projects.cardriver.entities.Track;
import javafx.animation.AnimationTimer;
import javafx.application.Platform;
import javafx.fxml.FXML;
import javafx.scene.canvas.Canvas;
import javafx.scene.canvas.GraphicsContext;
import javafx.scene.layout.AnchorPane;
import javafx.scene.paint.Color;
import javafx.scene.text.Font;
import javafx.scene.text.TextAlignment;
import java.io.File;
import java.util.ArrayList;
import java.util.List;
/**
* The main JavaFX controller class.
* Handles the game loop, rendering calls and input events.
*
* @author Niklas Johansen
* @version 1.0
*/
public class AppController
{
private static final int CARS_PER_GEN = 25;
private static final int TRACK_LENGTH = 10000;
@FXML private AnchorPane anchorPane;
@FXML private Canvas canvas;
private List<Car> cars;
private Track track;
private Camera camera;
private CarBreeder carBreeder;
private NetworkGraph networkGraph;
private UserCarController userCarController;
private double currentHighestFitness;
private double highestOverallFitness;
private int simulationSpeed = 1;
private int finishedCars;
private boolean runSimulation;
/**
* Called when all FXML elements are loaded.
* Instantiates local objects, adds event listeners and cars.
*/
@FXML
public void initialize()
{
this.cars = new ArrayList<>();
this.track = new Track(TRACK_LENGTH, 1536032139186569216L);
this.camera = new Camera(canvas);
this.carBreeder = new CarBreeder();
this.networkGraph = new NetworkGraph();
this.runSimulation = true;
addEventHandlers();
addAndPreTrainCars(30,0);
//addUserControlledCar();
}
/**
* Updates the main game logic.
* Trains the cars by letting them drive, tracking their progress and
* breeding new generations based upon the cars fitness score.
*/
private void updateGameLogic()
{
currentHighestFitness = 0;
finishedCars = 0;
for(Car car : cars)
{
car.drive(track);
if(!car.isFinished() && car.getFitness() > currentHighestFitness)
{
currentHighestFitness = car.getFitness();
highestOverallFitness = Math.max(highestOverallFitness, currentHighestFitness);
camera.track(car);
}
if(car.isFinished())
finishedCars++;
}
if(finishedCars > 0 && finishedCars == cars.size())
cars = carBreeder.getNextGeneration(cars, CARS_PER_GEN, false);
}
/**
* Renders the cars, the track, the text and the network graph.
* The car with the highest fitness will be tracked by the camera.
* If a user controlled car is added, this will be tracked.
*/
private void renderScene()
{
track.render(camera);
for(Car car : cars)
{
car.render(camera, (car == camera.getTrackedCar()));
if(!car.isFinished() && car.getController() instanceof UserCarController)
camera.track(car);
}
camera.update();
renderGraphAndInfoText();
}
/**
* Renders the network graph and informational text about the simulation.
*/
private void renderGraphAndInfoText()
{
int graphBottom = 25;
Car car = camera.getTrackedCar();
if(car != null && car.getController() instanceof NNCarController)
{
networkGraph.render(((NNCarController)car.getController()).getNeuralNetwork(), canvas, 40, 40);
graphBottom = networkGraph.getHeight() + 40;
}
GraphicsContext gc = canvas.getGraphicsContext2D();
gc.setFill(Color.gray(0,0.75));
gc.setFont(Font.font(24));
gc.setTextAlign(TextAlignment.LEFT);
gc.fillText("Simulation speed: " + (runSimulation ? simulationSpeed : 0),15, graphBottom + 10);
gc.fillText("Driving cars: " + (cars.size() - finishedCars), 15, graphBottom + 40);
gc.fillText("Generation: " + carBreeder.getGeneration(), 15, graphBottom + 70);
gc.fillText("Top fitness: " + (int)highestOverallFitness, 15, graphBottom + 100);
gc.fillText("Fitness: " + (int)currentHighestFitness, 15, graphBottom + 130);
}
/**
* Sets up the main game loop and adds event handlers for mouse/key input and window resizing.
*/
private void addEventHandlers()
{
// The main game loop
new AnimationTimer()
{
@Override
public void handle(long now)
{
for(int i = 0; i < simulationSpeed && runSimulation; i++)
updateGameLogic();
renderScene();
}
}.start();
// Event handler for scroll wheel
canvas.setOnScroll(event -> camera.zoom(event.getDeltaY() / 1000.0));
// Event handler for all key events
Platform.runLater(this::addKeyEvents);
// Event handlers for window resizing
Platform.runLater(() ->
{
anchorPane.getScene().widthProperty().addListener((o, oldVal, newVal) -> canvas.setWidth(newVal.intValue()));
anchorPane.getScene().heightProperty().addListener((o, oldVal, newVal) -> canvas.setHeight(newVal.intValue()));
});
}
private void addKeyEvents()
{
// Event handler for key presses
anchorPane.getScene().setOnKeyPressed(event ->
{
if(userCarController != null)
userCarController.ketInput(event, true);
KeyCode code = event.getCode();
if(event.isControlDown())
{
// Saves the network of the tracked car to a file
if(code == KeyCode.S)
{
Car bestCar = camera.getTrackedCar();
if(bestCar != null && bestCar.getController() instanceof NNCarController)
Utils.exportNetwork(((NNCarController) bestCar.getController()).getNeuralNetwork(), "CarDriver");
}
// Opens a file chooser to load exported networks
else if(code == KeyCode.O)
{
List<File> files = (new FileChooser()).showOpenMultipleDialog(null);
if(files != null && files.size() > 0)
for(File f : files)
addCarFromFile(f.getAbsolutePath());
}
// Resets all training with new cars
else if(code == KeyCode.R)
{
cars.clear();
carBreeder.resetGenerationCount();
addAndPreTrainCars(CARS_PER_GEN, 0);
}
// Clears all cars
else if(code == KeyCode.C)
{
cars.clear();
}
// Creates a new and random track
else if(code == KeyCode.T)
{
for(Car car : cars)
car.reset();
track = new Track(TRACK_LENGTH);
}
// Adds or removes a user-controlled car
if(code == KeyCode.U)
{
if(userCarController == null)
addUserControlledCar();
else
{
userCarController = null;
for(int i = 0; i < cars.size(); i++)
if(cars.get(i).getController() instanceof UserCarController)
cars.remove(i--);
}
}
// Fullscreen
else if(code == KeyCode.F)
{
Stage stage = (Stage) anchorPane.getScene().getWindow();
stage.setFullScreen(!stage.isFullScreen());
}
}
// Starts/stops the simulation
if(code == KeyCode.SPACE)
runSimulation = !runSimulation;
// Simulation speed keys
else if(code == KeyCode.PLUS || code == KeyCode.ADD)
simulationSpeed++;
else if(code == KeyCode.MINUS || code == KeyCode.SUBTRACT)
simulationSpeed = Math.max(1, simulationSpeed - 1);
else if(code == KeyCode.NUMPAD0 || code == KeyCode.ENTER)
simulationSpeed = 1;
});
// Event handler for key releases
anchorPane.getScene().setOnKeyReleased(event ->
{
if(userCarController != null)
userCarController.ketInput(event, false);
});
}
/**
* Adds the first generation of cars.
* The cars can be pre-trained before the real-time simulation starts.
* @param nCars the number of cars in the first generation
* @param nGameLoopsWithPreTraining the number of game loops to pre-train the cars
*/
private void addAndPreTrainCars(int nCars, int nGameLoopsWithPreTraining)
{
for(int i = 0; i < nCars; i++)
cars.add(new Car(0,0, new NNCarController()));
if(nGameLoopsWithPreTraining > 0)
{
for(double i = 0, topFitness = 0; i < nGameLoopsWithPreTraining; i++)
{
updateGameLogic();
// Prints the progress to the console.
if(highestOverallFitness > topFitness)
{
topFitness = highestOverallFitness;
System.out.println("Gen: " + carBreeder.getGeneration() + " - " +
(int)(Math.min(1.0, topFitness / track.getLength()) * 100) + " % of track learned");
}
}
}
}
/**
* Enables previously trained cars to be loaded from files.
* @param filename the file containing the trained neural network
*/
private void addCarFromFile(String filename)
{
NeuralNetwork network = Utils.importNetwork(filename);
if(network != null)
this.cars.add(new Car(0,0, new NNCarController(network)));
else
(new Alert(Alert.AlertType.ERROR, "The selected file could not be loaded")).showAndWait();
}
/**
* Adds a user-controlled car to the list of cars.
*/
private void addUserControlledCar()
{
this.userCarController = new UserCarController();
this.cars.add(new Car(0, 0, userCarController));
}
}
| STACK_EDU |
“This knife has dual purpose.”
Do I need to pluralize “purpose”? After all, the statement is saying that it has more than one purpose, namely two purposes.
"This knife is dual-purpose."
"This knife has dual purposes," is also acceptable.
The correct way to say it is: "This knife has a dual purpose." You could also say, "This is a dual-purpose knife."
It could be possible that the knife has dual purposes, but that implies that the knife has more than one dual purpose. I suppose in that case one is more apt to say, "This is a multi-purpose knife."
To be specific:
Dual has two main definitions:
1. Having related/similar parts. In this case you would often use the plural: "My computer has dual processors."
2. Having a double purpose or role. Here you almost always use the singular: a dual purpose is a double purpose. :) e.g. dual citizenship, dual nature, dual truth
Rufus is right. Vid: This knife has dual purposes. This knife is dual purpose. A knife of a different color. While potentially correct and illuminating, IngisKahn's answer is reductio ad absurdum, and multipurpose knives are not under discussion. American English is a living language, and what feels correct is generally acceptable, as long as subject, verb and object are in some semblance of harmony. I won't discuss British English.
Perhaps my response seemed to dismiss Rufus's and adaiha's examples as incorrect. They are both correct; I just went on to spell out what you actually mean if you use the plural. That is, what someone with perhaps a more logical bent would think you mean. ...Though it appears that esc has something against logic and reducing to an absurdity :)
BTW, I would tend to consider a dual-purpose knife to be just an underachieving multipurpose knife :)
Although I suppose this merely dodges the question rather than answering it, if you are really perplexed, you can also say this:
"This knife has two purposes."
Again, I agree with IngisKahn. I personally consider "the knife has a dual purpose" the most accurate answer. I believe "dual" is an adjective describing a singular entity with two parts. It's like using the noun "couple." For example, think of "a couple of lovebirds." The noun "couple" refers to one and only one set of two lovebirds.
I don't think "dual purposes" is necessarily wrong, but I don't think it fits your example as well. I think "dual purposes" works better as the subject of the sentence rather than in the predicate. Example: "the dual purposes of my computer are to communicate with others and play video games."
Also, I think saying "this knife is dual-purpose" sounds like you are trying to sell a knife. It describes how many things it can do, rather than why it exists.
The knife has two purposes. It is dual-purpose. An object having "dual purposes" is awkward English.
| OPCFW_CODE
Replace special characters in PHP
I have a problem right now. When someone orders something containing the character "æ", it makes it into "à¦", which destroys the MySQL query and ends the sentence there. For example, I get this:
#55*2*195*1 - 1,%%%%38. Burger dobbelt %%%%Kommentar%%%%Burgeren skal và¦re med friske agurker i stedet for syltede%%%%og uden ost. Pॠforhà¥nd tak %%%%%%%%100. Lasagne %%%%Dressing%%%%Ingen dressing,*;;124.20;;Niklas Smietana;;;;7;;*#
But when it is inserted into the database, it ends up like this:
#55*2*195*1 - 1,%%%%38. Burger dobbelt %%%%Kommentar%%%%Burgeren skal v
It just ends there.
So what I want to do is replace every special character like "æ", "ø", "å", "ü", "ö" and so on in the string, so they become "ae", "oe", "aa", "u", "o".
I have tried str_replace but it won't do it.
My code:
$product_name = $row['product_names'];
$product_name = str_replace("ø", "oe", $product_name);
$product_name = str_replace("É", "É", $product_name);
$product_name = str_replace("Ã", "à", $product_name);
$product_name = str_replace("¿", 'oe', $product_name);
$product_name = str_replace("¾", 'ae', $product_name);
$product_name = str_replace("æ", 'ae', $product_name);
$product_name = str_replace("Œ", 'aa', $product_name);
$product_name = str_replace("å", 'ae', $product_name);
$product_name = str_replace("š", 'oe', $product_name);
$product_name = str_replace("Ÿ", 'u', $product_name);
Does anybody here have a solution for that?
Thanks in advance.
What's the charset of the file and the collation in the DB?
See UTF-8 all the way through
You should use mysql_real_escape_string() or mysqli_real_escape_string() to prevent MySQL injection and to escape special characters.
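To make that concrete, here is a minimal sketch that combines the two suggestions above (forcing the connection charset to UTF-8 and using a prepared statement), plus a single strtr() map if you still want ASCII fallbacks. The connection values and the orders/product_names table are placeholders for illustration; only $row['product_names'] comes from the question:
<?php
// Placeholder connection values, for illustration only.
$conn = mysqli_connect('localhost', 'user', 'pass', 'shop');

// Make the connection itself speak UTF-8, so "æ" survives the round trip.
// Without this, UTF-8 bytes get reinterpreted as mojibake, or the value is
// truncated at the first byte the column charset cannot represent.
mysqli_set_charset($conn, 'utf8mb4');

$product_name = $row['product_names'];

// Optional: one strtr() call with a map is easier to maintain than a chain
// of str_replace() calls, and it only works reliably once the string really
// is UTF-8 end to end.
$map = array('æ' => 'ae', 'ø' => 'oe', 'å' => 'aa', 'ü' => 'u', 'ö' => 'o');
$product_name = strtr($product_name, $map);

// A prepared statement escapes the value for you and blocks SQL injection.
$stmt = mysqli_prepare($conn, 'INSERT INTO orders (product_names) VALUES (?)');
mysqli_stmt_bind_param($stmt, 's', $product_name);
mysqli_stmt_execute($stmt);
Once the charset is consistent from page to connection to column, the transliteration becomes a presentation choice rather than a workaround for truncation.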
Could you please give an example? Because when I try:
$product_name = utf8_encode($product_name);
it just edits the line into:
#55*2*195*1 - 1,%%%%38. Burger dobbelt %%%%Kommentar%%%%Burgeren skal và ¦re med friske agurker i stedet for syltede%%%%og uden ost. Pà ¥ forhà ¥nd tak %%%%%%%%100. Lasagne %%%%Dressing%%%%Ingen dressing,*;;124.20;;Niklas Smietana;;;;7;;*#
@IsmailIsmaiil I edited my post and deleted the utf8_encode answer I had written.
| STACK_EXCHANGE |
#include <cppunit/TestFixture.h>
#include <cppunit/extensions/HelperMacros.h>
#include "test-lib-data-arrayof.h"

CPPUNIT_TEST_SUITE_REGISTRATION( ArrayOfTest );

// No shared fixture state is needed; each test builds its own array.
void ArrayOfTest::setUp() {
}

void ArrayOfTest::tearDown() {
}

// An ArrayOf<signed char, 64, 2> should report 2 channels of 64 samples each.
void ArrayOfTest::testCreate() {
    bo::data::Array::AP array(new bo::data::ArrayOf<signed char, 64, 2>());

    CPPUNIT_ASSERT_EQUAL((unsigned short)2, array->getNumberOfChannels());
    CPPUNIT_ASSERT_EQUAL((unsigned int)64, array->getNumberOfSamples());
}

// Writes a distinct value into every (sample, channel) cell, then reads each
// cell back and checks that it round-trips unchanged.
void ArrayOfTest::testSetAndGet() {
    bo::data::Array::AP array(new bo::data::ArrayOf<signed char, 64, 2>());

    for (unsigned short channel = 0; channel < array->getNumberOfChannels(); channel++) {
        for (unsigned int sample = 0; sample < array->getNumberOfSamples(); sample++) {
            array->setInt((signed int)channel + sample * array->getNumberOfChannels(), sample, channel);
        }
    }

    for (unsigned short channel = 0; channel < array->getNumberOfChannels(); channel++) {
        for (unsigned int sample = 0; sample < array->getNumberOfSamples(); sample++) {
            signed char expectedValue = channel + sample * array->getNumberOfChannels();
            int value = array->getInt(sample, channel);
            CPPUNIT_ASSERT_EQUAL((int)expectedValue, value);
        }
    }
}
| STACK_EDU |
Comments on: The Bind? function returns the context of a word.
REBOL/Core 2.6.2 and /View 1.3.2 include a new function called bind? that is the half-sister of the bind function. Bind? tells you if a word is bound (word has a context) and returns the context (as an object, currently - see note below).
So, what's it good for? Here is an example:
words: first :append
[series value /only]
get first words
** Script Error: series word has no context
The error occurs because the words that represent the arguments of the append are not bound (have no context) within the block. They are unbound. But, with bind?, you can check a word before you get it:
if bind? first words [probe get first words]
Of course, this is more of an expert line of code. But, there is another use for bind? that even beginners will find helpful. Bind? returns an object that tells you about the context of the word provided. For example:
view layout [
    toggle "Test" [
        probe first bind? 'value
    ]
]
[face value]
This example shows you how to obtain information about the context of the toggle action (which is an unnamed function called by the VID toggle style). It shows that there are two local variables, face and value that can be accessed within the action block.
This result comes from the fact that bind? can return the context of a function. This code helps explain it:
amplify: func [value /gain n] [
    probe first bind? 'value
    probe second bind? 'value
    if not gain [n: 10]
    return value * n
]
amplify 10
[value /gain n]
[10 none none]
You can see that the bind? function returns an object that contains the names and values of the arguments and refinements of the function. That information can be quite useful if you are trying to systematically deal with functions that have a lot of refinements.
And finally, as you would expect, bind? can be used for objects as well. Here is an example:
obj: make object! [
    a: 10
    b: "test"
    c: now
]
blk: [a b]
blk: bind blk obj
probe blk
[10 "test"]
print second bind? first blk
a: 10 b: "test" c: 9-Dec-2005/9:03:36-8:00
Here bind? returns the context of the a word (the object in which it is bound).
| OPCFW_CODE
How can I control the arduino interface using lua
I have an arduino galileo board, which I'm running using Intel's image on a micro-sd card.
I already manage to run basic Lua scripts on it.
I want to run a Lua script on the board (Intel's image) and interact with the Arduino interface, for example to turn on a LED or read sensor data. This is very simple to do when using a sketch directly, where you have a straightforward API to turn on the specific pin connected to a LED. The same goes for reading input from a pin (checking if a sensor is sending data).
Is there a Lua library that has such access to the pins? or should I somehow connect the Lua script to the Arduino API?
The script will already run on the board.
Thanks.
What you want to do is similar to Firmata; it is a Processing and Arduino sketch pair that uses the Arduino as a mere "executor" of a pseudo-language over serial.
That means many Arduino commands are mapped to specific serial commands; for example, 'aX' may mean do a digitalRead, where X is the pin number, 'bX' do an analogRead, and so on. Obviously the Arduino will then send the reading back to your host.
The drawback is that you are limited by the serial (or any other bus) throughput. That means if you just want to fast-prototype something, it is a good solution, but when you need to write time-sensitive (or specialized) code, you need to create your own function, called by your own command, which probably has a custom response. Practically, you are writing a custom program, and the Arduino (and Lua) sketch becomes a mere string parser.
On the Galileo, the Arduino side is connected by serial port, as that is needed for sketch upload, so as long as Lua gives you some library to manipulate the serial port, you are good to go with this solution.
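To make the serial idea concrete, here is a minimal Lua-side sketch. The device path, the stty setup and the one-letter protocol ('d' for digitalWrite, 'a' for analogRead) are all assumptions for illustration; they must match whatever your Arduino-side sketch actually parses:
-- Configure the port first, e.g.: stty -F /dev/ttyS0 9600 raw
-- The actual device name varies by board and image.
local port = assert(io.open("/dev/ttyS0", "r+"))

-- Turn on the LED on pin 13: send a made-up "d13:1" command.
port:write("d13:1\n")
port:flush()

-- Request a reading of analog pin 0 and wait for one reply line.
port:write("a0\n")
port:flush()
local reply = port:read("*l")
print("sensor value:", reply)

port:close()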
I have an arduino with web connectivity; it is completely independent and runs the OS from the SD card (and is available to me through SSH). It is not connected to anything physical. I have a Lua script running on the device and want this script to perform actions like turning on a LED or reading and sending (to my server on the internet) measurements from a sensor. The thing I couldn't figure out is how to access the pins from the Lua script. Is there any Lua API that allows me to access these pins (like I do using a sketch)? Or can my Lua script run a sketch that does that?
Your arduino has 2 components: the OS chip and the ATmega chip. They are connected only by a serial bus. You have to write your own Lua script that asks the ATmega for the reading over serial; then you have to write a sketch on the ATmega which reads the serial, interprets the command, executes it and then sends the result back. There is a library that does this, called Firmata, but it is Processing (a derivative of Java).
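On the Galileo specifically there is an alternative worth noting: the Intel image is a full Linux, so the pins may also be reachable as ordinary sysfs GPIOs, with no serial protocol at all. A hedged sketch, assuming sysfs GPIO is available and noting that the Arduino-pin-to-gpioN mapping differs per board revision:
-- Helper: write one value to a sysfs file.
local function write_file(path, value)
    local f = assert(io.open(path, "w"))
    f:write(value)
    f:close()
end

local n = 3 -- placeholder GPIO number; look up the mapping for your pin and board revision

-- Exporting fails harmlessly if the GPIO is already exported, hence pcall.
pcall(write_file, "/sys/class/gpio/export", tostring(n))
write_file("/sys/class/gpio/gpio" .. n .. "/direction", "out")
write_file("/sys/class/gpio/gpio" .. n .. "/value", "1") -- drive the pin high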
| STACK_EXCHANGE |
Ubuntu Questions and Answers
Post Your Question
rcraftemur - profile page
Member Rank and Points
Tuesday, December 5, 2023
Total Points: 118
Total Questions: 119
Total Answers: 144
Location: Turks and Caicos Islands
Member since Sun, Mar 7, 2021
3 Years ago
rcraftemur doesn't have any followers yet
System Monitor and top reporting wildly different memory usage
Apply transparent background in GIMP
Where is Gmail Archive option in Evolution?
How do I run Windows 8 via virtualization?
I cannot create ad-hoc wireless networks and stay connected. How can I fix this?
How to show java plugin console
Incorrect keypresses on Lenovo y560p
How come 11.10 doesn't come with some of the applications from 11.04?
403 Forbidden problem with Apache
Are there any other sources for Ubuntu 9.10?
Is there a simple, plaintext, lightweight hosted web-based text editor?
How can I fix my keyboard layout?
Can I run SQL Server on Ubuntu?
Default filesystem permissions and access rights in 12.04?
Getting the PC speaker to beep
Automatically mark emails as read in Evolution 3.2.2?
Screen resolution and multiple monitors Kubuntu 11.10 Asus U46E BAL7
How can I see my files in 2nd partition of hdd?
How to set up passwordless SSH access for root user
How do I enable 132x50 text mode for console instead of 80x25?
Why does Ubuntu's default screenshot app not work properly?
is there a simple way to check a live CD for errors?
Can't upgrade 12.04 from 11.04
How do I clean or disable the memory cache?
Installing ubuntu 12.04 on macbook pro9,2
How to create user account via a bash script?
Why can't mount read files in "/etc/fstab.d/"?
Rename localhost folder. Gives empty screen
Are there any differences in graphics drivers from the "X-Swat" vs. "xorg-edgers" PPAs?
Where is the documentation for the .desktop unity launcher file format?
How to control an Ubuntu PC from another Ubuntu PC over Internet, using mobile broadband connections?
How to change settings in Splice?
How can I remove the items form grub boot list
convert video file to .ogg
Jabber client with support of Gtalk XEP-136
Is that possible to install language pack offlinely?
Remove Evolution integration from Gnome 3
Create table in iptables
find out the location of where a process was executed
Eve Online + Wine 1.6 (U 13.04): Offline mode
Can't install bitcoind in upgraded system due package conflicts
No internet connection on Ubuntu Server 12.04 LTS
Setting up a UK keyboard layout
Juju remove units stuck in dying state so I can start over?
Running ePsxe 1.9 on Ubuntu 13.10
What is the meaning of "ps -aef | grep $(pwd)" command?
how to always use rgrep with color
wget -O command not found
It is possible to run a Exe file in RedHat
Help identify confusing way Maven is running with no M2_HOME?
Configure a cron job to run only if laptop is plugged in?
Can use Unetbootin to install other Linux Distributions from within Ubuntu (like Fedora or Mint)?
"allrequestsallowed.com"... Hack attempt?
Which has better compatability, NVIDIA or Intel Graphics?
How to recover from the Dash Home appearing behind other running programs?
How to change default launchers in the unity dashboard?
how do I make nautilus to automatically suggest the folder 'Documents' for pdf files?
Acer Timeline X 3830TG Battery life
Nautilus: "Cannot load supported server method list. Please check your gvfs installation"
What changes does Unity's HUD offer
Is it safe to upgrade my web server from Ubuntu 11.04 to 11.10?
How do I install different (upgrade or downgrade) PHP version in still supported Ubuntu release?
Easiest way to accomplish dual boot goal
Can't use any kind of "super" key combinations for keybindings in Ubuntu 12.04 from the control center
12.04 rebooting after suspend to disk
How do I set up a linux proxy for my mac
cannot play VTC video file (.mov) using VLC player
Is there a native Picasa version for Linux? How can I install it?
How to remove desktop environments?
Where is the information about free space in a disk stored?
Run ubuntu applications from windows
Add console/text booting mode to grub menu
How to display free disk space in Lubuntu/LXDE?
HUD is not working after upgrade
How to reject an EULA when installing packages with apt?
Virtual machine hangs when I use Ubuntu
Black screen after installation of a command-line version of ubuntu 12.04
Is there a way to make dig report the actual name server rather than 127.0.0.1?
How to get soci.h?
Getting MTP to work with a Galaxy tab 2 7.0?
How do I fix the audio on my laptop. model Fujitsu B6220
C programming on Ubuntu
Mahjongg doesn't display scores
Virtualbox DNS stopped working on upgrade to 12.10
WebEx audio doesn't work on 12.04 LTS 32-bit
Why is fsck not working for me?
converting the OS with all software installed to iso file
Can't update because the word prox* is blocked
Set time limits for specific applications (such as games)
How can I fix errors installing D-Link DWA-121 wireless driver tarball?
Connection timeout for ssh server
How to printk() a s32 kernel data type
Remap shortcut to a single key cannot be used
How can I Resolve dpkg dependency?
1 suspicious file with size 140TB?
Can anyone point me to a guide for installing Ubuntu 12.04 LTS on to a USB Thumb Drive?
Changing distribution and keeping user files
How can I get Amazon Instant Video to work?
How to install VMWare converter on Ubuntu 12.04
Classpath unsets after restarting | OPCFW_CODE |